
We should teach human rights law to software engineers

Artificial intelligence (AI) is making its way into more and more aspects of our daily lives. It powers the smart assistants on our mobile phones and our "virtual home helpers", drives algorithms designed to improve our health diagnoses, and underpins the predictive policing tools that police forces use to fight crime.

Each of these examples raises potential human rights problems. Predictive policing, if not properly designed, may result in discrimination based on race, sex or ethnicity.

Confidentiality and data protection rules apply to information about our health. Similarly, the systematic recording and use of our smartphones' geographical location may breach privacy and data protection rules, and raises concerns about digital surveillance by public authorities.

Software engineers design the algorithms underlying all of these systems. It is software engineers who enable smart assistants to answer our questions more accurately, help physicians detect health risks earlier, and allow police officers to identify pockets of rising crime risk.

Software engineers generally receive no training in human rights law. Yet with every line of code they write, they may be interpreting, applying and even violating key concepts of human rights law – without even knowing it.

That is why it is crucial that we teach human rights law to software engineers. Earlier this year, a new EU regulation forced companies to be more open with consumers about the data they hold on them. Known as the General Data Protection Regulation (GDPR), you may remember it as a flood of desperate emails asking you to opt in to staying on various databases.

The GDPR has tightened restrictions on what organizations can do with your data and extended individuals' rights to access and control their data. These moves towards privacy by design and data protection by design offer great opportunities for integrating legal frameworks with technology. On their own, however, they are not enough.

For example, better knowledge of human rights law can help software developers understand what indirect discrimination is and why it is prohibited by law. (Discrimination on any ground such as sex, race, color, language, religion, political or other opinion, national or social origin, property, association with a national minority, birth or other status is prohibited under Article 14 of the European Convention on Human Rights.)

Direct discrimination occurs when an individual is treated less favorably on one or more of these protected grounds. Indirect discrimination occurs when a seemingly neutral rule results in less favorable treatment of an individual or group of individuals.
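To make the distinction concrete, here is a minimal sketch in Python of how a developer might test a system's outcomes for indirect discrimination. The loan data is entirely hypothetical, and the 0.8 "four-fifths" benchmark is borrowed from US employment practice purely as an illustrative threshold, not as a statement of European law:

```python
# Minimal sketch: measuring potential indirect discrimination in a
# model's outcomes via a "disparate impact" ratio. The data, group
# labels and 0.8 threshold are all hypothetical/illustrative.

def disparate_impact(outcomes, groups, protected, reference):
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    def favorable_rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)
    return favorable_rate(protected) / favorable_rate(reference)

# Hypothetical loan decisions (1 = approved, 0 = denied) for two districts.
outcomes = [1, 0, 0, 1, 0, 1, 1, 1, 1, 0]
groups = ["district_a", "district_a", "district_a", "district_a", "district_a",
          "district_b", "district_b", "district_b", "district_b", "district_b"]

ratio = disparate_impact(outcomes, groups, "district_a", "district_b")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # a common (though not legally binding) warning threshold
    print("A seemingly neutral rule may be treating one group less favorably.")
```

The point is not the specific threshold but the habit: a rule that never mentions a protected ground can still produce measurably less favorable outcomes for a protected group.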

Likewise, understanding the subtleties of the right to a fair trial, and its corollary the presumption of innocence, can help engineers make more informed choices when designing algorithms.

This could help them avoid building algorithms that assume the number of police arrests in a multi-ethnic neighborhood corresponds to the number of actual criminal convictions.

More importantly, it would help them develop unbiased data sets whose features do not act as substitutes for discrimination based on ethnicity or race.

For example, wealth and income data combined with geographic location data can serve as an indirect indicator of membership of a certain ethnic group, if that group tends to be concentrated in a particular district.
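One way a developer might catch this is a "proxy audit": if ostensibly neutral features such as income and district can predict a protected attribute well above chance, they are likely acting as a substitute for it. The sketch below uses synthetic data and hypothetical feature names, and assumes NumPy and scikit-learn are available:

```python
# Minimal proxy-audit sketch: can "neutral" features recover a protected
# attribute? All data below is synthetic; feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1000

# Synthetic population: the protected group is concentrated in one
# district, mimicking the residential pattern described above.
group = rng.integers(0, 2, size=n)                          # protected attribute (0/1)
district = (0.8 * group + rng.random(n) > 0.5).astype(int)  # correlated with group
income = rng.normal(40_000 - 5_000 * group, 8_000)          # also correlated with group

X = np.column_stack([district, income])

# If these features predict the protected attribute far above the ~50%
# base rate, they are likely acting as proxies for it.
model = make_pipeline(StandardScaler(), LogisticRegression())
accuracy = cross_val_score(model, X, group, cv=5).mean()
print(f"Protected attribute recoverable from 'neutral' features: {accuracy:.0%}")
```

A high recovery rate does not prove discrimination on its own, but it flags exactly the kind of indirect indicator the law is concerned with.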

Legal Code

Similarly, a better understanding of how human rights legal frameworks function could stimulate the creation of technological solutions that strengthen compliance with the law.

For example, there is a compelling need for technological solutions that allow individuals to easily challenge AI-based decisions made by public authorities that directly affect them. This could be the case for parents wrongly identified as potential child abusers by opaque algorithms used by local authorities.


Such solutions could also be of interest to the private sector. For example, decisions about insurance premiums and loans are often determined by profiling and scoring algorithms hidden inside black boxes. Full transparency and disclosure of these algorithms may be neither possible nor desirable, given the nature of these business models.

A due process-compliant solution could instead allow individuals to easily challenge such decisions before accepting an offer, along the lines sketched below.
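One hedged illustration of such a mechanism is a counterfactual explanation: without disclosing the model's internals, the system tells the individual the smallest change that would have flipped the decision, giving them concrete grounds to contest it. The scoring rule, threshold and feature names below are entirely hypothetical:

```python
# Minimal sketch of a due-process mechanism: a counterfactual explanation.
# The black-box scorer and the feature names are hypothetical stand-ins.

def score(applicant):
    """Stand-in for a proprietary black-box credit scorer."""
    return 0.4 * applicant["income"] / 1000 + 0.6 * applicant["years_employed"] * 5

def counterfactual(applicant, feature, threshold, step=1.0, max_steps=1000):
    """Find the smallest increase in `feature` that pushes the score past `threshold`."""
    candidate = dict(applicant)
    for _ in range(max_steps):
        if score(candidate) >= threshold:
            return feature, candidate[feature] - applicant[feature]
        candidate[feature] += step
    return None  # no achievable counterfactual found for this feature

applicant = {"income": 30_000, "years_employed": 2}
print("Current score:", score(applicant))
feature, needed = counterfactual(applicant, "income", threshold=30, step=500)
print(f"Decision would flip if {feature} increased by {needed:.0f}")
```

Nothing proprietary is revealed, yet the individual learns exactly what drove the outcome and can dispute it before accepting an offer.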

As our contemporary societies move inexorably towards intensive AI applications, we must keep in mind that the human beings behind the AI's curtain have the power to make (erroneous) decisions that affect us all.

It is high time we devoted resources and energy to educating them not only in cutting-edge technologies, but also in the human rights rules that apply to their work.

This article by Ana Beduschi, Lecturer in Law at the University of Exeter, is republished from The Conversation under a Creative Commons license. Read the original article.
