Amazon’s sexist hiring algorithm could still be better than a human

Amazon decided to shut down its artificial intelligence (AI) recruitment tool after discovering that it discriminated against women.

The company had built the tool to trawl the web, spot potential candidates and rate them from one to five stars. But the algorithm learned to systematically downgrade women's CVs for technical jobs such as software developer.

Although Amazon is at the forefront of AI technology, the company could not find a way to make its algorithm gender-neutral. Its failure reminds us that AI can develop bias from a variety of sources.

While there is a common belief that algorithms are supposed to be built without any of the bias or prejudice that colours human decision-making, the truth is that an algorithm can unintentionally learn bias from a variety of different sources.

Everything from the data used to train it, to the people who use it, and even seemingly unrelated factors, can all contribute to AI bias.

AI algorithms are trained to observe patterns in large data sets to help predict outcomes. In Amazon's case, its algorithm used all CVs submitted to the company over a ten-year period to learn how to spot the best candidates.
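
To make this concrete, here is a minimal, hypothetical sketch of that kind of system: a text classifier trained on past CVs labelled with historical hiring outcomes. The CV snippets, labels and library choices below are invented for illustration and are not Amazon's actual system.

```python
# A toy CV classifier trained on past (biased) hiring decisions.
# All CV snippets and labels here are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

cvs = [
    "software engineer, chess club captain",        # hired
    "software developer, women's coding society",   # not hired
    "backend developer, rugby team",                 # hired
    "data engineer, women's chess club",             # not hired
]
hired = [1, 0, 1, 0]  # labels reflect past human decisions, bias included

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(cvs, hired)

# The model scores new CVs by their similarity to past hires, so words
# that correlate with gender (here, "women's") become proxies that drag
# a CV's score down, even though gender was never an explicit feature.
print(model.predict_proba(["software engineer, women's rugby team"]))
```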

Given the low proportion of women working at the company, as at most technology firms, the algorithm quickly spotted male dominance and took it to be a factor in success.

Because the algorithm used the results of its own predictions to improve its accuracy, it became stuck in a pattern of sexism against female candidates.
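
As a toy illustration of that feedback loop (all numbers invented), consider a model whose own recommendations become the next round's training data. If the model over-weights the majority pattern even slightly, a small initial skew snowballs:

```python
# Feedback-loop toy model: each retraining round over-weights whichever
# group dominated the previous round's recommendations.
p = 0.60  # invented initial share of men among "successful" training examples

for round_number in range(1, 7):
    # Contrast-enhancing update: recommendations come out more skewed
    # than the data the model was trained on.
    p = p**2 / (p**2 + (1 - p) ** 2)
    print(f"after retraining round {round_number}: "
          f"share of men recommended = {p:.3f}")
```

Within a handful of rounds the share approaches 100%: the 'stuck in a pattern' behaviour described above.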

And since the data used to train it was at some point created by humans, the algorithm also inherited undesirable human traits, such as bias and discrimination, which have been a problem in recruitment for years.

Some algorithms are also designed to predict and deliver what users want to see. This is typically seen on social media or in online advertising, where users are shown content or adverts that an algorithm believes they will interact with. Similar patterns have also been reported in the recruitment industry.

One recruiter reported that while he was using a professional social network to find candidates, the AI learned to give him results most similar to the profiles he had initially engaged with.

As a result, entire groups of potential candidates were systematically removed from the recruitment process.

However, bias can also appear for other, unrelated reasons. A recent study into how an algorithm delivered adverts promoting STEM jobs showed that men were more likely to be shown the advert, not because men were more likely to click on it, but because women are more expensive to advertise to.

Since companies price adverts targeting women at a higher rate (women drive 70% to 80% of all consumer purchases), the algorithm chose to deliver adverts more often to men than to women because it was designed to maximise ad delivery while keeping costs down.
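
A back-of-the-envelope calculation shows how a cost-minimising delivery rule produces this skew. The cost figures below are hypothetical, not numbers from the study:

```python
# Hypothetical ad-delivery arithmetic: same budget, different audience costs.
budget = 100.0     # total ad spend in dollars
cpm_women = 25.0   # invented cost per 1,000 impressions shown to women
cpm_men = 15.0     # invented cost per 1,000 impressions shown to men

# A system told to maximise impressions per dollar favours the cheaper
# audience, regardless of what the advert is about.
impressions_women = budget / cpm_women * 1000
impressions_men = budget / cpm_men * 1000

print(f"${budget:.0f} buys {impressions_women:.0f} impressions among women "
      f"but {impressions_men:.0f} among men")
```

The gender-neutral goal of 'more impressions at lower cost' thus yields a gender-skewed outcome.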

But if an algorithm merely reflects the patterns in the data we feed it, the preferences of its users and the economic behaviour of its market, isn't it unfair to blame it for perpetuating our worst attributes?

We automatically expect an algorithm to make decisions without any discrimination, when this is rarely the case with humans. Even if an algorithm is biased, it may still be an improvement on the current status quo.

Recruitment algorithms have also been shown to exhibit bias (Credit: wavebreakmedia/Shutterstock)

To take full advantage of AI, it is important to investigate what would happen if we allowed it to make decisions without human intervention.

A 2018 study explored this scenario with bail decisions, using an algorithm trained on historical criminal data to predict the likelihood of criminals re-offending. In one projection, the authors were able to reduce crime rates by 25% while also reducing instances of discrimination against jailed inmates.
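
At its core, the decision rule in such a system can be sketched as a simple risk threshold. This is a deliberate simplification with an invented cutoff; the study's actual model and data are far richer:

```python
# Simplified bail rule: detain only when the predicted re-offending risk
# exceeds a chosen threshold. The 0.3 cutoff is invented for illustration.
def bail_decision(predicted_risk: float, threshold: float = 0.3) -> str:
    return "release" if predicted_risk < threshold else "detain"

# Moving the threshold trades off projected crime against jail numbers,
# which is the kind of trade-off the study's projections explore.
for risk in (0.05, 0.25, 0.45, 0.80):
    print(f"risk={risk:.2f} -> {bail_decision(risk)}")
```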

Yet the gains highlighted in this research would only occur if the algorithm actually made all the decisions. This is unlikely to happen in reality, as judges would probably prefer to choose whether or not to follow the algorithm's recommendations. Even if an algorithm is well designed, it becomes redundant if people choose not to use it.

Many of us already rely on algorithms for our daily decisions, from what to watch on Netflix to what to buy on Amazon. But research shows that people lose confidence in algorithms faster than in humans when they see them make a mistake, even when the algorithm performs better overall.

For example, if your GPS suggests an alternative route to avoid traffic and that route ends up taking longer than expected, you are likely to stop relying on your GPS in the future.

But if taking the alternative route was your own decision, it is unlikely you will stop trusting your own judgement. A follow-up study on overcoming algorithm aversion even showed that people were more likely to use an algorithm and accept its errors if they were given the chance to modify it themselves, even if it meant making it perform imperfectly.

While humans may be quick to lose confidence in flawed algorithms, many of us tend to trust machines more if they have human-like characteristics. According to a study of self-driving cars, people were more likely to trust the car, and believed it would perform better, if the vehicle's augmented system had a name, a gender and a human-sounding voice.

However, if machines become very human-like, but not quite, people often find them creepy, which could affect their trust in them.

Even though we may not appreciate the image that algorithms reflect of our society, it seems we still want to live with them and to make them look and act like us. And if that's the case, surely algorithms can be allowed to make mistakes too?

This article is republished from The Conversation by Maude Lavanchy, Research Assistant at IMD Business School, under a Creative Commons licence. Read the original article.
