Ibrahim Diallo was reportedly fired by a machine. Recent news reports recounted the growing frustration he felt as his security pass stopped working, his login to the computer system was disabled, and finally security personnel escorted him from the building. His managers were unable to offer an explanation and powerless to overrule the system.
Some might think this is a taste of things to come as artificial intelligence gains more power over our lives. Personally, I drew the opposite conclusion. Diallo was sacked because a previous manager had not renewed his contract on the new computer system, and various automated systems then clicked into action. The problems were not caused by AI, but by its absence.
The systems showed no knowledge-based intelligence: they had no template designed to encapsulate expertise (such as human resources knowledge) in the form of rules, text and logical links. Equally, the systems showed no computational intelligence – the ability to learn from data sets – such as recognizing the factors that might lead to dismissal. In fact, it seems that Diallo was fired by an old-fashioned and poorly designed system, triggered by a human error. AI is certainly not to blame – and it might be the solution.
The conclusion I would draw from this experience is that some human resources functions are ripe for automation by AI, especially because, in this case, dumb automation proved so inflexible and ineffective. Most large organizations will have a staff handbook that could be coded as an automated expert system with explicit rules and templates. Many companies have created such systems in a range of areas involving specialist knowledge, not just human resources.
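As a rough sketch of what such an explicit-rules system might look like, the fragment below encodes two invented staff-manual rules in Python. The rule names, data fields and the review-flag behaviour are all hypothetical, not taken from any real HR product.

```python
# A minimal sketch of staff-manual rules encoded as an explicit expert
# system. Everything here is illustrative and hypothetical.

def contract_active(employee):
    """Rule 1: an employee is active only if a current contract exists."""
    return employee.get("contract_renewed", False)

def requires_human_review(employee):
    """Rule 2: any action that would revoke access must be confirmed
    by a manager rather than executed automatically."""
    return not contract_active(employee)

def decide_access(employee):
    # Instead of silently disabling accounts, the system flags the case.
    if requires_human_review(employee):
        return "flag_for_manager_review"
    return "access_granted"

employee = {"name": "Ibrahim Diallo", "contract_renewed": False}
print(decide_access(employee))  # flag_for_manager_review
```

The point of the sketch is the second rule: an expert system with an explicit "confirm before revoking" rule would have paused for a human decision instead of cascading into lockouts.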
But a more practical AI system could use a mix of techniques to make it smarter. The way the rules should be applied to the nuances of real situations could be learned from the company's HR records, in the same way that common law legal systems such as England's use precedents set by earlier cases. The system could revise its reasoning as more evidence became available in any given case, using what is known as "Bayesian updating". An AI concept called "fuzzy logic" could interpret situations that are not black and white, applying evidence and conclusions in varying degrees to avoid the kind of stark either/or decision-making that led to Diallo's dismissal.
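To make those two ideas concrete, here is a toy sketch of Bayesian updating and of a fuzzy membership function. All the probabilities, likelihoods and thresholds are invented for illustration; this shows the general techniques, not any actual HR system.

```python
# Toy illustration of Bayesian updating and fuzzy logic.
# All numbers below are invented.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / evidence

# Hypothesis: "this contract lapse is a clerical error, not a dismissal".
p = 0.5  # start undecided
# Evidence: the manager who owned the contract has left the company.
p = bayes_update(p, likelihood_if_true=0.9, likelihood_if_false=0.3)   # -> 0.75
# Evidence: the employee is still turning up for work every day.
p = bayes_update(p, likelihood_if_true=0.95, likelihood_if_false=0.2)  # -> ~0.93

def lapse_degree(days_since_expiry, grace_period=30):
    """Fuzzy membership (0 to 1) of 'contract has lapsed',
    rather than a hard yes/no flag."""
    if days_since_expiry <= 0:
        return 0.0
    return min(1.0, days_since_expiry / grace_period)

# Three days after expiry, the lapse is only 10% "true" -
# hardly grounds for revoking a security pass outright.
print(round(p, 2), lapse_degree(3))  # 0.93 0.1
```

With each new piece of evidence, the system's confidence that the lapse is a clerical error rises, and the fuzzy lapse score stays far from 1, so a sensible policy layered on top would escalate to a human rather than lock the employee out.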
The need for multiple approaches is sometimes overlooked in the current surge of enthusiasm for "deep learning" algorithms – complex artificial neural networks, inspired by the human brain, that can recognize patterns in large data sets. As this is all they can do, some experts are now arguing for a more balanced approach. Deep learning algorithms are excellent at pattern recognition, but they certainly do not show deep understanding.
Using AI in this way would probably reduce errors and, when errors did occur, the system could develop and share the lessons with corresponding AIs in other companies so that similar mistakes are avoided in the future. That is something that cannot be said of human systems. A good human manager will learn from his or her mistakes, but the next manager is likely to repeat them.
So what are the downsides? One of the most striking aspects of the Diallo experience is the lack of humanity shown. A decision was made, however wrongly, but it was never communicated or explained. An AI might make fewer mistakes, but would it be any better at communicating its decisions? I think the answer is probably not.
Losing your job and livelihood is a stressful and emotional moment for all but the most frivolous employees. It is a time when sensitivity and understanding are required. So, for my part, I would certainly find the human touch essential, no matter how convincing the AI chatbot.
A fired employee may feel they have been wronged and may wish to challenge the decision through a court of law. That situation raises the question of who was responsible for the original decision and who will defend it in law. Now is surely the time to address the legal and ethical questions raised by the rise of AI, while it is still in its infancy.
This article was originally published on The Conversation.