Google teaches AI to fool humans so it can learn from our mistakes

Tricking robots into seeing things that aren't there, or miscategorizing them entirely, is all fun and games until someone gets decapitated because a car's autopilot mistook a white truck for a cloud.

To avoid such tragedies, it is critically important that artificial intelligence researchers understand the very nature of these adversarial attacks and the accidents they can cause. That means computers will have to get smarter, which is why Google is studying the human brain and neural networks side by side.

Until now, neuroscience has informed the field of artificial intelligence through efforts such as the creation of neural networks. The idea is that whatever cannot fool a person should not be able to fool an AI either.

A Google research team that includes Ian Goodfellow, who literally wrote the book on deep learning, recently published its white paper "Adversarial Examples that Fool both Human and Computer Vision." The work highlights that the methods used to trick an AI into misclassifying an image do not work on the human brain, and it postulates that this insight can be used to make neural networks more resilient.

Last year, when a group of MIT researchers mounted an adversarial attack against a Google AI, all they had to do was embed some simple code in an image. In doing so, the team convinced an advanced neural network that it was looking at a rifle when it was in fact seeing a turtle. Most children over the age of three would have known the difference.

Credits: MIT
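
For intuition, here is a minimal sketch of the fast gradient sign method (FGSM), one well-known recipe for crafting this kind of adversarial image. The toy classifier, random input, and labels below are stand-ins for illustration, not the actual model or method MIT used.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for an image classifier; NOT the network MIT actually attacked.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # fake 32x32 RGB "photo"
true_label = torch.tensor([0])                        # pretend class 0 = "turtle"

# Gradient of the loss with respect to the input pixels, not the weights.
loss = loss_fn(model(image), true_label)
loss.backward()

# Step every pixel a tiny amount in the direction that increases the loss.
epsilon = 0.1  # small next to the 0-1 pixel range, so the edit stays subtle
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

On an untrained toy model the label may or may not flip, but against a real trained network a pixel budget this small is often enough to change the prediction entirely.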

The problem is not with Google's AI, but with a simple limitation all computers share: a lack of eyeballs. Machines do not "see" the world; they merely process images, which makes it possible to deceive them by manipulating the parts of an image that people cannot perceive.
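
To make that concrete, here is a back-of-the-envelope sketch of how small these pixel edits are. The 224x224 photo and the plus-or-minus-two-level perturbation budget are made-up values for illustration, not figures from any of the attacks described here.

```python
import numpy as np

rng = np.random.default_rng(0)

original = rng.integers(0, 256, size=(224, 224, 3)).astype(np.int16)  # fake photo
noise = rng.integers(-2, 3, size=original.shape)   # edits of at most +/-2 levels
perturbed = np.clip(original + noise, 0, 255)

max_change = np.abs(perturbed - original).max()
print(f"largest per-pixel change: {max_change} of 255 intensity levels")
# Under 1% per channel: invisible on a photograph, yet when those edits are
# chosen adversarially rather than at random, they can flip a classifier.
```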

To solve the problem, Google is trying to understand why humans are resistant to certain forms of image manipulation. Perhaps most importantly, it is trying to discern exactly what it takes to deceive a person with an image.

According to the white paper published by the team:

If we knew conclusively that the human brain could resist a certain class of adversarial examples, this would provide an existence proof for a similar mechanism in machine learning security.

Credits: Google. Left: the original image of a cat. Right: the same image, manipulated to fool humans into believing they are seeing a dog.

To make the cat look like a dog, the researchers zoomed in and tampered with some of the details. Chances are the image passes at a glance, but if you look at it for more than a few seconds, it is obviously doctored. The researchers' point is that humans can be fooled too, but only in certain ways.

Credits: Google

Right now, humans are the undisputed champions of image recognition. But in 2018, fully autonomous cars will begin driving on roads around the world. An AI that can "see" the world and all the objects in it is a matter of life and death.
