
DeepMind’s new robots learned how to teach themselves

The minute hand on the robot-apocalypse clock is getting closer to midnight. DeepMind, Google's sister company responsible for some of the smartest artificial intelligence on the planet, just taught machines how to figure things out for themselves.

Robots are not very good at exploring on their own. An AI that exists only to analyze data, such as a neural network that decides whether something is a hot dog or not, has relatively little to focus on compared with the near-infinite number of things a physical robot has to make sense of.

To solve this problem, DeepMind has built a new learning paradigm for AI-powered robots called Scheduled Auxiliary Control (SAC-X). The paradigm gives the robot a simple goal and a reward for completing it.

Credits: DeepMind

According to a blog post by DeepMind:

The auxiliary tasks we define follow a general principle: they encourage the agent to explore its sensor space. For example, activating a touch sensor in its fingers, sensing a force in its wrist, maximizing a joint angle in its proprioceptive sensors, or forcing the movement of an object in its visual sensors.

The researchers do not tell the robot how to complete the task; they simply equip it with sensors (which start out switched off) and let it fumble around until things go right.

Credits: DeepMind

By exploring its environment and testing what its sensors can do, the robot can eventually earn its reward: a single point. If it fails, it gets nothing.
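To make the idea a little more concrete, here is a minimal sketch in Python of the reward structure described above. It is not DeepMind's code, and every environment, agent, and sensor name in it is hypothetical: a handful of auxiliary rewards that fire whenever a sensor notices something, a sparse main reward that pays a single point only when the whole task is done, and a simple scheduler that decides which reward the agent practices next.

```python
import random

# Hypothetical auxiliary rewards, in the spirit of the principle DeepMind
# describes: each one simply pays the agent for making a sensor notice
# something. The sensor names are made up for illustration.
AUXILIARY_TASKS = {
    "touch":       lambda obs: 1.0 if obs["finger_touch"] else 0.0,
    "wrist_force": lambda obs: 1.0 if obs["wrist_force"] > 0.5 else 0.0,
    "move_object": lambda obs: 1.0 if obs["object_moved"] else 0.0,
}

def main_task_reward(obs):
    # The sparse "real" reward: a single point when the full task is done
    # (say, a block has been stacked), and nothing at all otherwise.
    return 1.0 if obs["block_stacked"] else 0.0

class RandomEnv:
    """Stand-in environment that returns random sensor readings."""
    def reset(self):
        return self.step(None)
    def step(self, action):
        return {
            "finger_touch": random.random() < 0.10,
            "wrist_force": random.random(),
            "object_moved": random.random() < 0.05,
            "block_stacked": random.random() < 0.001,
        }

class RandomAgent:
    """Stand-in agent; a real SAC-X agent would learn a policy per task."""
    def act(self, obs, task_name):
        return None  # a real agent would output motor commands here
    def learn(self, obs, action, reward, task_name):
        pass  # a real agent would update its policy from the reward

def run_episode(env, agent, task_name, reward_fn, steps=200):
    """Let the agent fumble around, scoring it only on the chosen task."""
    obs, total = env.reset(), 0.0
    for _ in range(steps):
        action = agent.act(obs, task_name)
        obs = env.step(action)
        reward = reward_fn(obs)
        agent.learn(obs, action, reward, task_name)
        total += reward
    return total

def train(env, agent, episodes=1000):
    # Toy "scheduler": mostly practice auxiliary tasks, occasionally attempt
    # the sparse main task. This fixed random choice is only for illustration.
    returns = {name: 0.0 for name in list(AUXILIARY_TASKS) + ["main"]}
    for _ in range(episodes):
        if random.random() < 0.7:
            name = random.choice(list(AUXILIARY_TASKS))
            returns[name] += run_episode(env, agent, name, AUXILIARY_TASKS[name])
        else:
            returns["main"] += run_episode(env, agent, "main", main_task_reward)
    return returns

if __name__ == "__main__":
    print(train(RandomEnv(), RandomAgent(), episodes=50))
```

In DeepMind's setup the schedule can itself be learned rather than picked at random, so the agent figures out which auxiliary skills are worth practicing on the way to that single, hard-to-reach point.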

Watching a robot arm fumble around in a box may not seem impressive at first, especially if you've seen similar robots building furniture. But the amazing part is that this particular machine isn't following a program or doing anything it was explicitly designed to do. It's just a robot trying to figure out how to make a human happy.

And this work matters: it will change the world if DeepMind or another AI company can perfect it. Right now, there is no robot that could walk (or roll) into an unfamiliar house and tidy it up. Making a bed, emptying trash cans, or putting on a pot of coffee are extremely complex tasks for AI. There is an almost infinite number of ways each task could be carried out, and even more if the robot is allowed to use flamethrowers and file insurance claims.

At the end of the day, we are still a long way from Rosie the Robot from "The Jetsons." But if DeepMind has anything to say about it, we'll get there. And it all starts with a robot arm learning to play with blocks on its own.

Want to know more about AI from the world's best experts? Join our Machine:Learners track at TNW Conference 2018. Buy your tickets here.
