
We need an ‘AI sidekick’ to fight malicious AI

"Now that we realize that our brains can be hacked, we need an antivirus for the brain." These were the words of Yuval Noah Harari, renowned historian and virulent critic of Silicon Valley.

The phrase came from a recent Wired interview in which Nick Thompson sat down with Harari and Tristan Harris, Google's former design ethicist, and referred to the way technology companies use artificial intelligence algorithms to manipulate user behavior for profit.

For example, if you view NBA game recap videos, YouTube will recommend more NBA videos. The more videos you watch, the more ads YouTube can show you, and the more ad impressions it earns.

This is essentially the business model used by all "free" applications. They try to keep you glued to the screen without worrying about the impact it will have on your physical and mental health.

And they use the most advanced technologies and the brightest minds to achieve that goal. For example, they use deep learning and other artificial intelligence techniques to monitor your behavior and compare it to that of millions of other users in order to serve you highly customized recommendations that you can hardly resist.
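
To make this concrete, here is a minimal sketch of the kind of user-similarity logic behind such recommendation engines. This is an illustration, not any platform's actual code: it compares your watch history against other users' histories and suggests what your nearest neighbors watched. All names and numbers are hypothetical.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two sparse watch-count vectors (dicts)."""
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(you, others, top_k=3):
    """Suggest unseen videos, weighted by how similar their watchers are to you."""
    ranked = sorted(others, key=lambda u: cosine_similarity(you, u), reverse=True)
    scores = {}
    for user in ranked[:top_k]:
        sim = cosine_similarity(you, user)
        for video, count in user.items():
            if video not in you:  # only recommend what you haven't seen
                scores[video] = scores.get(video, 0) + sim * count
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical watch-count vectors: video id -> times watched
you = {"nba_recap_1": 3, "nba_recap_2": 2}
others = [{"nba_recap_1": 2, "nba_recap_3": 4},
          {"cooking_101": 5, "nba_recap_2": 1}]
print(recommend(you, others))  # NBA videos rank first
```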

So yes, your brain can be hacked. But how do you build the antivirus Harari mentions? "It can work on the basis of the same technology," Harari said. "Let's say you have an AI sidekick that watches you all the time, 24 hours a day: what you write, what you see, everything.

But this AI serves you; it has this fiduciary responsibility. And it gets to know your weaknesses, and by knowing your weaknesses, it can protect you from other agents trying to hack you and exploit those weaknesses."

While Harari presented the concept of the AI sidekick, Harris, a veteran of the tech industry, nodded in agreement, which says something about how realistic the idea is.

For example, if you have a soft spot for funny cat videos and can't help but watch them, your AI sidekick would need to step in when it "senses" that a smart AI system is trying to exploit that weakness, and show you a message about a blocked threat, Harari says.

In summary, Harari's AI sidekick must accomplish the following tasks:

It must be able to monitor all your activities.
It must be able to identify your weaknesses and figure out what is good for you.
It must be able to detect and block AI agents that try to exploit your weaknesses.

In this post, we'll examine what it would take to create Harari's AI sidekick, and whether it is possible with contemporary technology.

An AI sidekick that monitors all your activities

Harari's first requirement for the protective AI sidekick is that it sees everything you see. This is a fair premise because, as we know, AI is very different from human intelligence and relies heavily on quality data.

A human "buddy" – for example, an older relative or brother – would be able to distinguish between true and false based on their personal experiences. They have an abstract model of the world and a general perception of the consequences of human actions. For example, they will be able to predict what will happen if you watch too much television and exercise little.

Unlike humans, artificial intelligence algorithms start with a blank slate and have no notion of human experience. The current state of the art in artificial intelligence is deep learning, a technique that is particularly effective at finding patterns and correlations in large datasets.

As a rule of thumb, the more quality data you provide to a deep learning algorithm, the better it becomes at classifying new data and making predictions.

Now the question is how to create a deep learning system that can monitor everything you do. Currently, there is none.

With the explosion of cloud and Internet of Things (IoT) technologies, tech companies, cybercriminals, and government agencies have many new ways to open windows into our daily lives, collect data, and monitor our activities. Thankfully, however, none of them has access to all of our personal data.

Google has a very broad view of your online data, including your search and browsing history, the applications you install on your Android devices, your Gmail data, your Google Docs content, and your YouTube viewing history.

However, Google does not have access to your Facebook data, which includes your friends, likes, clicks, and other engagement data.

Facebook has access to some of the sites you visit, but it does not have access to your Amazon shopping and shipping data. Thanks to its popular Echo smart speaker, Amazon knows a lot about your home activities and preferences, but it does not have access to your Google data.

The fact is, even though we hand a lot of information to technology companies, no single company has access to all of it. Moreover, a lot of information has not been digitized yet.

For example, Harari frequently discusses how AI could quantify your reaction to a given image by monitoring changes in your pulse rate as you view it.

But how would they do that? Harari says tech companies won't necessarily need a wearable device to capture your heart rate; they could do it with a high-resolution video stream of your face, monitoring changes in your retina. But that hasn't happened yet.

Also, many of our online activities are influenced by our experiences in the physical world, such as conversations with colleagues or things we heard in class.

It might be a billboard I saw while waiting for the bus, or a conversation between two people I absentmindedly overheard on the subway. It could have to do with the quality of my sleep the night before, or the amount of carbs I had for breakfast.

Now the question is how to give an AI agent access to all this data. With current technology, you would need a combination of hardware and software.

For example, you would need a smartwatch or fitness tracker so your AI companion can monitor your vital signs during your various activities. You would need an eye-tracking headset so it can correlate your vital signs with what you see.

Your AI assistant would also have to live on your computing devices, your smartphone and your laptop, where it could record relevant data on all your online activities. By bringing all this data together, your AI companion would be better positioned to identify problematic behavior patterns.
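
As a rough illustration of what "bringing all this data together" could look like in software, here is a hedged sketch of a unified event log that merges hypothetical streams from a smartwatch, an eye tracker, and a browser into a single timeline. The sources and field names are assumptions, not any real device API.

```python
from dataclasses import dataclass, field
import heapq

@dataclass
class Event:
    timestamp: float   # seconds since epoch
    source: str        # e.g. "smartwatch", "eye_tracker", "browser"
    kind: str          # e.g. "heart_rate", "gaze", "page_view"
    payload: dict = field(default_factory=dict)

def merge_streams(*streams):
    """Merge several time-ordered event streams into one timeline."""
    return heapq.merge(*streams, key=lambda e: e.timestamp)

# Hypothetical readings from three separate sources
watch = [Event(100.0, "smartwatch", "heart_rate", {"bpm": 88})]
eyes = [Event(100.2, "eye_tracker", "gaze", {"target": "video_player"})]
browser = [Event(99.8, "browser", "page_view", {"url": "youtube.com/watch"})]

for event in merge_streams(watch, eyes, browser):
    print(event.timestamp, event.source, event.kind)
```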

These requirements pose two problems. First, the hardware costs alone would limit the AI sidekick to a small audience, probably the wealthy Silicon Valley elite who understand the value of such an assistant and are willing to bear the financial costs.

However, as studies have shown, the people most exposed to these harms are not the wealthy elite but the poorer classes, who are less informed about the harmful effects of screen time. They would not be able to afford the AI sidekick.

The second problem is storing all the data collected about the user. Having so much information in one place can paint a good picture of your behavior. But it would also give anyone who gains unauthorized access to it incredible leverage to use against you for malicious purposes.

Whom would you trust with your most sensitive data? Google? Facebook? Amazon? None of these companies has a great reputation for keeping the best interests of its users in mind. Harari mentions that your AI sidekick would have a fiduciary duty. But what commercial enterprise would be willing to bear the costs of storing and processing your data without getting something in return?

Should the government keep your data? And what would prevent government authorities from using it for nefarious purposes such as surveillance and manipulation?

We might try a combination of blockchain and cloud services to ensure that you alone fully control your data, and we could use decentralized AI models to prevent any single entity from having exclusive access to it. But that still would not eliminate the costs of storing the data.
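
One possible reading of "you alone fully control your data" is client-side encryption: activity records are encrypted on your device before they ever reach the cloud, so whoever hosts them cannot read them without your key. Here is a minimal sketch using Python's cryptography package; the remote store is just a dictionary standing in for a cloud or blockchain-backed service.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# The key never leaves the user's device; whoever hosts the
# ciphertext (cloud, blockchain, non-profit) cannot read it.
user_key = Fernet.generate_key()
cipher = Fernet(user_key)

remote_store = {}  # stand-in for an untrusted remote store

def upload(record_id, record_bytes):
    remote_store[record_id] = cipher.encrypt(record_bytes)

def download(record_id):
    return cipher.decrypt(remote_store[record_id])

upload("2019-03-01-activity", b'{"video_minutes": 47}')
print(remote_store["2019-03-01-activity"][:16])  # opaque to the host
print(download("2019-03-01-activity"))           # readable only with the key
```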

The entity storing it could be a non-profit organization with significant funding from government and the private sector. Alternatively, it could opt for a paid business model, which basically means you would pay a subscription fee to have your data stored and processed. But this would make the AI sidekick even more expensive and less accessible to the underprivileged classes who are most vulnerable.

Final verdict: An AI companion that can collect all your data is not impossible, but it is very difficult and expensive and will not be available to everyone.

An AI sidekick able to detect your weaknesses

This is where Harari's proposal faces its biggest challenge. How can your sidekick distinguish what is good or bad for you? The short answer is: it can't.

Current blends of artificial intelligence are considered narrow AI, which means they are optimized for specific tasks such as classifying images, recognizing speech, detecting abnormal internet traffic, or suggesting content to users.

Detecting human weaknesses is no small task. There are too many parameters, too many moving parts. Each person is unique, shaped by countless factors and experiences. A repeated activity that proves detrimental to one person might be beneficial to another. Moreover, weaknesses do not necessarily manifest themselves in repeated actions.

Here is what deep learning can do for you: it can find trends in your actions and predict your behavior. This is how AI-based recommendation systems keep you engaged on Facebook, YouTube, and other online applications.

For example, your AI companion can learn that you are very interested in diet videos, or that you read too many liberal or conservative news sources. It may even be able to correlate these data points with other information, such as the profiles of your classmates or colleagues.
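
Here is a hedged sketch of the kind of trend-spotting that is well within reach of current techniques: a simple counter over a tagged activity log that flags topics dominating your recent browsing. The topic tags and threshold are invented for illustration; a real system would infer topics from the content itself.

```python
from collections import Counter

def dominant_topics(activity_log, threshold=0.3):
    """Flag topics that make up more than `threshold` of recent activity."""
    counts = Counter(topic for _, topic in activity_log)
    total = sum(counts.values())
    return [(topic, count / total)
            for topic, count in counts.most_common()
            if count / total > threshold]

# Hypothetical (timestamp, topic) pairs from a week of browsing
log = [(1, "diet"), (2, "diet"), (3, "news"), (4, "diet"), (5, "sports")]
print(dominant_topics(log))  # [('diet', 0.6)]
```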

It could associate your actions with other experiences you had during the day, such as coming across an ad at a bus stop. But detecting patterns does not necessarily amount to "detecting weaknesses."

It cannot determine which types of behavior are causing you harm, especially since many harms are long-term and may not necessarily show up as changes in your vital signs or other distinguishable symptoms.

These are the kinds of things that require human judgment, which deep learning sorely lacks. Detecting human weaknesses falls within the realm of general AI, also known as artificial general intelligence or human-level AI. But general AI is still the stuff of myths and science-fiction novels and movies, even though some parties like to overhype the capabilities of contemporary AI.

Theoretically, you could hire a group of humans to label repeated patterns and flag those that are detrimental to users. But this would require a huge effort involving cooperation between engineers, psychologists, anthropologists, and other experts, because mental health patterns differ from one population to the next depending on history, culture, religion, and many other factors.

What you would have, at best, is an AI agent that can detect patterns in your behavior and surface them to you, or to a "human sidekick" who can distinguish which of them can harm you. In itself, this is a fairly interesting and productive use of current recommendation systems. In fact, several researchers are working on AI agents that follow codes of ethics and rules instead of seeking maximum engagement.

An AI sidekick that can prevent other AIs from hacking your brain

Blocking AI algorithms that take advantage of your weaknesses will largely depend on knowing those weaknesses. So if you can achieve goal number two, achieving the third goal will not be very difficult.

But we would have to tell our assistant what exactly counts as "hacking your brain." For example, watching one cat video does not matter, but if you watch three in a row, or spend 30 minutes watching them, your brain has been hacked.

Therefore, blocking brain-hacking attempts by malicious AI algorithms might not be as simple as blocking malware threats. But your AI assistant could, for example, warn you that you have been doing the same thing for the past 30 minutes. Or better yet, it could alert your trusted human companion and let them decide whether it is time to interrupt your current activity.
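
As a minimal sketch of such a rule-based watchdog, the snippet below applies the thresholds from the example above: three consecutive videos on one topic, or 30 minutes spent on it. The session format is an assumption; a real assistant would sit on a much richer activity stream.

```python
MAX_STREAK = 3    # consecutive videos on one topic
MAX_MINUTES = 30  # total minutes on one topic

def check_session(session):
    """session: list of (topic, minutes) video views, in viewing order."""
    streak, streak_topic, totals = 0, None, {}
    for topic, minutes in session:
        totals[topic] = totals.get(topic, 0) + minutes
        streak = streak + 1 if topic == streak_topic else 1
        streak_topic = topic
        if streak >= MAX_STREAK:
            return f"Warning: {streak} '{topic}' videos in a row."
        if totals[topic] >= MAX_MINUTES:
            return f"Warning: {totals[topic]} minutes spent on '{topic}'."
    return None  # nothing suspicious

print(check_session([("cats", 5), ("cats", 12), ("cats", 4)]))
# -> Warning: 3 'cats' videos in a row.
```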

Moreover, your AI companion could inform you, or your trusted human assistant, that it thinks the reason you have been searching for and reading reviews of a certain gadget may be linked to several online or offline advertisements you saw earlier, or a conversation you had at the water cooler at work.

This might give you an idea of the influences you have absentmindedly picked up and may not be aware of. It could also help in cases where influence and brain hacking do not involve repeated actions.

For example, if you are about to buy a particular item for the first time, your AI companion could warn you that you have been bombarded with ads for that item and suggest you think twice before making the purchase.
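
Here is a hedged sketch of that purchase check: before a first-time purchase is confirmed, the assistant counts recent ad impressions for the item and nudges you if you were heavily exposed. The impression log and the cutoff are hypothetical.

```python
from datetime import datetime, timedelta

AD_CUTOFF = 5  # impressions in the past week that trigger a nudge

def purchase_nudge(item, ad_log, now=None):
    """ad_log: list of (timestamp, item) ad impressions the user has seen."""
    now = now or datetime.now()
    week_ago = now - timedelta(days=7)
    seen = sum(1 for ts, ad_item in ad_log if ad_item == item and ts > week_ago)
    if seen >= AD_CUTOFF:
        return f"You saw {seen} ads for '{item}' this week. Sleep on it?"
    return None

now = datetime(2019, 3, 1)
impressions = [(now - timedelta(days=d), "gadget-x") for d in range(6)]
print(purchase_nudge("gadget-x", impressions, now=now))
```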

Your AI companion could also give you a detailed report on your behavior, like the new Screen Time feature in iOS, which shows how much time you spend on your phone and which applications you use. Similarly, your AI assistant could show you how much of your daily activity different topics occupy.
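
As a sketch of what such a report could look like, here is a small aggregation over a hypothetical usage log, in the spirit of Screen Time's per-app totals; the app names and fields are invented.

```python
from collections import defaultdict

def daily_report(usage_log):
    """usage_log: list of (app, minutes) entries for one day."""
    totals = defaultdict(int)
    for app, minutes in usage_log:
        totals[app] += minutes
    day_total = sum(totals.values())
    lines = [f"Total screen time: {day_total} min"]
    for app, minutes in sorted(totals.items(), key=lambda kv: -kv[1]):
        lines.append(f"  {app}: {minutes} min ({100 * minutes // day_total}%)")
    return "\n".join(lines)

print(daily_report([("YouTube", 45), ("Mail", 10), ("YouTube", 20), ("News", 25)]))
```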

But you, or a trusted friend or relative, will ultimately have to decide which activities to block or allow.

Final Verdict

Harari's AI sidekick is an interesting idea. At its heart, it suggests modifying current AI-based recommendation models to protect users against brain hacking. However, as we have seen, there are real obstacles to creating such a companion.

First, creating an AI system that can monitor all your activities is expensive. And second, protecting the human mind from harm is something that requires human intelligence.

That said, I'm not saying that artificial intelligence can't help protect you against brain hacking. If we look at it from the perspective of augmented intelligence, there could be a middle ground that is both accessible to everyone and helps better equip us all against AI manipulation.

The idea behind augmented intelligence is that AI agents are meant to complement and enhance human skills and decisions, not to fully automate them and remove humans from the loop. This means your AI assistant is supposed to educate you about your habits and let a human (whether that's yourself, a sibling, a friend, or a parent) decide what is best for you.

With this in mind, you could create an AI agent that requires less data. You could ditch the wearables and smart glasses that record everything you do offline, and limit your AI assistant to monitoring your online activities on your mobile devices and computers. It could then give you reports and insights about your habits and behaviors and help you make better decisions.

This would make the AI assistant much more affordable and accessible to a wider audience, even though it might not be able to provide as much insight as one with access to wearable data. You would still have to consider storage and processing costs, but these would be much lower and could probably be covered by a government public health grant.

AI assistants can be a good tool for helping detect brain hacking and harmful online behavior. But they cannot replace human judgment. It is up to you and your loved ones to decide what is best for you.

This story is republished from TechTalks, the blog that explores how technology is solving problems... and creating new ones.
