AI experts worry that the field is on the brink of a scenario similar to the bursting of the Internet bubble. This is what is called an AI winter. And, if that happens, it could leave a lot of researchers, investors, and entrepreneurs out in the cold.
Such a scenario could occur for a number of reasons, and its effects could vary greatly depending on how overextended investment in the space has become. But before diving into all of that, it's important to understand that there's no official Bubble Czar who decides when it's time to head for the lifeboats.
The problem with bubbles is that you can never tell when they are going to burst – or even whether you are in one. In retrospect, though, it's usually pretty easy to see why they happened. In this case, much like the dotcom bubble, an AI bubble arises from excessive speculation.
Not only are venture capitalists (VCs) throwing money at anyone who mumbles the words "neural" and "network" in the same sentence, but companies like Google and Microsoft are rebranding themselves as AI companies.
Gartner's experts predict that "AI-derived business value" will be worth $3.2 trillion by 2022 – more than the film, video game, and music industries combined. Simply put, that's a lot of speculation piling up.
To understand what would happen if such a giant bubble burst, we have to go back a little further than the bursting of the 2000 Internet bubble.
There was an AI winter – which is just another way of saying the AI bubble burst – in the 1980s. Most of the breakthroughs we've seen in recent years, in areas like computer vision and neural networks, were first promised by researchers during the "golden years" of AI, from the mid-1950s to the late 1970s.
Today, researchers like Ian Goodfellow and Yann LeCun are pushing the limits of deep learning techniques. But much of what they and their colleagues do now builds on work that looked promising decades ago – work abandoned for lack of researcher interest and investor funding.
And it's not just leading researchers who need to worry. In fact, they may be the safest. Dr. Fei-Fei Li, chief scientist at Google Cloud, would probably find work in all but the coldest of AI winters, but the class of 2023 may not be so lucky. In fact, university researchers could be the first to suffer – when AI funding dries up, it will likely hit Stanford's research department before Microsoft's.
So how do we know if an AI winter is coming? The short answer: we don't, so suck it up and soldier on. The long answer: we look at the factors that can cause one.
John Langford, a Microsoft researcher, argues that an AI winter may be imminent, based on the following observations:
NIPS submissions are up 50% this year, to 4,800 papers.
There is significant evidence that the machine learning paper-review process is cracking under several years of exponential growth.
Public figures often overclaim the state of AI.
Money is raining from the sky on ambitious startups with a good story.
Apparently, we now even have a fake conference website (https://nips.cc/ is the real one for NIPS).
Some of these seem like pretty big deals – the surge in NIPS submissions suggests a flood of research, there is speculation that low-quality work is starting to slip through the cracks, and there is plenty of debate about the role tech celebrities and journalists play in bringing about an AI winter through excessive hype.
His fourth point, if I may editorialize, is probably that an AI winter would be the direct result of investors bailing once they don't get the instant gratification they crave. Many of these investors are pouring millions of dollars into startups that seem redundant in every respect except the promises they make.
The fifth point reads more like a personal complaint; it's unclear how a crappy scam affects the future of AI, but it does show that the NIPS conference is popular enough that someone would try to rip off its attendees.
We are obviously not in a stable situation. Is it a bubble or a revolution? The answer surely includes a bit of revolution: the fields of computer vision and speech recognition have been transformed by the great empirical successes of deep neural architectures and, more generally, machine learning has found many uses in the real world. At the same time, I find it hard to believe that we are not living in a bubble.
So maybe we're already in a bubble. What the heck are we supposed to do about it? According to Langford, it's all about damage control. He says some research is more "bubbly" than other research, and that researchers should focus on "creating intelligence" rather than "imitating intelligence."
But the ramifications, this time, may not be as severe as they were 40 years ago. It's safe to say we've reached some kind of "save point" in the AI field. You could argue that some of what AI researchers have promised, such as artificial general intelligence, is far-fetched, but for the most part machine learning has already delivered solutions to previously unsolved problems.
I can't imagine Google abandoning the AI that powers its Translate app, for example, unless something better than machine learning comes along to accomplish the task. And there are countless other examples of powerful AI in use around the world right now.
But, for venture capitalists and entrepreneurs, the best advice may still be: an ounce of valuation is worth a pound of speculation.