The US Air Force has a penchant for developing officers, but the "general" it's currently working on doesn't have stars on its uniform: it's artificial general intelligence (AGI).
The term AGI refers to an artificial intelligence with a level of knowledge and capability equal to or greater than a human's. Basically, when people argue that today's artificial intelligence isn't "true artificial intelligence," they're contrasting the terminology with that of AGI: machines that think.
Deep within the cavernous expanses of US Air Force research labs, a scientist named Paul Yaworsky works tirelessly to make intelligent American aircraft a reality. Or maybe he's trying to bring a coffee maker to life; we don't really know his endgame.
What we do know comes from a pre-print research paper we found on arXiv, one that was practically begging for a hyperbolic headline. Maybe "The US Air Force is developing robots that can think and commit murder," or something like that.
In reality, Yaworsky's work seems to lay the groundwork for a future approach to general intelligence in machines. It proposes a framework to bridge the gap between today's AI and AGI.
According to the paper:
We address this gap by developing a model of general intelligence. To do this, we focus on three basic aspects of intelligence. First, we must understand the general order and nature of intelligence at a high level. Second, we need to know what these realizations mean with respect to the overall process of intelligence. Third, we must describe these realizations as clearly as possible. We propose a hierarchical model to help capture and exploit the order within intelligence.
At the risk of spoiling the ending for you, this paper offers a hierarchy for understanding intelligence – a roadmap for machine-learning developers to pin above their desks, if you will – but it doesn't contain any algorithm to transform your Google Assistant into Star Trek's Data.
What's interesting is that there are currently no agreed-upon or well-understood routes leading to AGI. Yaworsky addresses this dissonance in his research:
The right questions may not have been asked yet. An underlying problem is that the process of intelligence is not yet well enough understood to enable adequate hardware or software models, to say the least.
To explain intelligence in a way that is useful to artificial intelligence developers, Yaworsky breaks the problem down into a hierarchical view. His work is at an early stage, and his research's account of high-level intelligence is beyond the scope of this article (for a deeper dive, here is the white paper), but it's as good a trajectory toward AGI as we've seen.
If we can understand how high-level human intelligence works, we will go a long way toward informing computational models for AGI.
And if you're skimming this article to find out whether the US military is about to unwittingly unleash an army of killer robots in the near future, here's a quote from the paper to dispel your worries:
What about concerns over artificial intelligence taking over from humans? It is believed that artificial intelligence will one day become a very powerful technology. But as with any new technology or capability, problems tend to arise. Especially with regard to general AI, or artificial general intelligence (AGI), there is huge potential, both for good and for bad.
We will not go into hype or speculation, but suffice it to say that many of the problems we hear about today with AI stem from crude predictions involving intelligence. Not only is it difficult to make good scientific predictions in general, but when the science in question involves intelligence itself, as with AI, making correct predictions is nearly impossible. Again, the main reason is that we do not understand intelligence well enough to allow accurate predictions. In any case, what we need to do with AI is proceed with caution.