My website says that I’m interested in Artificial General Intelligence (AGI). But what does this term actually refer to? There are a bunch of buzzwords out there like “Artificial Intelligence”, “Machine Learning”, “Deep Learning”, “strong AI”, “weak AI”, and so on. “Artificial General Intelligence” is one of them (although probably not the most prominent one).
So let’s disentangle this a little bit:
- Artificial Intelligence (AI) is one of the more general terms for the subfield of computer science that tries to create intelligent programs – programs that exhibit some type of intelligence. This is what the field was called when it started back in the 1950s. Back then, most approaches were based on logical rules. The idea was that by manually writing a large enough number of clever rules and having a system capable of applying these rules, intelligence could be simulated to a sufficient degree. As it turned out, this was not viable for many applications. Nowadays, the term “Good Old-Fashioned AI” (GOFAI) refers to this kind of approach.
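To make the GOFAI idea a bit more concrete, here is a toy sketch (in Python, with made-up rules and facts) of that approach: intelligence as a collection of hand-written if-then rules, plus a simple engine that keeps applying them until nothing new can be derived.

```python
# Toy GOFAI-style system: hand-coded rules plus a simple
# forward-chaining engine that applies them to known facts.
# The rules and facts here are invented for illustration.
RULES = [
    ({"has_fur", "says_meow"}, "is_cat"),
    ({"has_fur", "says_woof"}, "is_dog"),
    ({"is_cat"}, "is_mammal"),
    ({"is_dog"}, "is_mammal"),
]

def infer(facts):
    """Apply the rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_fur", "says_meow"}))
# derives "is_cat" and, via chaining, "is_mammal"
```

The catch, as noted above, is that for most real-world tasks the number of rules you would have to write by hand explodes very quickly.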
- Machine Learning (ML) is a bit more recent than AI. Here the main idea is that the rules should not be hand-coded by humans, but should be discovered by the computer itself. In machine learning, it is usually assumed that there is a data set consisting of a bunch of so-called “training examples” and that the goal of the machine learning algorithm is to infer some rules based on these examples. For instance, the data set might contain two types of images: images of cats and images of dogs. The machine learning algorithm would then try to find some way to decide whether a given image contains a cat or a dog.
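Here is a minimal sketch of that core idea (using a single made-up numeric feature instead of real images): rather than hand-coding the decision rule, the program derives it from labeled training examples.

```python
# Toy "cat vs. dog" classifier. The single feature (say, body
# weight in kg) and the data are invented for illustration.
train = [(3.5, "cat"), (4.0, "cat"), (4.5, "cat"),
         (20.0, "dog"), (25.0, "dog"), (30.0, "dog")]

def fit(examples):
    """Learn a simple rule from the data: the midpoint
    between the average "cat" and "dog" feature values."""
    cats = [x for x, label in examples if label == "cat"]
    dogs = [x for x, label in examples if label == "dog"]
    return (sum(cats) / len(cats) + sum(dogs) / len(dogs)) / 2

def predict(threshold, x):
    return "cat" if x < threshold else "dog"

threshold = fit(train)        # the learned rule: a threshold of 14.5
print(predict(threshold, 5.0))   # -> "cat"
```

Real ML algorithms learn far richer rules than a single threshold, of course, but the workflow is the same: training examples in, decision rule out.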
- Deep Learning (DL) is one specific type of machine learning algorithm. It is based on simplified models of human neurons and thus belongs to the family of “artificial neural networks” (ANN). This approach is called deep learning because it arranges a relatively large number of these neurons into many stacked layers. Many state-of-the-art results in recent years have been achieved by using deep learning techniques (e.g., Google DeepMind’s AlphaGo – the first AI to beat a world-class Go player).
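To hint at what “deep” and “layers” mean here, this is a bare-bones sketch (plain Python, with random untrained weights – a real network would learn its weights from data) of a forward pass through several stacked layers of artificial neurons:

```python
import math
import random

random.seed(0)

def layer(inputs, weights, biases):
    """One layer of artificial neurons: each neuron computes a
    weighted sum of its inputs and squashes it through a sigmoid
    "activation" - a crude simplification of a biological neuron."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

def deep_network(inputs, num_layers=4, width=3):
    """Stack several layers: the output of one layer feeds the next.
    Weights are random here; training would adjust them."""
    x = inputs
    for _ in range(num_layers):
        weights = [[random.uniform(-1, 1) for _ in x] for _ in range(width)]
        biases = [random.uniform(-1, 1) for _ in range(width)]
        x = layer(x, weights, biases)
    return x

print(deep_network([0.2, 0.7]))  # activations of the final layer
```

The “deep” networks behind results like AlphaGo follow this same layered structure, just with vastly more neurons per layer, many more layers, and weights tuned by training rather than chosen at random.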
- Strong AI is one of the two main interpretations of AI research. Strong AI says that the goal of AI research is to create a thinking machine – an artificial system that is as intelligent as a human being and that can, in principle, solve any task a human can. Strong AI basically argues that it is possible to create a machine equivalent of the human mind.
- Weak AI, on the other hand, is the other interpretation of AI research. It assumes that the goal of strong AI is not achievable (at least not in the near future) and focuses instead on specific problems. Weak AI is mainly concerned with expert systems: systems that are specialized in solving one particular task very well (often much better than humans). This is what most current AI systems do: they excel at one particular task (e.g., playing chess) but are completely incapable of doing anything else (e.g., distinguishing cat pictures from dog pictures).
Now where does AGI fit in?
In my opinion, AGI is largely synonymous with strong AI: it is about artificial general intelligence, i.e., an AI system that is not confined to a small set of tasks, but that can exhibit generally intelligent behavior across a wide variety of tasks. An AGI could in principle learn any task that we give to it – a single system would be able to organize our appointments, play chess against us, drive our car, summarize the news for us, help us with our personal finances, and many other things. A personal assistant like Apple’s Siri, but as smart and capable as a human being.
But why do we need a separate term for that? Wouldn’t it be enough to use the already existing term of strong AI?
Well, although strong AI was the original goal of the AI research community, people realized quite quickly that strong AI is not as easy to achieve as they initially thought. Over time, the focus of the field shifted from creating a generally intelligent agent to solving specific problems with smart algorithms. Nowadays, when people say “AI”, they usually mean weak AI. The term AGI was introduced in the early 2000s. Although content-wise it is basically “back to the roots of AI”, this new name indicates a new beginning – a new attempt to build generally intelligent systems. The “G” in the abbreviation highlights the difference between AGI research and “ordinary” AI research (which has largely become synonymous with weak AI).
I know that this only gives a rough idea about the meaning behind AGI, but I will explore this topic in more detail in future blog posts (where I’ll also share some links and references). So stay tuned!