From AI to AGI: New Developments to Narrow the Gap?

Posted by Peter Rudin on 25. March 2022 in Essay

Microsoft and AGI – Picture Credit: newatlas.com

Introduction

Artificial Intelligence (AI) has contributed heavily to solving specific problems, but we are still far away from the kind of general-purpose AI – also referred to as ‘Artificial General Intelligence (AGI)’ – that scientists have been dreaming of for decades. The endeavour of creating AI with computational means began in the mid-1950s, when a dozen scientists gathered at Dartmouth College to explore the application of computers for replicating human intelligence. The conference marked the official beginning of AI as a science. Since then, AI has gone through several cycles of failure – periods often referred to as ‘AI winters’ – accompanied by massive cuts in funding, largely caused by overhyped expectations and an underestimation of the complexity of the problem to be solved. That is why – despite six decades of research and development – we still do not have an AI that rivals the cognitive abilities of a human child, let alone one that can think like an adult.

Definition, Requirements and History of AGI

Defining AGI is not simple, but there are several characteristics that an AGI should exhibit, such as common sense, background knowledge, transfer learning, abstraction and causality. The early efforts to build AI focused on creating rule-based systems, also known as symbolic AI, which assumed that the human mind comprehends its environment by manipulating symbols. We have mental representations for objects, persons, concepts, actions and behaviour, and we use these symbols to process the information we receive through our senses, to reason about the world around us, to develop intentions and to make decisions. But symbolic AI has a fundamental flaw: it only works as long as you can encode the logic of a task into rules, and manually creating rules for every aspect of intelligence is virtually impossible.

In dealing with this problem, another line of research – focused on machine learning – has evolved, building AI systems that learn from experience. There are numerous machine learning concepts and associated algorithms, but they all share a similar core logic: one creates a model, typically an artificial neural network, tunes its parameters by providing training examples and then uses the trained model to make predictions about new, previously unseen inputs. Currently, so-called ‘Deep Learning’ represents the most powerful tool for machine learning applications; based on very large (deep) artificial neural networks, it can solve problems that were previously considered unsolvable. In recent years, deep learning has heavily contributed to advancing computer vision, speech recognition and natural language processing to new levels of performance.

However, these applications, as useful as they might be, still lack the fundamental human capacity to include meaning and causality when solving a problem. A huge language model might be able to generate a coherent text or translate a paragraph from French to English, but it does not understand the meaning of the words and sentences it creates. What it is basically doing is predicting the next word in a sequence, based on statistics it has derived from millions of text documents. While this method has produced impressive results, it reaches its limits when dealing with things that are not represented in the statistical regularities of words and sentences. Moreover, without any kind of symbol manipulation, neural networks perform very poorly at many problems that symbolic AI programs solve easily, such as counting items or dealing with negation – a contrast illustrated in the sketch below. In a nutshell, symbolic AI and machine learning each replicate separate components of human intelligence, but as separate entities they are unable to create AGI.
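To make the contrast concrete, here is a minimal, hypothetical Python sketch of a toy sentiment task involving negation. The rule set, word scores and function names are all illustrative assumptions of this sketch, not from any system discussed in this essay: the symbolic version manipulates explicit symbols with a hand-written negation rule, while the statistical version merely sums word-level scores derived from data and therefore misses the negation entirely.

```python
# --- Symbolic AI: hand-written rules manipulate explicit symbols ---
POSITIVE = {"good", "great"}      # hypothetical symbol vocabulary
NEGATION = {"not", "never"}

def symbolic_sentiment(tokens):
    """Flip polarity whenever a negation word precedes a sentiment word."""
    sentiment, negated = 0, False
    for tok in tokens:
        if tok in NEGATION:
            negated = True
        elif tok in POSITIVE:
            sentiment += -1 if negated else 1
            negated = False
    return sentiment

# --- Statistical learning: polarity is estimated from word counts alone ---
def learned_sentiment(tokens, word_scores):
    """Sum per-word scores derived from training data; negation is invisible
    because the model treats words as independent statistical events."""
    return sum(word_scores.get(tok, 0.0) for tok in tokens)

word_scores = {"good": 1.0, "great": 1.2}   # hypothetical learned weights

sentence = "this movie is not good".split()
print(symbolic_sentiment(sentence))              # -1: the rule handles negation
print(learned_sentiment(sentence, word_scores))  # +1.0: the statistics miss it
```

The toy example deliberately exaggerates the weakness: real language models capture far more context than word counts, yet the underlying point stands – whatever is not represented in the statistical regularities of the training data remains invisible to the model.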

New Approaches for Achieving AGI

Considering the enormous growth of publications covering issues in neuroscience, it is obvious that the exploration of the functionality of the real human brain – together with ever better technical means to measure and analyse behavioural data with deep artificial neural networks – contributes significantly to closing the gap between AI and AGI. According to the latest 2022 Stanford AI Index report, the total number of AI publications doubled from 162’000 in 2010 to 324’000 in 2021, while the number of AI patents filed in 2021 was more than 30 times higher than in 2015, a compound annual growth rate of 76.9%.

One of the solutions being explored to overcome the limits of AI is based on the concept of neuro-symbolic systems. In a talk at the IBM Neuro-Symbolic AI Workshop (Day 2: Session 2 – Insight, ibm.com), Joshua Tenenbaum, professor of computational cognitive science at the Massachusetts Institute of Technology, explained how neuro-symbolic systems can help to address some of the key problems of current AI systems. Among the many gaps in AI, Tenenbaum is focused on one in particular: “How do we go beyond the idea of intelligence as recognizing patterns in data and more toward the idea of all the things the human mind does when we are modelling the world, explaining and understanding the things we are seeing?” Admittedly, that is a big gap, but bridging it starts with exploring one of the fundamental aspects of intelligence that humans and many animals share: intuitive physics and psychology. Our minds are built not just to see patterns in pixels and soundwaves but to observe and understand the world through models. As humans, we start developing these models as early as three months of age. Multiple studies conducted by the researchers Felix Warneken and Michael Tomasello of the Social Minds Lab at the University of Michigan show that children develop abstract ideas about the physical world and apply them in novel situations. These capabilities are at the heart of common sense, and according to Tenenbaum they develop quite early in children’s brain structure.

Intuition in Physics and Psychology

Tenenbaum lists three components required to create the core of intuitive physics and psychology, defining a three-way interaction between symbolic, probabilistic and neural AI. “We think that this three-way combination is needed to capture human-like intelligence and core common sense”, Tenenbaum says. The symbolic component is used to reason with abstract knowledge; the probabilistic inference model helps establish causal relationships between different entities and deal with uncertainty; the neural component uses pattern recognition to map real-world sensory data to knowledge and to help navigate search spaces.

One of the key components in Tenenbaum’s neuro-symbolic AI concept is a physics simulator that helps predict the outcome of actions. This is similar to how the human mind works. When we look at an image, such as a stack of blocks, we have a rough idea of whether it will resist gravity or topple over. If we see a set of blocks on a table and are asked what will happen when we give the table a sudden bump, we can roughly predict which blocks will fall. We might not be able to predict the exact trajectory of each object, but we develop a high-level idea of the outcome; the sketch below illustrates this kind of simulation-based prediction. Unfortunately, intuitive physics and psychology are not yet applied in today’s natural language processing systems, even though these systems represent one of the hottest topics in AI. In his talk, Tenenbaum explained that language is deeply grounded in the unspoken common-sense knowledge that we acquire before we learn to speak.

In a paper titled “The Child as a Hacker”, Tenenbaum and his co-authors use programming as an example of how humans explore solutions to a variety of previously defined problems. The authors also discuss how humans gather bits of information, develop them into new symbols and concepts, and then learn to combine them to form new concepts. “We want to provide a roadmap of how to achieve the vision of thinking and what it is that makes human common sense distinctive and powerful”, Tenenbaum says. This represents one of AI’s oldest dreams, also expressed in Alan Turing’s vision: “To achieve true intelligence, you should design a machine that is like a child, because the real secret to human intelligence is our ability to learn.”
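As an illustration of how such a three-way combination might look in miniature, here is a hypothetical Python sketch – entirely a construction for this essay, not Tenenbaum’s actual system. A stand-in for a neural perception module supplies noisy estimates of block positions, a simple symbolic rule encodes when a stack topples, and probabilistic inference (Monte Carlo sampling over the perceptual noise) turns the two into a graded prediction of whether the stack falls after a bump. All function names, the noise model and the simplified stability rule are assumptions of the sketch.

```python
import random

def perceive_block_offsets(true_offsets, noise=0.3):
    """Stand-in for a neural perception module: returns noisy estimates of
    each block's horizontal offset from the block beneath it (block widths)."""
    return [dx + random.gauss(0.0, noise) for dx in true_offsets]

def stack_topples(offsets):
    """Simplified symbolic physics rule (rigid blocks of width 1): the stack
    topples if, at any interface, the centre of mass of all blocks above it
    overhangs the supporting block's edge, i.e. more than 0.5 widths off."""
    positions, x = [], 0.0
    for dx in offsets:                 # relative offsets -> absolute centres
        x += dx
        positions.append(x)
    for k in range(1, len(positions)):
        above = positions[k:]
        com = sum(above) / len(above)  # centre of mass of the blocks on top
        if abs(com - positions[k - 1]) > 0.5:
            return True
    return False

def probability_of_toppling(true_offsets, bump=0.2, samples=5000):
    """Probabilistic inference by Monte Carlo: simulate many noisy percepts,
    add the bump, and report how often the symbolic rule predicts collapse."""
    falls = 0
    for _ in range(samples):
        offsets = perceive_block_offsets(true_offsets)
        offsets = [dx + bump for dx in offsets]  # table gets a sudden bump
        falls += stack_topples(offsets)
    return falls / samples

# A nearly flush two-block stack vs. a precarious three-block overhang.
print(probability_of_toppling([0.0, 0.1]))       # fairly low chance of falling
print(probability_of_toppling([0.0, 0.4, 0.4]))  # very likely to fall
```

The point of the sketch is the division of labour: the symbolic rule stays small and interpretable, the sampling handles uncertainty and produces a graded judgement rather than a binary answer, and the perception stub could in principle be replaced by a trained neural network – mirroring, in toy form, the roles Tenenbaum assigns to the three components.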

Conclusion

Scientists and experts are divided on the question of how many years it will take to reach AGI, but most agree that we are at least two to three decades away. Some scientists – with Tenenbaum as one example – believe that the path forward is hybrid artificial intelligence, a combination of neural networks and rule-based systems. The hybrid approach, they believe, will bring together the strengths of both approaches and help overcome their shortcomings. Other scientists believe that neural network models will eventually reach the reasoning capabilities they currently lack. Many researchers are engaged in the design of deep learning systems that can perform high-level symbol manipulation without the explicit instruction of human developers. Still others work in the area of self-supervised learning, a branch of deep learning whose goal is to let machines gain experience of the world and reason about it in the same way children do. Only time will tell which approach works best. However, it seems fair to say that the current gap between AI and AGI is indeed shrinking.
