The Myth and Reality of Artificial General Intelligence (AGI)

Posted by ge-sehen on 15 December 2023 in Essay

Myths about AGI (Credit: futureoflife.org)

Introduction

The idea of creating an artificial mind that can rival or exceed human intelligence has persisted for a very long time. Some of the earliest examples can be found in ancient myths and legends, such as the golems of Jewish folklore, the automata of Greek mythology or the mechanical men of Hindu epics. In modern times, the concept of Artificial General Intelligence (AGI) has been shaped by various thinkers and writers, such as Alan Turing, John von Neumann, Isaac Asimov, Arthur C. Clarke, Ray Kurzweil, Nick Bostrom and many others. Artificial Intelligence (AI) has contributed heavily to solving specific problems, but we are still far away from AGI, the 'Holy Grail of AI' that scientists have been dreaming of for decades.

Definition, Requirements and History of AGI

Defining AGI is not simple, but there are several characteristics that an AGI should exhibit, such as common sense, background knowledge, transfer learning, abstraction and causality. The early efforts to build AI focused on rule-based systems, also known as symbolic AI. But symbolic AI has a fundamental flaw: it only works if the logic of a task can be encoded into rules, and manually creating rules for every aspect of intelligence is virtually impossible. To deal with this problem, another line of research evolved – machine learning – which builds AI systems that learn from experience. There are numerous machine learning concepts and associated algorithms, but they share a common core logic: one creates a model, for example an artificial neural network, tunes its parameters on training examples, and then uses the trained model to make predictions or generate outputs for new inputs.

Currently, so-called 'deep learning' is the most powerful tool for machine learning applications. Because it is based on very large (deep) artificial neural networks, it can solve problems that were previously considered unsolvable. In recent years, deep learning has pushed computer vision, speech recognition and natural language processing to new levels of performance. However, these applications, as useful as they might be, still lack the fundamentally human capacity to bring meaning and causality to bear on a problem. A huge language model might be able to generate a coherent text or translate a paragraph from French to English, but it does not understand the meaning of the words and sentences it produces. What it is basically doing is predicting the next word in a sequence based on statistics it has derived from millions of text documents. In a nutshell, symbolic AI and machine learning each replicate separate components of human intelligence, but as separate entities they are unable to create AGI.
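To make the last point concrete, here is a minimal, purely illustrative sketch of next-word prediction from corpus statistics. The tiny corpus and the simple bigram counting are assumptions made only for this example; a real large language model uses a deep neural network trained on enormous amounts of text, but the underlying objective of predicting a plausible continuation is comparable.

```python
# A minimal sketch of next-word prediction from corpus statistics
# (bigram counts). The tiny corpus below is invented for illustration.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the dog chased the cat ."
).split()

# Count how often each word follows a given word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))   # 'cat' -- the most frequent word after 'the'
print(predict_next("sat"))   # 'on'
```

The model produces fluent-looking continuations for words it has seen, yet it has no notion of what a cat or a rug is; it only reproduces statistical regularities, which is the limitation described above, just at toy scale.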

Achieving AGI with Embodiment?

Embodiment begins at birth. For the newborn infant, even the simplest act of recognizing an object can be understood only in terms of bodily activity. Embodied cognition can explain the process through which infants attain spatial knowledge and understanding. For example, infants explore whatever is in their vicinity by seeing, tasting or touching it before learning to reach for nearby objects. Later, infants learn to crawl, which enables them to seek out objects beyond reaching distance and to grasp basic spatial relations between themselves and those objects, including a basic understanding of depth and distance. Hence, through exploration, infants get to know the nature of the physical and social world around them. Although adults have more experience to draw on, the fundamental process of learning at any age is based on creating and elaborating networks of neural associations. However we articulate our subjective experience of thinking, all thought originates in the embodied brain's activity. Because these associations are the brain's primary references, everything we come to know and understand, including the most abstract concepts, originates in embodiment. Research suggests that embodiment and learning go hand in hand in achieving cognitive intelligence. Going one step further, one can hypothesize that the promise of rewards enhances this learning process: trial-and-error experience is enough to develop behavior that exhibits the kinds of abilities associated with intelligence. From this one could conclude that reinforcement learning, a branch of AI based on reward maximization, can lead to the development of AGI.
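As a rough illustration of reward maximization through trial and error, the following sketch runs tabular Q-learning in a toy corridor environment. The environment, reward scheme and hyperparameters are invented for this example and say nothing about whether such learning would scale toward AGI.

```python
# A minimal sketch of reward maximization via trial and error:
# tabular Q-learning on a toy corridor. All values are illustrative.
import random

N_STATES = 5                 # corridor cells 0..4; cell 4 is the rewarded goal
ACTIONS = [+1, -1]           # step right or step left
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; reward 1.0 only when the goal cell is reached."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

for _ in range(200):                         # trial-and-error episodes
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])
        next_state, reward, done = step(state, action)
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        # Q-learning update: move the estimate toward reward + discounted future value.
        q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])
        state = next_state

# After training, the greedy policy in every non-goal cell is to step right (+1).
print([max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES - 1)])
```

The agent is never told what the corridor is or why the goal matters; a behavior that looks purposeful emerges purely from reward feedback, which is exactly the claim, and the open question, of the reward-maximization hypothesis.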

Intuitive Physics and Psychology to Create AGI?

Joshua Tenenbaum, Professor of Computational Cognitive Science at the Massachusetts Institute of Technology, focuses his research on explaining how neuro-symbolic systems can help address some of the key problems of current AI systems. Among the many gaps in AI, Tenenbaum concentrates on one in particular: “How do we go beyond the idea of intelligence as recognizing patterns in data and more toward the idea of all the things the human mind does when we are modelling the world, explaining and understanding the things we are seeing?”. Intuitive physics and behavioral psychology can help bridge this gap by exploring one of the fundamental aspects of intelligence that humans and many animals rely on. Our minds are built not just to see patterns in pixels and soundwaves but to observe and understand the world. Language is deeply grounded in the unspoken common-sense knowledge that we acquire before we learn to speak. One of the key components in Tenenbaum’s neuro-symbolic AI concept is a physics simulator that helps predict the outcome of actions. When we look at an image, such as a stack of blocks, we have a rough idea of whether it will resist gravity or topple over. If we see a set of blocks on a table and are asked what will happen when we give the table a sudden bump, we can roughly predict which blocks will fall. We might not be able to predict the exact trajectory of each object, but we develop a high-level idea of the outcome. “We want to provide a roadmap of how to achieve the vision of thinking and what it is that makes human common sense distinctive and powerful”, Tenenbaum says. This is in line with one of AI’s oldest dreams, also expressed in Alan Turing’s vision: “To achieve true intelligence, you should design a machine that is like a child because the real secret to human intelligence is our ability to learn.”
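To hint at what the simplest possible version of such an intuitive-physics judgement might look like, here is a crude stability check for a stack of blocks. It is an assumption-laden toy, not Tenenbaum's actual simulator: it merely asks whether the center of mass of the blocks above each level stays over the supporting block's footprint, mirroring the rough, high-level prediction described above.

```python
# A crude, purely illustrative "will it topple?" judgement: a stack of
# equal-mass blocks is treated as stable if, at every level, the center
# of mass of the blocks above lies within the supporting block's footprint.

def stack_is_stable(blocks):
    """blocks: list of (x_center, width) tuples, ordered bottom to top."""
    for i in range(len(blocks) - 1):
        above = blocks[i + 1:]
        com_x = sum(x for x, _ in above) / len(above)   # center of mass above block i
        support_x, support_w = blocks[i]
        if abs(com_x - support_x) > support_w / 2:
            return False        # the overhang is too large: the stack topples
    return True

print(stack_is_stable([(0.0, 2.0), (0.3, 2.0), (0.6, 2.0)]))   # True: small offsets
print(stack_is_stable([(0.0, 2.0), (1.5, 2.0), (3.0, 2.0)]))   # False: large overhang
```

Like a person glancing at the stack, the check delivers only a coarse verdict, not exact trajectories; a full simulator would model forces and uncertainty, but even this caricature shows how a world model, rather than pattern matching over pixels, can support prediction.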

The Future of AGI

The reality of AGI is much more uncertain and nuanced than the myth. There is no consensus among researchers and experts on whether AGI is possible or desirable, when it might be achieved, or how it might behave. Many open questions and challenges need to be addressed before AGI can become a reality. Some of these are:

How do we define and measure intelligence? Is there a universal standard or metric for comparing different forms and levels of intelligence? How do we account for the diversity and variability of intelligence across different domains and contexts?

How do we model and implement intelligence? What are the essential components and mechanisms of intelligence? How do we integrate different aspects of intelligence, such as perception, cognition, emotion, motivation, communication, creativity, etc.? How do we balance generality and specificity?

How do we align and regulate intelligence? How do we ensure that intelligent systems share our goals and values? How do we prevent or mitigate unwanted or harmful behaviors? How do we monitor and control their actions and outcomes?

These questions are not only technical but also ethical, social and philosophical. They require interdisciplinary collaboration and public engagement to find satisfactory answers, as well as constant reflection and revision to keep pace with the rapidly changing landscape of AI technology.

Conclusion

AGI is a fascinating and controversial topic, but there is a gap between the myth and the reality. The myth of AGI rests on several assumptions and arguments that may not hold true. The reality is far more complex and, compared with the myth, loaded with open questions and challenges that remain to be addressed. The future of AGI is hard to predict, but it is likely to have profound implications for humanity and society. Depending on how we approach and manage the development and deployment of AGI, it could be a source of great benefit or great harm, or both.
