Realigning AI Research to Achieve Artificial General Intelligence (AGI)

Posted by Peter Rudin on 12. February 2021 in Essay

The Alchemist discovers Phosphorus. Painting by Joseph Wright, 18th century. 

Introduction

The project of creating human-like artificial intelligence (AI) began after World War II, when it was discovered that electronic computers were not just number-crunching machines but could also manipulate symbols. Largely due to the invention of Artificial Neural Networks (ANNs) and their knowledge-generation capacity in solving specific problems, the so-called narrow version of AI became one of the most important technology drivers in recent history. Meanwhile, a community of researchers has emerged that focuses on the original, ambitious goal of AI: the creation of software and/or hardware systems with general intelligence comparable to, and perhaps ultimately superior to, that of human beings. Reaching this goal, usually referred to as ‘Artificial General Intelligence’ (AGI) and regarded as the ‘Holy Grail’ of AI, appears to require new research concepts. In recent years many advances have come from deep neural networks trained on tasks such as object recognition, language generation and board games, achieving performance levels that equal or even exceed those of humans. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways.

Why is AGI important? 

To understand the importance and complexity of achieving human-level artificial intelligence, it is worthwhile looking at some of the capabilities that AGI will need to master:

Sensory perception: Whereas deep learning has enabled major advances in computer vision, AI systems are far from developing human-like sensory-perception capabilities. For example, systems trained through deep learning still have poor colour constancy: self-driving car systems have been fooled by small pieces of black tape or stickers placed on a red stop sign. For any human, the redness of the stop sign remains completely evident, but the deep-learning-based system is fooled into thinking the stop sign is something else. Today’s AI systems are not yet able to replicate this distinctly human perception capability.
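The physical sticker attacks mentioned above cannot be reproduced in a few lines, but the underlying fragility can be illustrated with a small, hedged sketch: a single gradient step on the input pixels (the Fast Gradient Sign Method) is often enough to change a classifier’s prediction while the image looks essentially unchanged to a human. The sketch assumes PyTorch and torchvision are installed; the input file stop_sign.jpg is a hypothetical example image.

```python
# Illustrative sketch only: a tiny input perturbation (FGSM) that can flip a
# classifier's prediction.  This is the digital analogue of the physical
# sticker attacks described above, not a reproduction of them.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained ImageNet classifier (any differentiable model would do)
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
x = preprocess(Image.open("stop_sign.jpg")).unsqueeze(0)  # hypothetical input image
x.requires_grad_(True)

# Original prediction
logits = model(x)
label = logits.argmax(dim=1)

# Fast Gradient Sign Method: one gradient step on the *input*, not on the weights
loss = F.cross_entropy(logits, label)
loss.backward()
x_adv = x + 0.03 * x.grad.sign()  # tiny, near-imperceptible change

print("original prediction:", label.item())
print("prediction after perturbation:", model(x_adv).argmax(dim=1).item())
```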

Natural language understanding: Humans record and transmit skills and knowledge through books, articles, blog posts, and, more recently, videos. AGI must be able to consume these sources of information with full comprehension. Humans write with an implicit assumption of the reader’s general knowledge, and a vast amount of information is assumed and unsaid. If AI lacks this basis of common-sense knowledge, it will not be able to operate in the real world. The various reports of AI passing entrance exams or doing well at eighth-grade science tests are a few examples of how a narrow AI solution can be easily confused with human-level intelligence.

Creativity: There is a widespread fear that the moment AI reaches human-level intelligence, it will rapidly improve itself through a bootstrapping process to reach levels of intelligence far exceeding those of any human. But in order to accomplish this self-improvement, AI systems will have to rewrite their own code. This level of introspection will require an AI system capable of understanding the vast amounts of code that humans put together and of identifying novel methods for improving it. Machines have demonstrated the ability to draw pictures and compose music; however, further advances in human-level creativity are needed for generating code and optimizing man-machine interaction.

Social and emotional engagement: For robots and AI to be successful, humans must want to interact with them and not fear them. The robot will need to understand humans, interpreting facial expressions or changes in tone that reveal underlying emotions. Certain limited applications are in use already, such as systems that detect when customers sound angry or worried in order to direct them to the right queue for help. But given humans’ own difficulties in interpreting emotions correctly, an AI that is capable of empathy appears to be a must for AGI.

AGI and the Problem of Causality

In ‘The Book of Why: The New Science of Cause and Effect’, published in 2018, Judea Pearl, professor of computer science and statistics at UCLA, argues that to create human-like intelligence in a computer, the computer must be able to master causality. Current machine-learning systems operate almost exclusively in a statistical or model-blind mode, which imposes severe theoretical limits on their power and performance. Such systems cannot reason about interventions and retrospection and therefore cannot serve as the basis for AGI. To achieve human-level intelligence, learning machines need the guidance of a model of reality, similar to the ones used in causal inference. We are smarter than our data, because data does not understand causes and effects; humans do. The human brain is the most advanced tool ever devised for managing causes and effects. Our brains store an incredible amount of causal knowledge which, supplemented by data, could be harnessed to answer some of the most pressing questions of our time. More ambitiously, once we really understand the logic behind causal thinking, we could emulate it on modern computers and create an ‘artificial scientist’: a smart robot that discovers as-yet-unknown phenomena, finds explanations for open scientific dilemmas, designs new experiments and continually extracts more causal knowledge from its environment.

Gaining an understanding of cause-effect relations is an ability at which humans clearly and strikingly outperform any other species. To a great extent, this is because individuals are not just reliant on drawing inferences from observed statistical regularities, but are willing and able to share their observations, inferences and interpretations, to accumulate them over time and to transmit them to the next generation. This content, which is so crucial in human causal cognition, is a product of culture from the very beginning, rendered possible and profoundly shaped by the fact that humans are a cultural species.
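A toy simulation can make Pearl’s distinction between ‘seeing’ (observational statistics) and ‘doing’ (interventions) concrete. The sketch below is purely illustrative: the variables, coefficients and noise terms are invented, and only NumPy is assumed. A hidden confounder Z drives both X and Y, so a model-blind regression of Y on X overstates the causal effect of X, while simulating the intervention do(X = x) from the structural model recovers it.

```python
# Illustrative sketch (invented numbers): observational correlation vs. the
# effect of an intervention.  A hidden confounder Z drives both X and Y.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed structural causal model for this toy example:
#   Z ~ N(0, 1)                (confounder)
#   X = 2*Z + noise
#   Y = 1*X + 3*Z + noise      (true causal effect of X on Y is 1.0)
Z = rng.normal(size=n)
X = 2 * Z + rng.normal(size=n)
Y = 1 * X + 3 * Z + rng.normal(size=n)

# "Seeing": slope of the observational regression of Y on X (biased by Z)
observational_slope = np.cov(X, Y)[0, 1] / np.var(X)

# "Doing": simulate the intervention do(X = x), which cuts the Z -> X arrow
X_do = rng.normal(size=n)  # X set by the experimenter, independent of Z
Y_do = 1 * X_do + 3 * Z + rng.normal(size=n)
interventional_slope = np.cov(X_do, Y_do)[0, 1] / np.var(X_do)

print(f"observational slope  ~ {observational_slope:.2f}")   # ~ 2.2, not the causal effect
print(f"interventional slope ~ {interventional_slope:.2f}")  # ~ 1.0, the true causal effect
```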

Linking Brain Activity with Behaviour

Brain processes underlie not only simple motor behaviours such as walking and eating but also complex cognitive acts and behaviour that we regard as basically human: thinking, speaking and creating works of art. Current neuroimaging techniques reveal both form and function. They reveal the brain’s anatomy, including the integrity of brain structures and their interconnections. Moreover, they elucidate its chemistry, physiology and electrical and metabolic activity. The latest brain-analysis tools show how different regions of the brain connect and communicate. They can even show, with split-second timing, the sequence of events during a specific process, such as reading or remembering. Neuropsychologists employ these tools to correlate brain activity with behaviour, for instance by capturing the psychological and neural processes involved in emotion, pain, self-regulation, self-perception and perception of others. Neuroimaging technology is also helping us to understand how the brain develops from infancy through to adulthood. Developmental neuroscientists study the neurobiological underpinnings of cognitive development. Combining functional measures of brain activity with behavioural measures, they explore how subtle early insults to the nervous system affect cognitive and emotional function later in life – for example, the effects of maternal illness or early childhood neglect on learning, memory and attention. Imaging tools can pay off in the classroom, too: using such tools, literacy experts have shown that a year of intensive, methodical reading instruction enables the brains of children with underdeveloped reading skills to look and function like those of more skilled young readers.

Design Principles of AGI-Systems

Current AI systems apply ANNs to problems for which large pools of labelled data are available. ANNs loosely simulate human brain activity with artificial neurons arranged in layers and connected through parametrized nodes (‘weights’), whose adjustable connections echo the functionality of the trillions of biological synapses in the human brain. Like a human brain, ANNs need to be trained to solve specific problems; a minimal sketch of such a network follows the list below. Although the exponential growth of computer hardware performance has fostered the tendency to build ever bigger ANNs – exemplified by GPT-3, an ANN with 175 billion parameters trained on a large portion of the text available on the World Wide Web – the fact remains that causality and semantic understanding cannot be simulated this way. In contrast, future AGI systems should:

  a) build causal models of the world that support explanation and understanding, rather than merely solving pattern-recognition problems;
  b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and
  c) harness compositionality and learning-to-learn methodology to rapidly acquire and generalize knowledge to new tasks and situations.
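As a concrete reference point for the pattern-recognition paradigm described above, the following sketch (illustrative only, assuming NumPy) trains a minimal artificial neural network on the XOR problem. All of its ‘knowledge’ lives in a handful of weights adjusted by gradient descent; systems such as GPT-3 follow the same principle with billions of parameters and vastly more data, which is precisely what points a) to c) aim to go beyond.

```python
# Illustrative sketch of a tiny ANN: layered artificial neurons whose trainable
# weights ("synapses") are fitted to labelled examples by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # labels (XOR)

# One hidden layer: the weight matrices play the role of synaptic strengths
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent update of the weights
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # after training: typically close to [0, 1, 1, 0]
```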

Learning is the process of model-building. Cognition is about using these models to understand the world, to explain what we see, to imagine what could have happened and then to plan actions that reach a desired state. The difference between pattern recognition and model-building, between prediction and explanation, is central to human intelligence. Just as scientists seek to explain nature, not simply predict it, human thought is a model-building activity.

Realigning AI Research and Development

While ANNs continue to be improved with new algorithmic concepts, especially in the field of reinforcement learning, there is also a trend towards software tools that let practitioners apply AI to specific problem-solving applications without requiring intrinsic machine-learning knowledge as a prerequisite.
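One way to picture this trend is the fit/predict style of high-level libraries. The sketch below is illustrative, using scikit-learn and its bundled digits dataset as just one example of such tooling: a small neural-network classifier is trained and evaluated without the user ever touching an optimizer, a loss function or a weight matrix.

```python
# Illustrative sketch of high-level tooling: scikit-learn hides the model
# internals behind a fit/predict interface, so a practitioner can apply
# machine learning without implementing it.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The entire "AI system" is declared in one line ...
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0))

# ... and trained and evaluated in two more.
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```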

Hence the focus of AGI research is to integrate expertise in Neuroinformatics with expertise in Neuropsychology and Neuroscience. In summary, AGI is the result of a transdisciplinary, integrated effort based on three pillars of expertise:

  1. The functionality of the human brain (Neuroscience) and the formation of consciousness
  • Learning and Memory
  • Mind-Body interaction
  • Brain’s biological architecture
  2. Human Behaviour Analysis (Neuropsychology)
  • Cognition vs. Emotions
  • Human tasks and the outside world, society
  • Personality and free will, decision making
  3. Algorithmic Modelling of Brain Activity (Neuroinformatics)
  • Neural Network Models, ANNs
  • Mathematical, computational representation of intelligence
  • Deep Learning and artificial knowledge generation

With these disciplines combined, problem-solving and decision-making will be applicable to a much wider spectrum of tasks than is presently possible with narrow AI. However, the provision of ethical guidelines will become even more important and most likely represents the biggest challenge in turning AGI into a socioeconomic asset.

Conclusion

High-performance teams, representing these three pillars of expertise, are a prerequisite for advancing AGI research. Collaboration between Neuroinformatics, Neuroscience and Neuropsychology is fundamental to overcoming the ongoing expansion of knowledge silos that we can observe in traditional university research settings. Effective teamwork implies synergy between all team members, who must be willing to combine and recombine their expertise in order to achieve a common goal. High-performance teams generate innovation because they bring together far more concepts and bodies of knowledge than any one individual can.
