Neuroscience + Language: a Strong Alliance to Enhance GPT-4

Posted by Peter Rudin on 2. June 2023 in Essay

The need for Language at Babel. Source: wikiart.org

Introduction 

In a recent essay, Evolution is making us treat AI like a human, and we need to kick the habit (theconversation.com), the US psychologist Gary Marcus recommends that we stop treating AI models like people. By AI models he refers to large language models (LLMs) such as ChatGPT and its successor GPT-4. Marcus may be correct to make such a controversial statement. However, many of us will find it difficult, if not impossible, to accept, because LLMs are designed to interact with us from a human point of view. That LLMs can mimic human conversation so convincingly originates from a profound insight by computing pioneer Alan Turing: he realised that a computer does not need to understand an algorithm in order to execute it. GPT-4 can thus produce fluent paragraphs of language without understanding a single word of any sentence it generates. The developers of LLMs successfully converted the discipline of semantics – arranging words to create meaning – into a task of statistical analysis, with words predicted from the frequency of their prior use. Turing’s insight echoes Darwin’s theory of evolution, which explains how species adapt to their surroundings without understanding a thing about their environment. In building today’s LLMs, developers have thus reproduced the surface of our cognitive capacities and emotions without regard for the intrinsic power of language. Hence, to enhance AI beyond GPT-4 one needs to find a different approach.
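To make this concrete, the sketch below is a toy bigram language model, an illustrative assumption only: GPT-4’s transformer architecture is vastly more sophisticated, but the underlying principle is the same. Each next word is picked purely from the frequency with which it followed the previous word in the training text, producing plausible-looking sequences with no notion of meaning.

```python
# Toy bigram language model: the next word is sampled purely from the
# frequency with which it followed the previous word in the training
# text -- statistics without any grasp of meaning.
import random
from collections import defaultdict, Counter

corpus = ("the brain generates language . the brain learns language "
          "through interaction . language conveys meaning .").split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        counts = follows[words[-1]]
        if not counts:
            break
        # Sample the next word in proportion to its prior frequency.
        nxt, = random.choices(list(counts), weights=counts.values())
        words.append(nxt)
    return " ".join(words)

print(generate("the"))
```

The output often reads as grammatical, yet the program manipulates nothing but counts, which is exactly the point of Turing’s inversion.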

The Brain and the Evolution of Language

According to Wikipedia, language is a structured system of communication that consists of grammar and vocabulary. It is the primary means by which humans convey meaning, both in spoken and written forms. According to research conducted by neuroscientists, language is generated in several regions of the human brain. Humans acquire language through social interaction in early childhood, with the result that children speak fluently by the time they are about three years old. Language and culture are co-dependent, and in addition to its communicative purpose, language has social implications such as building group identity or defining social behaviour. One definition sees language primarily as the mental faculty that allows humans to learn languages and to generate and understand thought. This definition stresses the universality of language for all humans and emphasizes its biological basis as a unique development of the human brain. Professor Noam Chomsky, the main proponent of this theory and sometimes referred to as ‘the father of modern linguistics’, also pioneered the generative theory of grammar, which defines language through the construction of sentences that can be generated using transformational grammars. He sees language mostly as an innate faculty that is largely genetically encoded, whereas functional theorists see it as a system that is largely cultural and learned through social interaction. Chomsky suggests that language was invented only once, and that all modern spoken languages are thus in some way related to each other. He proposes that some random mutation must have occurred which reorganized the brain, implanting a language organ in the primate brain. Though cautioning against taking this story literally, Chomsky insists that “it may be closer to reality than many other fairy tales that are told about evolutionary processes that define language”.
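As an illustration of the generative idea only (not of Chomsky’s transformational grammars themselves, which are far richer), the toy context-free grammar below shows how a handful of rewrite rules can produce an unbounded variety of sentences by rule rather than by memorisation. The rules and vocabulary are invented for this sketch.

```python
# Toy generative grammar: a small set of rewrite rules that can produce
# an open-ended set of sentences. Non-terminals (uppercase keys) are
# expanded recursively until only words remain.
import random

grammar = {
    "S":   [["NP", "VP"]],
    "NP":  [["the", "N"], ["the", "Adj", "N"]],
    "VP":  [["V", "NP"], ["V"]],
    "N":   [["child"], ["language"], ["brain"]],
    "Adj": [["young"], ["human"]],
    "V":   [["learns"], ["speaks"], ["understands"]],
}

def expand(symbol):
    if symbol not in grammar:            # terminal: an actual word
        return [symbol]
    production = random.choice(grammar[symbol])
    return [word for part in production for word in expand(part)]

for _ in range(3):
    print(" ".join(expand("S")))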

The Impact of Computational Neuroscience

According to the article Uncovering the Mystery of the Human Brain with Computational Neuroscience (news-medical.net), the human brain can be compared to a powerful supercomputer, yet how it really functions remains one of the ultimate mysteries of our time. Scientists engaged in computational neuroscience seek to unravel this mystery, using simulations and mathematical models to develop insight into the brain’s functionality. The first research program, titled ‘Computation and Neural Systems (CNS)’, was initiated at the California Institute of Technology in 1985. The program was based on two widely accepted research methods: first, neurophysiology, which provides mathematical models describing the underlying mechanism of how neuronal action potentials are initiated and propagated; second, experimental psychology and neuroscience, which apply information processing and learning based on artificial neural networks (ANNs) and the deep learning algorithms that form the core of today’s AI. The interdisciplinary field of computational neuroscience draws upon approaches from electrical engineering, computer science, physics, mathematics and neuroanatomy as well as neurophysiology and experimental psychology. This effort includes scientific research to develop applications based on neuromorphic technology that mimics the human brain. In addition, international research organisations support these efforts by standardising and cooperating on data definitions that describe the functionality of different brain regions. The latest version of the SpiNNaker neuromorphic computer, developed by the Advanced Processor Technologies (APT) group at the University of Manchester’s School of Computer Science, is able to mimic a network of brain regions in real time. The main mission of the project is to support neuroscientists in unravelling the mystery of the mind. The system combines high-throughput machine learning technology with the processing of brain-sensing data at millisecond latency and aims to close the gap between brain modelling and computational AI.
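As an illustration of the first method, the sketch below simulates a leaky integrate-and-fire neuron, a standard simplified model of how an action potential is initiated once the membrane potential crosses a threshold (the fully biophysical account is the Hodgkin-Huxley model; SpiNNaker-class machines simulate large networks of such simplified units). The parameter values here are illustrative assumptions, not measurements.

```python
# Leaky integrate-and-fire neuron: the membrane potential integrates an
# input current, leaks back toward rest, and emits a spike (then resets)
# whenever it crosses a fixed threshold.

dt = 0.1          # integration time step (ms)
tau = 10.0        # membrane time constant (ms)
v_rest = -65.0    # resting potential (mV)
v_thresh = -50.0  # spike threshold (mV)
v_reset = -70.0   # reset potential after a spike (mV)
r_m = 10.0        # membrane resistance (MOhm)
i_ext = 1.8       # constant input current (nA)

v = v_rest
spike_times = []
for step in range(int(200 / dt)):            # simulate 200 ms
    dv = (-(v - v_rest) + r_m * i_ext) / tau  # leak + driven input
    v += dv * dt
    if v >= v_thresh:                         # threshold crossed: fire
        spike_times.append(step * dt)
        v = v_reset                           # reset the membrane

print(f"{len(spike_times)} spikes, first at {spike_times[0]:.1f} ms")
```

With a strong enough input current the neuron fires periodically; below the critical current it settles under threshold and stays silent, which is the basic input-output behaviour neuromorphic hardware reproduces at scale.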

AI’s next Frontier

According to AI’s Next Frontier: Brains on Demand | Future, Artificial Intelligence (AI) pioneers like Marvin Minsky considered the functionality of the brain as inspiration for designing intelligent machines. In a surprising reversal, AI is now helping us to understand the human brain, the source of that inspiration. This approach of using AI to build models of the brain is also referred to as ‘NeuroAI’. Based on ongoing research efforts one can conclude that neuroscience will provide ever more precise models for simulating the brain, especially models related to our most prominent senses: vision and hearing. As a result, we will be able to download and use these sensory models with the same convenience currently available for object recognition or natural language processing. NeuroAI is an emerging discipline that seeks to study the brain in order to build better AI machines and, vice versa, to use these machines to better understand the brain. One of the core tools of NeuroAI is the application of Artificial Neural Networks (ANNs) to create computer models of specific brain functions. This approach was kickstarted in 2014, when researchers at MIT and Columbia University showed that deep ANNs could explain responses in a brain region for object recognition known as the inferotemporal cortex. They introduced a basic method of comparing an ANN to the human brain. Using this method and repeating the testing across different brain regions, scientists have succeeded in defining computer models that explain the functionality of various brain regions. Since its inception in 2014, researchers have followed the same basic method:

1. Train an artificial neural network to solve a task, for example object recognition; the resulting network is called ‘task optimized’.
2. Compare the responses of this trained network with real brain recordings, using statistical techniques such as linear regression or representational similarity analysis (a sketch of this step follows below).
3. Pick the best-performing model as the current best account of the brain region examined, and repeat the testing.
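A minimal sketch of step 2 using representational similarity analysis: both the network and the brain are characterised by how dissimilar their responses to pairs of stimuli are, and the two dissimilarity structures are then correlated. The arrays below are synthetic stand-ins for real ANN activations and brain recordings.

```python
# Representational similarity analysis (RSA): compare a model's feature
# space with brain recordings via their dissimilarity structure.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 50

# One row per stimulus (e.g. an image shown to both the network and
# the subject); values here are random placeholders.
model_features = rng.normal(size=(n_stimuli, 512))  # ANN layer activations
brain_patterns = rng.normal(size=(n_stimuli, 200))  # recorded responses

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns of every pair of stimuli."""
    return 1.0 - np.corrcoef(responses)

def rsa_score(rdm_a, rdm_b):
    """Spearman correlation of the upper triangles of two RDMs."""
    iu = np.triu_indices(rdm_a.shape[0], k=1)
    rho, _ = spearmanr(rdm_a[iu], rdm_b[iu])
    return rho

score = rsa_score(rdm(model_features), rdm(brain_patterns))
print(f"model-brain representational similarity: {score:.3f}")
```

A high score suggests the task-optimized network organises stimuli in a way similar to the brain region under study; on random data, as here, the score hovers near zero.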

This method can be applied to data collected inside the brain or with non-invasive techniques such as magnetoencephalography (MEG) or functional magnetic resonance imaging (fMRI). A NeuroAI model of the brain has two key features. First, it is computable: one can feed the model a stimulus and it produces a response that predicts how the brain area will react. Second, it is a deep neural net that can be optimized with the same tools used for models of visual recognition and natural language processing. As a result, neuroscientists have access to a new generation of powerful tools to study the functionality of the human brain and the generation of language.
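The ‘computable’ property can be illustrated with an encoding model, the linear-regression variant mentioned above: fit a linear map (here ridge regression, a common choice in this literature, though not necessarily the one used in any particular study) from a network’s features to measured responses, then feed in new stimuli and read off predicted brain responses. The data are again synthetic placeholders.

```python
# Encoding model: predict each measurement channel's (e.g. voxel's)
# response from a network's stimulus features, then evaluate the
# predictions on held-out stimuli.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_stimuli, n_features, n_voxels = 200, 64, 100

features = rng.normal(size=(n_stimuli, n_features))   # ANN activations
weights = rng.normal(size=(n_features, n_voxels))     # hidden ground truth
voxels = features @ weights + rng.normal(scale=2.0,
                                         size=(n_stimuli, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(
    features, voxels, test_size=0.25, random_state=0)

model = Ridge(alpha=1.0).fit(X_train, y_train)
pred = model.predict(X_test)   # predicted brain responses to new stimuli

# Per-voxel accuracy: correlation of predicted vs. measured responses
# across the held-out stimuli.
r = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out voxel correlation: {np.mean(r):.3f}")
```

Once fitted, such a model can be queried with arbitrary new stimuli, which is what makes the brain model computable rather than merely descriptive.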

Conclusion

Ultimately, the goal of language science is to understand the representations and mechanisms that allow people to learn and use language to communicate with each other. However, ethical issues raised by the growing wave of fake conversations call for regulation, which so far has failed to make any impact. Language – next to vision and touch – is humanity’s most important asset. LLMs provide useful tools, and the competition to control the market is in full swing. Yet these tools do not capture the deep-rooted power of language. The philosopher Daniel Dennett coined the phrase ‘competence without comprehension’ for systems that perform impressively without understanding what they do. Keeping that distinction in mind might well be the best remedy for our innate compulsion to treat AI like humans.
