Neuroscience and AI: What to Expect in the Coming Years?

Posted by Peter Rudin on 7. March 2025 in Essay

Teaching Neuro-AI (Credit: allerin.com)

Introduction

As a result of the huge research efforts directed at cracking the neural code, we are likely to witness the deployment of new AI products and services within this decade. New kinds of personal assistants that augment the limits of our own intelligence are just one example of what to expect. True human-like intelligence will become the new benchmark, disrupting AI industries and applications where a focus on data alone has reached saturation. Economic concerns and the continuing demand for higher productivity will drive this change. At the same time, human concerns as we experience them today, over issues such as guaranteed income, distribution of wealth, ethics and government control, will intensify.

Today’s Limitations in Neuroscience

In his 2021 book The Self-Assembling Brain, Peter Robin Hiesinger, Professor of Neurobiology at the Free University of Berlin, suggests that we should study how the information encoded in our genome transforms the brain as we grow. The initial state of the brain provides no information about what the end result will be. As the brain applies its inherent genetic algorithm, it develops new states which form the basis for the next states, and so on. At each new stage of development our brain acquires new capabilities such as common sense, logic, language, problem-solving and planning, and as we grow older, our capacity to learn changes. In their current form, computational Artificial Neural Networks (ANNs) suffer from serious drawbacks, such as their need for very large labelled training datasets and their inability to handle ongoing changes in the environment to which they are exposed. Despite these shortcomings, ANNs have proven extremely efficient at specific tasks where training data is available in sufficient quantity. We are, however, a long way from achieving human-like intelligence, and it is questionable whether the computational approach taken by current ANN models will ever get us there. Recent research has found that two-thirds of the brain’s activity is involved in simple eye movements. Hence, to understand the brain, we might need to revise our preconceptions about how it works, perhaps in the way quantum mechanics challenged our understanding of physical phenomena.

Cracking the Neural Code

With advances in the last few years in both recording technology and the tools to analyse large-scale neural activity, researchers studying the human brain should soon be able to address some fundamental questions concerning neural coding and dynamics. Given its complexity, speech is an ideal substrate for studying neural dynamics. A monkey might learn to extend its arm in 100 different ways, but a person can say 10,000 words over weeks of recordings with no training. Researchers can explore how the brain encodes this vast space of behaviour and examine what neural activity looks like as people prepare to speak. Researchers at Stanford believe that cracking the brain’s ‘neural code’ could lead to AI surpassing human intelligence in capacity and speed. Eitan Michael Azoff, a specialist in AI analysis at Stanford, argues that humans are set to engineer superior intelligence with greater capacity and speed than our own brains. What will unlock this leap in capability is understanding how the human brain encodes sensory information and how it moves that information around to perform cognitive tasks such as thinking, learning, problem solving, internal visualisation and internal dialogue. Current AI does not ‘think’ visually; it uses ‘large language models’ (LLMs). As visual thinking predates language in humans, Azoff suggests that understanding visual thinking and then modelling visual processing will be a crucial building block for human-level AI. Azoff says: “Once we crack the neural code we will engineer faster and superior brains with greater capacity, speed and supporting technology that will surpass the human brain.” But Azoff issues a warning too, saying that society must act to control this technology and prevent its misuse: “Until we have more confidence in the machines we build, we should ensure the following two points are always followed. First, we must make sure humans have sole control of the off switch. Second, we must build AI systems with behaviour safety rules implanted.”

Our Brain Teaches ANNs

According to an article published in late January 2024 by Anthony Zador, Professor of Biology, the motivation for using neuroscience to improve AI is obvious. If the ultimate goal is, in the words of AI pioneer Marvin Minsky, “to build machines that can perform any task that a human can do,” then the most natural strategy is to reverse-engineer the brain. The motivation for using AI, in particular ANNs, to model neuroscience is that they represent our best model of distributed brain-like computation. Indeed, these are the only models that can solve hard computational problems. In spite of remarkable progress over the past decade, modern AI still lags far behind humans on several tasks. AI systems can now write essays, pass the bar exam, ace advanced physics tests, prove mathematical theorems, write complex computer programs and flawlessly recognize speech. Yet in many other domains, including navigating the physical world, planning over multiple time scales and performing perceptual reasoning, AI is mediocre at best. AI systems struggle with activities that a child can manage easily, a discrepancy known as Moravec’s paradox: what we consider difficult, such as high-level cognitive tasks, reasoning and solving math problems, turns out to be surprisingly easy for AI, while what we take for granted, our astounding ability to interact with the environment, remains out of reach for AI. Humans inherit evolutionary solutions efficiently encoded through a ‘genomic bottleneck’. The genome instructs the development of neural circuits that can instinctively perform complex tasks without the need for vast amounts of experience or data. Adapting this genomic bottleneck as an algorithm for ANNs could give these networks more efficient learning with far less training data and help uncover some of the fundamental constraints on neuronal development, such as the rules governing how neuronal circuits are specified in the genome.
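The genomic bottleneck idea can be made concrete with a toy compression scheme (a sketch of ours for illustration, not Zador’s actual algorithm): instead of the genome specifying every synaptic weight individually, each neuron inherits a short “gene expression” vector, and a connection weight is reconstructed as the inner product of the pre- and post-synaptic vectors. This shrinks the number of inherited parameters dramatically:

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 1000, 10  # n neurons per layer, k "genes" per neuron (both hypothetical)

# Specifying full connectivity directly would need n * n = 1,000,000 parameters.
# Under the bottleneck, each neuron carries only a k-dimensional expression
# vector, so just 2 * n * k = 20,000 parameters are inherited.
G_pre = rng.standard_normal((n, k))   # expression vectors of presynaptic neurons
G_post = rng.standard_normal((n, k))  # expression vectors of postsynaptic neurons

# Reconstruct the full n x n weight matrix from the compressed representation.
W = G_pre @ G_post.T

compression = (n * n) / (2 * n * k)
print(f"compression ratio: {compression:.0f}x")  # → 50x
```

The reconstructed matrix is necessarily low-rank, which is the point: the bottleneck acts as a regularizer, forcing the network to inherit structured wiring rules rather than arbitrary weights.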
Similarly, ANNs might borrow ideas from the human brain to become more energy efficient. A real-time conversation with models such as ChatGPT requires at least 100 times more power than the entire human brain consumes. The energy used by AI is already enormous and growing rapidly; understanding and replicating the brain’s energy-efficient computational strategies would therefore have important implications for technology.

Co-Evolution of Neuroscience and AI

ChatGPT would not exist without neuroscience, nor would the computer vision systems that help driverless cars cruise on highways without human intervention. All of these AI tools are grounded in computational structures called neural networks. According to Andreas Tolias, professor of ophthalmology at Stanford, “There is just no route between AI and helping solve diseases of the brain that does not depend on neuroscience.” He explained how neural networks can help uncover key principles of how visual systems work. There is an appealing symmetry to all of this work: the AI systems these scientists are using have deep roots in neuroscience. Perceptrons, the biologically inspired building blocks of neural networks, have been around since the 1950s, and models of the visual system that inspired convolutional neural networks were first piloted in 1969. But whether and how current-day neuroscience will continue to influence AI development remains an open question. AI algorithms need not be modelled on the latest insights from neuroscience, as artificial systems do not have to directly model biological intelligence to function well; hence, both fields should be able to evolve independently. This argument has some major recent AI developments on its side. Transformer neural networks, which were introduced in 2017 and power large language models like GPT, bear no obvious resemblance to brain networks, although they do trace their genealogy to attention research in psychology.
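The attention mechanism at the heart of those transformer networks can be sketched in a few lines of NumPy (a minimal illustration of scaled dot-product attention, with made-up token and dimension sizes): each token’s query is compared against every token’s key, and the resulting weights blend the value vectors.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention, the core operation of a transformer."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # each row is a distribution over tokens
    return weights @ V                  # weighted mixture of value vectors

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))  # 4 tokens, 8-dimensional embeddings
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = attention(Q, K, V)
print(out.shape)  # → (4, 8)
```

Nothing in this computation mirrors a known biological circuit, which is exactly the point of the argument above: the mechanism’s lineage runs through psychological models of attention rather than neurophysiology.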

Conclusion

In 1965 Herbert Simon predicted that “machines will be capable, within 20 years, of doing any work a human can do.” Despite impressive progress, and wildly optimistic predictions that we are on the brink of the AI singularity, the point at which AI surpasses human intelligence and evolves on its own, we are still far from that goal. But perhaps we are at the beginning of formulating a unified model that leads to a deeper understanding of brain computations and to artificial systems capable of mimicking intelligent human behaviour based on insights from neuroscience.
