Innovation Credit: www.devprojournal.com
Today, the most powerful artificial intelligence systems employ a subset of machine learning called deep learning. Their algorithms learn by processing massive amounts of data through hidden layers of interconnected nodes, organised as deep Artificial Neural Networks (ANNs). As the name suggests, deep neural networks were originally inspired by the biological networks of the human brain. Since the late 1950s, when the first ANN model, the perceptron, was defined, AI has made huge progress in areas such as image recognition and natural language processing. With the ongoing progress in neuroscience and brain research, the question looms whether the basic models and algorithms of ANNs need to be adjusted in light of new knowledge about our brain's structure and functionality.
The term 'AI' is often misused or overhyped. Data is clearly driving today's practical use of AI. The quality, accuracy and potential built-in biases of that data largely determine the value and correctness of the outcome, following the principle that 'garbage in' inevitably produces 'garbage out'. Taking a bottom-up approach, AI can be summarized as follows:
Data analytics covers everything from tracking user data on websites to monitoring customer reactions to discount offers from online retailers. The goal is to search for patterns and find relationships between variables. Data analysis is descriptive, since it is based on past events; it does not predict the impact of a change in a variable. Analytics are used in organisations for making better, informed decisions and by scientists for verifying or disproving theories and scientific models.
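As a minimal sketch of this kind of descriptive pattern-finding, consider measuring the relationship between two variables recorded from past events. The discount and sales figures below are invented purely for illustration:

```python
# Hypothetical sketch: descriptive analytics works only on past events.
# Here we quantify the relationship between two recorded variables
# (discount offered vs. units sold) with a Pearson correlation.
# All numbers are made-up illustration data.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

discount = [0, 5, 10, 15, 20, 25]          # % discount in past campaigns
units = [100, 120, 150, 160, 190, 210]     # units sold in those campaigns

r = pearson_r(discount, units)
print(f"correlation between discount and sales: {r:.2f}")
```

A coefficient near 1 describes a strong relationship in the historical data, but by itself it says nothing about what a future, untried discount would do.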
Data analytics leads to predictive analytics, which uses the collected data to predict what might happen in the future. Assumptions drawn from past experience presuppose that the future will follow the same patterns. These 'what if' assumptions are enhanced by human understanding of the past and present. Predictive insights derived from data analytics are extremely useful to marketers: they can help predict the effectiveness of a campaign introducing a new product, or assess a campaign's impact on an election.
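The step from descriptive to predictive can be sketched with a least-squares line fitted to past data and extrapolated to an untried case, under the stated assumption that the future follows the same pattern. The campaign figures are again invented for illustration:

```python
# Hypothetical sketch: predictive analytics assumes the future follows
# past patterns. A least-squares line fitted to past campaign data is
# used to answer a 'what if' question about an untried discount level.
# Illustration data only.

def fit_line(xs, ys):
    """Ordinary least-squares fit: y is approximated by a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

discount = [0, 5, 10, 15, 20, 25]          # % discount in past campaigns
units = [100, 120, 150, 160, 190, 210]     # units sold in those campaigns

a, b = fit_line(discount, units)
predicted = a * 30 + b    # 'what if' we offered a 30% discount?
print(f"predicted units at 30% discount: {predicted:.0f}")
```

The prediction is only as good as the assumption behind it: if the pattern breaks (for example, the market saturates above 25%), the extrapolation fails.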
Machine learning (ML) builds on the concepts of predictive analytics, with the key difference that an ML system is able to make assumptions, test them and learn autonomously. Machine learning is a generic term for the 'artificial' generation of knowledge from experience, with algorithms computing a statistical model from the data provided. Over the years, many subcategories of ML have evolved, such as supervised and unsupervised learning, reinforcement learning and deep learning.
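Supervised learning, the most common of these subcategories, can be sketched in a few lines: a model is given labelled examples and then generalizes to unseen data, without a human writing the decision rule. The nearest-neighbour classifier and the animal features below are illustrative assumptions, not a method named in the article:

```python
# Hypothetical sketch: supervised machine learning in its simplest form.
# A nearest-neighbour classifier "learns" from labelled examples and
# predicts the label of unseen data. The feature values are made up.

def nearest_label(train, point):
    """Return the label of the training example closest to `point`."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    features, label = min(train, key=lambda ex: dist2(ex[0], point))
    return label

# (feature vector, label) pairs: [weight_kg, ear_length_cm]
train = [([4.0, 6.5], "cat"), ([30.0, 12.0], "dog"),
         ([5.0, 7.0], "cat"), ([25.0, 11.0], "dog")]

print(nearest_label(train, [4.5, 6.8]))   # an unseen animal -> cat
```

No rule such as "cats weigh less than dogs" is programmed anywhere; the decision emerges from the labelled data, which is the essence of learning from experience.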
Deep learning is a combination of Artificial Neural Networks (ANNs) and machine learning. An ANN is a mathematical representation of nodes called artificial neurons, modelling the synaptic network of our biological brain. Simulating the brain's biological process, artificial neurons receive computer-generated signals, which are processed and transmitted to other connected neurons. The connections are called edges. Edges and neurons typically carry a weight that adjusts as learning proceeds; the weight increases or decreases the strength of the signal. This process is repeated until the output generated by the network matches the expected output, i.e. the label of the training data. For example, in image recognition, ANNs learn to identify cat images from training data that has previously been labelled as representing a cat.
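The learning loop described above can be sketched with the original perceptron: a single artificial neuron whose edge weights are nudged after each example until its output matches the labels. A toy AND-gate stands in for real image data here, purely for illustration:

```python
# Hypothetical sketch of the weight-adjustment loop described above:
# a single artificial neuron trained with the perceptron rule until
# its output matches the labels. Toy AND-gate data, not real images.

def step(x):
    """Threshold activation: the neuron fires (1) or stays silent (0)."""
    return 1 if x >= 0 else 0

def train_neuron(samples, lr=0.1, epochs=20):
    """Adjust each edge weight in proportion to the prediction error."""
    w = [0.0, 0.0]   # edge weights
    b = 0.0          # bias (firing threshold)
    for _ in range(epochs):
        for inputs, label in samples:
            out = step(sum(wi * xi for wi, xi in zip(w, inputs)) + b)
            err = label - out            # mismatch with the label
            w = [wi + lr * err * xi for wi, xi in zip(w, inputs)]
            b += lr * err                # strengthen or weaken the signal
    return w, b

# (input signals, labelled output): fires only when both inputs fire
samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_neuron(samples)
print("learned weights:", w, "bias:", b)
```

Deep networks repeat this idea across millions of weights in many hidden layers, with backpropagation replacing the simple perceptron rule, but the principle of repeatedly adjusting weights until outputs match labels is the same.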
Artificial General Intelligence (AGI)
AGI, also described as human-level intelligence, enables machines to think, learn and find solutions to problems just as human brains do. AGI possesses the computational power to reason like us and to take actions, for example employing robots to accomplish a predefined task. The term 'Narrow AGI' describes highly specialised applications such as autonomous driving or optimizing the design of mechanical structures. A growing community of AI researchers expresses doubts, however, that AGI based on today's computational ANN models can ever be achieved beyond the current level of specialized, non-transferable application domains.
The Brain does not Think the Way we Think it does
According to Lisa Feldman Barrett, a highly respected psychologist at Northeastern University, neuroscientists tend to research the brain as 'cartographers', mapping the brain's diverse physical domains and searching for boundaries between thinking, feeling, deciding, remembering, moving and other everyday experiences. In Barrett's opinion this leads to false conclusions. In a recent Quanta Magazine article, 'Mental Phenomena Don't Map Into the Brain as Expected', Barrett and other scientists argue that for us to truly understand how the brain works, current concepts may need to be revised, perhaps radically. When functional magnetic resonance imaging (fMRI) and invasive brain-computer interfaces (BCIs) made it possible to examine living brains, neuroscientists made great strides in understanding the neural foundations of perception, attention, learning, memory, decision-making, motor control and other classic categories of mental activity. But they also found unsettling evidence that those categories, and the biological neural networks that support them, do not work as expected.

Recent research has found that two-thirds of the brain's activity is involved in simple eye movements. In 2019, several teams of scientists found that most of the neural activity in 'perception' areas of the brain, such as the visual cortex, was encoding information about movements rather than sensory inputs. These findings are not limited to the neural centres of perception or other cognitive functions. The cerebellum, for example, was thought to be dedicated almost exclusively to motor control, but scientists have found that it is also instrumental in attention processes, the regulation of emotions, language processing and decision-making. Hence, to understand the brain, we might need to revise our preconceptions about how it works, possibly in the same way that quantum mechanics challenged our understanding of physical phenomena.
The Genetic Encoding of our Brain
In his 2021 book The Self-Assembling Brain, Peter Robin Hiesinger, Professor of Neurobiology at the Free University of Berlin, suggests that we should study how the information encoded in our genome transforms the brain as we grow. The initial state of the brain provides no information about what the end result will look like. As the brain applies its inherent genetic algorithm, it develops new states that form the basis for the next states, and so on. Hence, our genome contains the information required to create our brain as it develops. In the biological brain, growth, organization and learning happen in tandem. At each new stage of development our brain acquires new capabilities (common sense, logic, language, problem-solving, planning, math), and as we grow older, our capacity to learn changes. In their current form, computational ANNs suffer from serious drawbacks, such as their need for very large, labelled training datasets and their inability to handle ongoing changes in the environment they are exposed to. They do not have the biological brain's capacity to generalize skills across many tasks and to unseen scenarios, or to adapt to changes caused by aging. Despite these shortcomings, artificial neural networks have proven extremely efficient at specific tasks where training data is available in sufficient quantity and quality. We are, however, a long way off from achieving human-like intelligence, and it is questionable whether the computational approach taken by current ANN models will ever get us there. The genetic dynamics of brain development introduce a higher level of complexity, implying that a new generation of computational methods is required to realize AGI.
Assessing Future Directions in AI
Today's proven AI applications are mostly focused on data analytics and machine-supported pattern recognition, providing better marketing communication and improved decision making across many industry segments. These heavily data-driven applications are continuously improved with new AI technologies, for example the 'transformer models' realized by OpenAI in its highly publicised GPT-3 offering. In contrast, Turing Award winner Yoshua Bengio and other well-known AI researchers argue that the application of massive computational resources, processing all the data stored on the internet, is a dead end for achieving human-like intelligence. There is strong momentum within the AI research community behind the view that new computational concepts are required, stipulating that brain functionality is as important as data availability. To overcome the limits of current AI methods, ongoing brain research is providing new insights into the functionality of the human brain. Existing methods of mapping brain regions need to be revised, especially as the functionality of the brain's synaptic and dendritic networks appears to be far more complex than currently modelled with ANNs.
Present-day concepts of mimicking the human brain to achieve human-like intelligence need an overhaul. As a result of the huge research efforts engaged in cracking the neural code, we are likely to witness the deployment of new AI products and services within this decade. New concepts of personal assistants, augmenting the limits of our own intelligence, are just one example of what to expect. True human-like intelligence will become the new benchmark, disrupting AI industries and applications where the focus on data alone has reached saturation point. Economic concerns and the continuing demand for higher productivity will drive this change. Likewise, human concerns as we experience them today, with issues such as guaranteed income, distribution of wealth, ethics and government control, will intensify.