Picture Credit: Wharton School-University of Pennsylvania
Currently, most AI systems are based on layers of mathematics that are only loosely inspired by the way the human brain works. Different types of machine-learning applications, such as speech recognition or the identification of objects in an image, require different mathematical structures, and the resulting algorithms are only able to perform very specific tasks. Building AI that can perform general tasks, rather than niche ones, is a long-held desire in the world of machine-learning. But the truth is that expanding those specialized algorithms to something more versatile remains an incredibly difficult problem, in part because human traits like inquisitiveness, imagination, and memory do not exist or are only in their infancy in the world of AI. In a paper published in June 2017 in the journal Neuron, Demis Hassabis of Alphabet’s DeepMind company and three co-authors argue that only by better understanding human intelligence can we hope to push the boundaries of what artificial intellects can achieve. According to Hassabis and his colleagues, an “exchange of ideas between AI and neuroscience can create a ‘virtuous circle’ advancing the objectives of both fields.” With about 600 researchers now working at DeepMind, this mindset is beginning to show results, producing new brain-inspired models and algorithms.
What is Neuroscience?
Neuroscientists focus on the brain and its impact on behaviour and cognitive functions, or how people think. They also investigate what happens to the nervous system when people have neurological, psychiatric, and neurodevelopmental disorders. Neuroscience is an interdisciplinary science that works closely with other disciplines, such as mathematics, linguistics, engineering, computer science, chemistry, philosophy, psychology, and medicine.
The ancient Egyptians thought the seat of intelligence was in the heart. Because of this belief, during the mummification process, they would remove the brain but leave the heart in the body. The ancient Greeks were among the first people to study the brain. They attempted to understand the role of the brain and how it worked in order to explain neural disorders. According to an article in Scientific American, Aristotle, the Greek philosopher, had a theory that the brain was a blood-cooling mechanism. In the late 19th century Pierre Paul Broca, a French physician, surgeon and anatomist, discovered that different regions of the brain were involved in specific functions. The part of the brain known as Broca’s area is responsible for some speech and other functions. In the early 20th century, Santiago Ramón y Cajal, a Spanish pathologist, histologist and neuroscientist, hypothesized that neurons are independent nerve cell units, and in 1906, Golgi and Cajal jointly received the Nobel Prize in Physiology or Medicine for their work on and categorization of neurons in the brain. Since the second half of the 20th century, with the advent of brain-imaging technologies such as MRI, research and practice in modern neurology have made great strides in understanding the functionality of the brain. Today, neuroscience is composed of many subcategories such as Behavioural Neuroscience, Cognitive Neuroscience, Computational Neuroscience and Neurolinguistics, to name just a few.
Despite this progress and enormous funding efforts, with over 100,000 researchers worldwide working to crack the neural code, we are still far from understanding in detail how individual neurons, synapses and dendrites contribute to human intelligence. The complexity of this problem may require new scientific approaches, for example the introduction of quantum theory to describe the functioning of the human brain.
Analogy between Biological Neurons and Artificial Neural Networks
While the high-level concepts of ANNs (artificial neural networks) are inspired by neurons and neural networks in the brain, the machine-learning (ML) implementation of these concepts diverges significantly from how the brain works. However, as ML makes steady progress and new ideas, techniques and architectures of neural networks are developed (RNNs, GANs, GQNs, etc.), the gap between biological and artificial neural networks is narrowing.
From a conceptual, simplified point of view, a biological neuron has three components:
- The dendrites (the input mechanism) — a tree-like structure that receives input through synaptic connections. The input could be sensory input from sensory nerve cells, or “computational” input from other neural cells.
- The soma (the calculation mechanism) — this is the cell body where inputs from all the dendrites come together and, based on all these signals, a decision is made whether to fire an output, also described as a “spike”.
- The axon (the output mechanism) — once a decision is made to fire an output signal, the axon is the mechanism that carries the signal and, through a tree-like structure at its terminal, delivers it to the dendrites of the next layer of neurons via synaptic connections.
Similarly, there is an equivalent structure in ANNs:
- Incoming connections — every artificial neuron receives a set of inputs, either from the input layer (the equivalent of the sensory input) or from other neurons in previous layers in the network.
- The linear calculation and the activation function — these “sum up” the inputs and make a non-linear decision whether to activate the artificial neuron and fire.
- The output connections — these deliver the activation signal of the artificial neuron to the next layer in the network.
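The three components above can be sketched as a single artificial neuron in a few lines. This is a minimal illustration only, not code from any of the cited research; the sigmoid activation and the example weights are arbitrary choices.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs plus a bias,
    passed through a non-linear activation (sigmoid, as an example)."""
    # Linear calculation -- the rough analogue of the soma summing
    # signals arriving on the incoming ("dendritic") connections
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Non-linear activation -- decides how strongly the neuron "fires";
    # the output is then passed on to the next layer
    return 1.0 / (1.0 + math.exp(-z))

# Example: three incoming connections with hypothetical weights
activation = artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.2, 0.1], bias=-0.3)
```

In a full network, many such neurons are arranged in layers, and the weights and biases are adjusted during training rather than set by hand as here.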
Despite these conceptual similarities between biological and artificial neurons, one should keep in mind that the complexity and robustness of biological neurons and their networks are far more advanced and powerful than those of artificial neurons. This is not just a question of the number of neurons and the number of dendritic connections per neuron — both far larger than in current ANNs — it also relates to issues such as:
- Power efficiency — the brain consumes significantly less power than ANNs. Research to overcome this problem ranges from biological networks using DNA and other molecules to “neuromorphic” electronic switches that try to mimic how neurons and synapses work.
- Learning from a very small set of training examples — to match the human brain’s ability to learn from few examples, ANNs would need an “intuitive” understanding of physical laws, psychology, causality, and the other rules that govern decision-making and behaviour.
Advancing AI with Neuroscience: Algorithms Modelled on Human Memory
Aha moment (stock image).
Credit: © denisismagilov / Fotolia
Humans have the ability to creatively combine their memories to solve problems and draw new insights, a process that depends on memories for specific events, known as episodic memory. But although episodic memory has been extensively studied in the past, current theories do not easily explain how people can use their episodic memories in this way. Results from a team of neuroscientists and artificial intelligence researchers at DeepMind, Otto von Guericke University Magdeburg and the German Center for Neurodegenerative Diseases (DZNE), published in the journal Neuron on September 19, 2018, provide insight into the way the human brain connects individual episodic memories to solve problems. Taking advantage of new developments in fMRI hardware, the research team applied algorithms that allowed them to look at hippocampal circuits in fine detail during a memory association task.
For example, imagine you see a woman driving a car on your street. The next day, you see a man driving exactly the same car on your street. This might trigger the memory of the woman you saw the day before, and you might reason that the pair live together, given that they share a car.
The researchers propose a novel brain mechanism that would allow retrieved memories to trigger the retrieval of further, related memories in this way. This mechanism allows the retrieval of multiple linked memories, which then enable the brain to create new kinds of insights like these. In common with standard theories of episodic memory, the authors posit that individual memories are stored as separate memory traces in a brain region called the hippocampus. “Our data showed that when the hippocampus retrieves a memory, it doesn’t just pass it to the rest of the brain,” says DeepMind’s Dharshan Kumaran. “Instead, it recirculates the activation back into the hippocampus, triggering the retrieval of other related memories. The results could be thought of as the best of both worlds: you preserve the ability to remember individual experiences by keeping them separate, while at the same time allowing related memories to be combined on the fly at the point of retrieval. This ability is useful for understanding how the different parts of a story fit together, something not possible if you just retrieve a single memory,” says Kumaran.
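As a toy caricature only (an assumption-laden sketch, not the authors’ hippocampal model), the recirculation idea can be illustrated as retrieval whose outputs are fed back in as new cues, so that one memory pulls in related ones. The episodes, the overlap-based retrieval rule, and the threshold are all invented for illustration.

```python
# Hypothetical episodes stored as separate memory traces (sets of elements)
memories = [
    {"woman", "car", "street"},   # episode 1: woman driving the car
    {"man", "car", "street"},     # episode 2: man driving the same car
    {"dog", "park"},              # unrelated episode
]

def retrieve(cue, memories, threshold=2):
    """Return every memory sharing at least `threshold` elements with the cue."""
    return [m for m in memories if len(m & cue) >= threshold]

def recirculate(cue, memories):
    """Retrieve with the cue, then feed each retrieved memory back in as a
    new cue -- crudely mimicking retrieved activity recirculating through
    the hippocampus and triggering further, related retrievals."""
    retrieved = []
    for m in retrieve(cue, memories):
        if m not in retrieved:
            retrieved.append(m)
            for linked in retrieve(m, memories):
                if linked not in retrieved:
                    retrieved.append(linked)
    return retrieved

# Seeing the man in the car retrieves that episode AND, via recirculation,
# the earlier episode of the woman in the same car -- linking them on the fly
linked = recirculate({"man", "car", "street"}, memories)
```

The individual traces stay separate in storage; only at retrieval time are related episodes combined, echoing the “best of both worlds” point above.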
The authors believe that their results could help AI to learn faster in the future. “While there are many domains where AI is superior, humans still have an advantage when tasks depend on the flexible use of episodic memory. However, if we can understand the mechanisms that allow people to do this, the hope is that we can replicate them within our AI systems, providing them with a much greater capacity for rapidly solving novel problems,” says Martin Chadwick, another researcher at DeepMind.
Creating a machine with a general intelligence similar to our own will require a wider range of technologies than the deep-learning systems that have powered many recent breakthroughs.
The research breakthroughs on episodic memory represent a major new component of brain-inspired AI. The brain is one integrated system, but different parts of the brain are responsible for different tasks. We have the hippocampus for episodic memory, the pre-frontal cortex for cognitive control, and so on. One can think of deep-learning as the brain-equivalent of our sensory cortices: our visual cortex or auditory cortex. But true intelligence is a lot more than just that. DeepMind, with its staff of top researchers representing a broad range of scientific expertise, is uniquely positioned to drive AI to the next level, slowly closing the gap between math-modelled and brain-modelled AI.