The Conscious Mind
To understand how the brain works, scientists are accumulating vast amounts of data on the structure and function of its approximately 86 billion neurons and the trillions of synapses connecting them. Tens of thousands of researchers are devoting massive amounts of time and energy to the question of what brains do. New brain-sensing technology, able to observe the behaviour of individual neurons, is enabling us to both describe and manipulate brain activity. And yet there is a growing conviction among neuroscientists that the complexity we face in reengineering the functionality of the human brain is touching the limits of present-day scientific knowledge and comprehension. It is hard to see where we should be going, apart from simply collecting more data or counting on the latest exciting experimental approach. As the German neuroscientist Olaf Sporns has put it: “Neuroscience still largely lacks organising principles or a theoretical framework for converting brain data into fundamental knowledge and understanding.” Despite the vast number of facts being accumulated with the support of AI and machine learning, our understanding of the brain appears to be approaching an impasse.

Because the brain and machine-learning systems use fundamentally different algorithms, each excels in ways the other fails miserably. For instance, the brain can process information efficiently even when the input is uncertain or conditions change unpredictably. In discovering how the brain works, it is not clear which brain processes might work well as machine-learning algorithms, or how to translate one into the other. Lessons can go both ways, from brain science to artificial intelligence and back, with AI research highlighting new questions for biological neuroscientists.
The AI-Science Approach
Despite the rapid rise of neural networks in computing, we still do not fully understand the specifics of how they work. How does each layer transform information from the previous layer to eventually determine that green boats and blue boats and sailboats and motorboats are all boats? How do networks stretch and contort to fit new data? At this point, we do not know the optimal way to tackle the ‘how-it-works’ questions: should we look at one artificial neuron at a time, or one layer at a time? Should we study Artificial Neural Networks (ANNs) the way computer scientists study algorithms, or the way physicists model physical systems?

In October last year, physicists, mathematicians, neuroscientists and computer scientists gathered in New York City to tackle some of these questions at a newly organised conference called ‘DeepMath’. The organisers hope to encourage multidisciplinary approaches to uncovering the theoretical underpinnings of deep networks in order to better understand these systems. A mathematical framework for how neural networks operate would provide clear benefits not just to machine-learning researchers, who could use it to decide precisely how much data to use or which parameters to set for the best performance, but also to neuroscientists, who hope that the inner workings of artificial networks will shed light on how biological networks produce the vast array of behaviours we see every day. For all the differences between artificial and biological cells, brain networks and machine networks are trying to solve a similar problem: taking an input, transforming it layer by layer, synapse by synapse, and creating the correct output.
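The layer-by-layer transformation described above can be sketched as a toy feedforward network. Everything here is illustrative: the layer sizes, the ReLU nonlinearity and the random, untrained weights are assumptions for the sketch, not details of any system mentioned in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity: negative values become zero.
    return np.maximum(0.0, x)

def layer(x, w, b):
    # One layer: a linear transform of the previous layer's output,
    # followed by the nonlinearity.
    return relu(w @ x + b)

# Toy 3-layer network with random (untrained) weights; a real network
# would learn these parameters from data.
sizes = [8, 16, 16, 4]          # input -> hidden -> hidden -> output
params = [(rng.normal(size=(m, n)), rng.normal(size=m))
          for n, m in zip(sizes, sizes[1:])]

x = rng.normal(size=sizes[0])   # stand-in for an input (e.g. image features)
for w, b in params:
    x = layer(x, w, b)          # each layer re-represents the previous one

print(x.shape)                  # (4,) - one score per output category
```

In a trained classifier, the final four numbers would be scores over categories (boat vs. not-boat, say); the ‘how-it-works’ question is what each intermediate re-representation is actually doing.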
The Neuroscience Approach
Neuroscientists hope that a better understanding of deep networks will in turn help them understand the brain as the ultimate network. It is easier to dissect an artificial network layer by layer than a real one, so researchers are using these networks as a sort of artificial organism to decipher how neurons process information about the world. To do this, they often train networks and animals on the same task, and then look for hints as to how each solves the problem.
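One common way of looking for such hints is representational similarity analysis: compute, for the network and for the recorded neurons, how dissimilar their responses to each pair of stimuli are, then correlate the two dissimilarity structures. The sketch below uses purely synthetic random data; the stimulus and unit counts are illustrative assumptions, not figures from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def rdm(responses):
    # Representational dissimilarity matrix: 1 minus the correlation
    # between the response patterns evoked by each pair of stimuli.
    return 1.0 - np.corrcoef(responses)

# Synthetic stand-ins: responses of 50 artificial units and 30 recorded
# neurons to the same 20 stimuli (rows = stimuli, columns = units).
net_responses   = rng.normal(size=(20, 50))
brain_responses = rng.normal(size=(20, 30))

net_rdm   = rdm(net_responses)
brain_rdm = rdm(brain_responses)

# Compare the two representational geometries by correlating the
# upper triangles of the two RDMs.
iu = np.triu_indices(20, k=1)
similarity = np.corrcoef(net_rdm[iu], brain_rdm[iu])[0, 1]
print(round(similarity, 3))
```

The appeal of this approach is that it never requires matching individual artificial neurons to biological ones; only the pairwise geometry of the responses is compared.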
Neuroscientists have made considerable progress toward understanding brain architecture and aspects of brain function. We can identify brain regions that respond to the environment, activate our senses, generate movements and emotions. But we do not know how different parts of the brain interact with and depend on each other. We do not understand how their interactions contribute to behaviour, perception, or memory. “Technology has made it easy for us to gather behemoth datasets, but I’m not sure understanding the brain has kept pace with the size of the datasets”, says Jeff Lichtman, a leader in brain mapping and Professor of Molecular and Cellular Biology at Harvard University. The secret to Lichtman’s progress with mapping the human brain is machine intelligence. Lichtman’s team, in collaboration with Google, is using deep networks to annotate the millions of images from brain slices their microscopes collect, an approach also referred to as connectomics. Each scan from an electron microscope is just a set of pixels. Human eyes easily recognize the boundaries of each neuron in the image (a neuron’s soma, axon, or dendrite) and with some effort can tell where a particular bit from one slice appears on the next slice. This kind of labelling and reconstruction is necessary to make sense of the vast datasets in connectomics. This effort has traditionally required armies of undergraduate students or citizen scientists to manually annotate all images. With Artificial Neural Networks (ANNs) trained on image recognition, an effort that took months or years to complete is now carried out in a matter of hours or days. Despite this major step forward, scientists still need to understand the relationship between those minute anatomical details and the dynamical activity profiles of neurons—the patterns of electrical activity they generate—something the connectome data lacks.
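As a rough illustration of the labelling step described above, the sketch below runs classical connected-component labelling on a toy binary ‘slice’. The grid values and the 4-connectivity rule are assumptions for illustration; the deep networks used in real connectomics pipelines solve a far harder version of this problem on genuine electron-microscope imagery.

```python
from collections import deque

# Tiny synthetic "EM slice": 0 = background/membrane, 1 = cell interior.
SLICE = [
    [1, 1, 0, 0, 1],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [1, 0, 0, 0, 0],
    [1, 1, 0, 1, 1],
]

def label_regions(grid):
    """Classical connected-component labelling (4-connectivity).

    Each connected patch of 1s gets its own integer label, mimicking
    the step of deciding which pixels belong to which cell profile.
    """
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    n = 0
    for y in range(h):
        for x in range(w):
            if grid[y][x] and not labels[y][x]:
                n += 1                       # start a new region
                labels[y][x] = n
                queue = deque([(y, x)])
                while queue:                 # flood-fill the region
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and grid[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = n
                            queue.append((ny, nx))
    return labels, n

labels, n_regions = label_regions(SLICE)
print(n_regions)   # 4 candidate cell cross-sections in this toy slice
```

Real pipelines must additionally link each labelled profile to the matching profile on the next slice, and handle imaging noise and ambiguous membranes, which is why trained networks replaced hand-coded rules like this one.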
Are we reaching a ‘dead end’ of scientific understanding?
Whatever approach is chosen, both AI and neuroscience are confronted with the question of whether the complexity of the human brain in general, and its source of intelligence in particular, can ever be decoded with present-day science. How brain connections relate to human behaviour is an active area of research, and possibly a task we are decades away from resolving. Why are artificial neural networks so simple compared to real brains? Could we not improve their performance simply by making them more similar to the architecture of a real brain? The barrier to following this path is complexity. Individual biological neurons are extremely complicated: they have independent dendritic compartments and many different input-output channels. A single neuron might itself be a network.

“Maybe there’s something fundamental about the idea that no machine can have an output more sophisticated than itself,” Lichtman said. “What a car does is trivial compared to its engineering. What a human brain does is trivial compared to its engineering. We have this false belief there’s nothing in the universe that humans can’t understand because we have infinite intelligence. I think the word ‘understanding’ has to undergo an evolution,” Lichtman said in a recent interview. Most of us know what we mean when we say, ‘I understand something.’ It makes sense to us. We can hold the idea in our heads. We can explain it with language. But if I asked, ‘Do you understand New York City?’ you would probably respond, ‘What do you mean?’ If you cannot understand New York City, it is not because you cannot get access to the data. It is because so much information is correlated at the same time. That is what is happening in the human brain. There are millions of things happening simultaneously among different types of cells, neuromodulators, genetic components, things from the outside.
There is no point in saying, ‘I now understand the brain’, just as you would not say, ‘I now get New York City.’
There are several theoretical approaches to brain function, including to one of the most mysterious things the human brain can do – producing consciousness. But so far none of these theories is widely accepted, and none has yet passed the decisive test of experimental investigation. It can be argued that there is no possible single theory of brain function, because a brain is not a single thing. The implications of machine intelligence for the process of doing science, and for the philosophy of science, could be immense. With predictions obtained by methods that no human can understand, can we deny that machines might have better knowledge? If prediction is in fact one of the primary goals of science, how should we modify the scientific method and the algorithms that for centuries have allowed us to identify errors and correct them? If we give up on understanding the complexity of the human brain, is there a point in pursuing scientific knowledge as we know it?

We tend to assume that our perceptions – sights, sounds, textures, tastes – are an accurate portrayal of the real world. Not so, says Donald D. Hoffman, a professor of cognitive science at the University of California, Irvine. Hoffman has spent the past three decades studying perception, artificial intelligence, evolutionary game theory and the brain, and his conclusion is a dramatic one:
The world presented to us by our perceptions is nothing like reality
So far, we lack the science needed to tackle the ‘reality’ of decoding the human brain. While scientific progress and the innovation of new products and services continue at an ever-accelerating speed, we may reach the point where the human capacity to absorb change requires a new evolutionary mindset to unlock the mysteries of the brain.