AI and the Mystery of the Human Brain

Posted by Peter Rudin on 30 June 2017 in Essay

Picture Credit: Randy Glasbergen

Introduction

In the mid-1940s, Alan Turing, John von Neumann and a few other brilliant people drew up the basic blueprint of the computer age. They conceived a general-purpose machine based on a processing unit made up of specialized subunits and registers, which would operate on stored instructions and data. Later inventions—transistors, integrated circuits, solid-state memory—have advanced this concept into one of the greatest tools ever created by mankind.

Now, as Moore’s Law (the doubling of semiconductor performance roughly every 18 months) seems to be stuttering, two technical advances dominate discussions of computing’s future. One centers on quantum computers and computing power far beyond anything that has been possible so far. The other, possibly the more interesting vision, describes machines that have something like human cognition and consciousness.

One might think that AI has already reached that level, since most AI applications are built on “neural network” software said to mimic the human brain. While it is true that today’s AI techniques reference neuroscience, they use an overly simplified neuron model, one that omits essential features of real neurons. Although machine-learning techniques such as deep neural networks have recently made impressive gains, they are still far from matching human intelligence or modeling features such as consciousness and creativity. Trying to create machine consciousness may turn out to be the way we finally begin to understand this most mysterious human attribute. Neuroscience ‘software’ in combination with neuromorphic systems ‘hardware’ could finally unlock the secret of human intelligence.
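To make the simplification concrete: the “neuron” in today’s deep neural networks is nothing more than a weighted sum of inputs passed through a fixed nonlinearity. A minimal sketch (the weights, inputs and bias below are purely illustrative):

```python
def artificial_neuron(inputs, weights, bias):
    """The simplified 'neuron' used in deep learning: a weighted sum
    of inputs followed by a fixed nonlinearity (here, ReLU).
    Real neurons add dendritic computation, spike timing, and
    adaptation -- all of which this model omits."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, activation)  # ReLU: pass positive activation, else 0

# Example with three inputs and arbitrary illustrative weights
print(artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.2, 0.1], bias=0.05))
```

Everything a deep network does is built from millions of copies of this one operation, which is exactly why its gap to biological neurons matters.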

Neuroscience on the road to uncovering the mysteries of the human brain

When we talk about the extraordinary capabilities of the human brain, we are usually referring to the area called the neocortex. The neocortex is a deeply folded sheet some 2 millimeters thick that, if laid out flat, would be about as big as a large dinner napkin. In humans, it takes up about 75 percent of the brain’s volume. At birth, the neocortex knows almost nothing; it learns through experience, and everything we learn about the world is stored there. It learns what objects are, where they are in the world, and how they behave. The neocortex also generates motor commands, so when you read or write software it is the neocortex controlling these behaviors. Language, too, is created and understood by the neocortex.

For most people, understanding the world in four dimensions already requires a lot of imagination. But a new study has discovered structures in the brain with up to eleven dimensions – ground-breaking work that is beginning to reveal the brain’s deepest architectural secrets. Using algebraic topology in a way it has never been used before in neuroscience, a team from the Blue Brain Project has uncovered a universe of multi-dimensional geometrical structures and spaces within the neural networks of the brain. The research shows that these structures arise when a group of neurons forms a clique: each neuron connects to every other neuron in the group in a very specific way that generates a precise geometric object. The more neurons there are in a clique, the higher the dimension of the geometric object. “We found a world that we had never imagined,” says neuroscientist Henry Markram, director of the Blue Brain Project and professor at the EPFL in Lausanne, Switzerland. “There are tens of millions of these objects even in a small part of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.” Markram suggests this may explain why it has been so hard to understand the brain. “The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly.”
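The counting behind these dimensions is straightforward: a clique of n mutually connected neurons corresponds to a geometric object (a simplex) of dimension n − 1. A brute-force sketch on a toy graph illustrates the idea; the actual study worked with directed cliques in far larger, simulated networks:

```python
from itertools import combinations

def clique_dimensions(neurons, synapses):
    """Enumerate cliques in a toy neural graph by brute force.
    A clique of n mutually connected neurons corresponds to a simplex
    of dimension n - 1 -- the counting used by the Blue Brain team.
    Connectivity here is undirected for simplicity; the actual study
    used directed cliques."""
    connected = {frozenset(edge) for edge in synapses}
    dims = {}
    for size in range(2, len(neurons) + 1):
        for group in combinations(neurons, size):
            if all(frozenset(pair) in connected for pair in combinations(group, 2)):
                dims[group] = size - 1  # n neurons -> (n-1)-dimensional object
    return dims

# Toy network: neurons 0-2 fully interconnected, neuron 3 attached to 2.
# The triangle (0, 1, 2) shows up as a 2-dimensional object.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
print(clique_dimensions(range(4), edges))
```

In a real cortical network the number of such cliques explodes combinatorially, which is why the team found tens of millions of these objects even in small patches of simulated tissue.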

When the researchers presented virtual brain tissue with a stimulus, cliques of progressively higher dimensions assembled momentarily to enclose high-dimensional holes that the researchers refer to as cavities. The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner. It is as if the brain reacts to a stimulus by progressively building a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc. The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates. The big question these researchers are asking now is whether the complexity of tasks we can perform depends on the complexity of the multi-dimensional “sandcastles” the brain can build. Neuroscience has also been struggling to find out where the brain stores its memories; one answer, Markram speculates, is that they may be ‘hiding’ in these high-dimensional cavities.

From binary coded computers to neuromorphic systems

A key feature of conventional computers is the physical separation of memory (storage) from logic (information processing). The brain makes no such distinction. Computation and data storage are accomplished together locally in a vast network of roughly 100 billion neural cells (neurons) and more than 100 trillion connections (synapses).
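A back-of-the-envelope calculation from these numbers shows why the separation matters at brain scale (the one-32-bit-weight-per-synapse figure is an illustrative assumption, not a biological fact):

```python
# Figures from the text above
neurons = 100e9       # ~100 billion neurons
synapses = 100e12     # ~100 trillion synapses

# Average fan-out per neuron
print(synapses / neurons)                   # ~1000 connections per neuron

# Assumption for illustration: one 32-bit weight stored per synapse
bytes_per_weight = 4
print(synapses * bytes_per_weight / 1e12)   # terabytes just for the weights
```

Shuttling hundreds of terabytes of state between separate memory and processing units is exactly the bottleneck a co-located, brain-like architecture avoids.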

How faithfully computers will have to mimic biological details to approach the brain’s level of performance remains an open question. But today’s brain-inspired, neuromorphic systems will be important research tools for answering it. If we could create computational machines with the energy efficiency of the brain, it would be a giant step forward. Intelligent mobile applications would rely less on cloud access because of the ‘brain-like’ intelligence built into the smartphone itself. The same technology could also lead to low-power devices that support our senses, deliver drugs, and emulate nerve signals to compensate for organ damage or paralysis.

The notion of building computers by making transistors operate more like neurons began in the 1980s with Caltech professor Carver Mead. One of the core arguments behind what Mead came to call “neuromorphic” computing was that semiconductor devices can, when operated in a certain mode, follow the same physical rules as neurons do and that this analog behavior could be used to compute with a high level of energy efficiency.

Several research groups, such as those of Giacomo Indiveri at the Institute of Neuroinformatics of ETH Zurich and Kwabena Boahen at Stanford, have followed Mead’s approach and successfully implemented elements of biological neocortical networks. The trick is to operate transistors below their turn-on threshold with extremely low currents, creating analog circuits that mimic neural behavior while consuming very little energy. The TrueNorth chip, developed by Dharmendra Modha and his colleagues at the IBM Research laboratory in Almaden, Calif., abandons the use of microprocessors as computational units. It is a truly neuromorphic computing system, with computation and memory intertwined, implementing a very specific neuron model.
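The neuron models such chips implement in hardware typically belong to the leaky integrate-and-fire family: the membrane potential integrates incoming current, leaks toward rest, and emits a spike when it crosses a threshold. A minimal software sketch (the parameters are illustrative, not those of TrueNorth or any other chip):

```python
def lif_neuron(input_current, threshold=1.0, leak=0.1, dt=1.0):
    """Leaky integrate-and-fire neuron: integrates input current,
    leaks toward rest, and spikes (then resets) at the threshold.
    Parameters are illustrative, not those of any particular chip."""
    v = 0.0
    spikes = []
    for t, i in enumerate(input_current):
        v = v + dt * (i - leak * v)   # integrate input, leak toward 0
        if v >= threshold:
            spikes.append(t)          # emit a spike...
            v = 0.0                   # ...and reset the potential
    return spikes

# Constant weak input: the neuron charges up and fires periodically
print(lif_neuron([0.3] * 20))  # -> [3, 7, 11, 15, 19]
```

Because such a neuron is silent except at spike times, a hardware implementation consumes energy only when events occur, which is the root of the energy efficiency claimed for neuromorphic systems.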

The furthest from conventional computing and closest to the biological brain is the BrainScaleS system, which Karlheinz Meier, professor of experimental physics at Heidelberg University in Germany, and his colleagues have developed for the Human Brain Project in collaboration with the University of Manchester (UK). There, a team led by the computer scientist Prof. Steve Furber has developed a complementary system to BrainScaleS called SpiNNaker.

The Heidelberg BrainScaleS system goes beyond the paradigms of an Alan Turing machine or the computer architecture defined by John von Neumann. It neither executes a sequence of instructions nor is it constructed as a system of physically separated computing and memory units. It is rather a direct, silicon-based image of the neuronal networks found in nature, realizing cells and inter-cell communications by means of modern analog and digital microelectronics. To further support and enhance this research effort, the European Institute for Neuromorphic Computing was established at the University of Heidelberg in May 2017. “We are just getting started on the road toward usable and useful neuromorphic systems. If we succeed, we won’t just be able to build powerful computing systems; we may even gain insights about our own brains,” states Karlheinz Meier.

Conclusion

Considering the recent progress of ground-breaking research in neuroscience and neuromorphic computing, it will be interesting to observe how AI technology companies such as Google, Facebook, Amazon, Apple and IBM adapt their business models to the potential of truly intelligent products and services. The temporary formation of multidimensional neural structures discovered by Markram’s Blue Brain Project team could significantly influence the development of new deep-learning algorithms for knowledge generation and decision making. Mass production of neuromorphic chips could solve the energy-consumption problem of our smartphones and provide the local functionality of a truly personal assistant. Hybrid system architectures, combining mobile neuromorphic computing with conventional cloud-based knowledge access, could provide the basis for truly intelligent services. In fact, we already see a trend in which the AI technology companies from Silicon Valley take control of the entire value chain, offering smartphones in combination with cloud-based services. While European institutions contribute significantly to the research of human intelligence, the question looms as to how they can recover these investments in a market that so far is dominated by U.S. technology giants.
