Picture Credit: Infosec Institute
According to Wikipedia, reverse engineering, also called back engineering, is the process of extracting knowledge or design information from anything man-made and copying or reproducing it based on the extracted information. The process often involves disassembling something (a mechanical device, electronic component, computer program, or biological, chemical, or organic matter) and analyzing its components and workings in detail.
The human brain is a product of millions of years of evolution. Human evolution began some 7-8 million years ago in the African savannah, where an upright posture and walking were significant advantages. The main incentive for refining manual actions and tool making may have been the gathering of food. Thanks to more refined hunting methods, our ancestors were able to eat more meat, adding calories, protein, and essential fatty acids to their diet. The nervous system needs a disproportionately high level of energy, and better food quality was a basic condition for the evolution of the human brain. The size of the human brain has tripled over the past 3.5 million years, growing from an average of 450 cm³ to today's average of 1,350 cm³. A genetic change in the system controlling gene expression may have occurred about 200,000 years ago, influencing the development of our nervous system, sensorimotor function, and the ability to learn motor processes.
The human brain is the most elementary biological component of our socio-economic system, yet very little is known about how it functions. Critical voices consider the attempt to reverse engineer the human brain the wrong approach. One example of this view involves the issue of consciousness: many researchers agree that consciousness exists, yet we do not understand what it is or how it is formed. Some researchers therefore believe it makes no sense to reverse engineer something we do not understand.
Despite these criticisms, heavily subsidized efforts are in full swing to crack the neural code and find out how our brain functions. Billions of US dollars and the engagement of over 100,000 researchers across the globe are part of an effort largely directed towards health issues and the cure of brain diseases such as Alzheimer's, epilepsy, or Parkinson's. The methods and tools range from invasive brain experiments on animals, typically rodents, to research on humans with brain injuries or brain disorders who consent to such invasive experiments. Building so-called neuromorphic systems, silicon hardware that simulates the spiking of biological brains, as accomplished with IBM's TrueNorth chip, represents another approach to solving the mystery of the human brain through reverse engineering.
Measuring brain activity
Recent progress in neuroscience is largely attributed to the invention and continuous improvement of functional magnetic resonance imaging (fMRI), first introduced in 1991 to conduct non-invasive research on human brains. fMRI measures brain activity by detecting changes associated with blood flow, relying on the fact that cerebral blood flow and neuronal activation are coupled: when an area of the brain is in use, blood flow to that region increases. Observable behavior can thus be correlated with activity in the brain. This is a large-scale undertaking, as the human brain contains an estimated 86 billion neurons and 100 trillion synapses connecting them.
A new technology called optogenetics is revolutionizing how we study the brain at the level of single neurons. Correlating brain activity to behavior at such a fine level is attractive because of its innately higher resolution, with single cells potentially offering more information than entire brain regions. At its simplest, optogenetics involves firing neurons with light, giving neuroscientists new and exciting control over when neurons fire, and which neurons fire.
Discussing brain functions typically relates to a) specific brain regions handling specific tasks such as learning, speaking, or memorizing, and b) the 'behavior' of the neurons performing those tasks. The tasks initiated or controlled by neurons are carried out through 'spiking' and 'encoding', electrochemical processes that are difficult to grasp with our intuitive understanding of physics.
The brain has many functions, from controlling our muscles and voice to interpreting the sights, sounds, and smells that surround us, and each kind of problem necessitates its own kind of code. In the visual system, for example, rays of light entering the retina are promptly translated into spikes sent out on the optic nerve, a bundle of about one million output wires, called axons, that runs from the eye to the rest of the brain. Literally everything that you see is based on these spikes, each retinal neuron firing at a different rate depending on the nature of the stimulus, yielding several megabytes of visual information per second. Awake or asleep, the human brain as a whole continuously 'fires' neural spikes, perhaps one trillion per second. To a large degree, to decipher the brain is to infer the meaning of its spikes. But to do that we must learn how to look at sets of corresponding neurons, measure how they are firing, and reverse-engineer their message.
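The idea of reverse-engineering a rate code can be made concrete with a toy sketch. Everything in it is invented for illustration: three hypothetical neurons whose mean firing rates depend on which of two made-up stimuli is present, and a decoder that picks the stimulus under which the observed spike counts are most likely (assuming independent Poisson-firing neurons, a common simplification, not a claim about real retinal coding).

```python
import math
import random

random.seed(0)

# Hypothetical tuning: each of three neurons fires at a different mean
# rate (spikes/second) depending on which of two stimuli is present.
RATES = {
    "light_on":  [50.0, 20.0, 5.0],
    "light_off": [5.0, 20.0, 50.0],
}

def simulate_counts(stimulus, window_s=1.0):
    """Draw Poisson spike counts for each neuron over a time window."""
    counts = []
    for rate in RATES[stimulus]:
        # Simple Poisson sampling via inversion (Knuth's method).
        threshold, k, p = math.exp(-rate * window_s), 0, 1.0
        while p > threshold:
            k += 1
            p *= random.random()
        counts.append(k - 1)
    return counts

def log_likelihood(counts, stimulus, window_s=1.0):
    """log P(counts | stimulus) under independent Poisson neurons."""
    ll = 0.0
    for n, rate in zip(counts, RATES[stimulus]):
        lam = rate * window_s
        ll += n * math.log(lam) - lam - math.lgamma(n + 1)
    return ll

def decode(counts):
    """'Reverse-engineer' the message: pick the most likely stimulus."""
    return max(RATES, key=lambda s: log_likelihood(counts, s))

counts = simulate_counts("light_on")
print(counts, "->", decode(counts))
```

Real decoding works on the same principle but with thousands of simultaneously recorded neurons and far richer stimulus models.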
Brain models related to intelligence
While there is no precise definition of intelligence, it is generally accepted that economic well-being and progress are closely tied to the human capacity to perform tasks such as analyzing, planning, problem solving, and decision making. To perform these tasks, learning, memorizing, and communicating are fundamental. The brain regions called the neocortex and the hippocampus have been identified as the areas where 'intelligence' is seated. To comprehend biological intelligence, a number of thought models have been developed with the aim of providing input for the design of artificial intelligence systems.
In his book 'How to Create a Mind', published in 2012, Ray Kurzweil, inventor, futurist, and director of engineering at Google, draws an analogy between pattern recognition as used in image recognition and what he views as the hierarchical learning architecture of the neocortex. This hypothesis has drawn criticism from neuroscientists and psychologists alike, who consider Kurzweil's view much too simplistic. In practical terms, Google's automatic mail response system 'Smart Reply' is based on a hierarchical learning model enhanced with machine learning algorithms that improve the service through repeated usage. User acceptance will be an indicator of how effective this approach is and whether Kurzweil's hypothesis holds up.
Another model for analyzing intelligence is the so-called 'Complementary Learning Systems (CLS)' theory, first introduced in 1992 and partially revised in July 2016 in a paper published by McClelland, Professor of Psychology at Stanford, together with Dharshan Kumaran and Demis Hassabis, both of Google DeepMind, the company whose AlphaGo software beat the world's best Go player. CLS theory assumes that intelligence rests on two learning systems, one located in the neocortex and the other in the hippocampus. The first gradually acquires structured knowledge, while the second quickly learns the specifics of individual experiences. Together they create the memory we rely on to replay information acquired in the past. The core principles of CLS theory are broadly relevant not only to understanding the organization of memory in the biological brain but also to designing artificial machine learning systems. Demis Hassabis, CEO of Google DeepMind, has repeatedly stated that research in neuroscience will push machine learning research towards Artificial General Intelligence (AGI), breaking the limits of current narrow AI applications.
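The two-system idea behind CLS theory can be caricatured in a few lines of code. This is a deliberately crude sketch, not the authors' model: a "hippocampus" that memorizes individual episodes after a single exposure, and a "neocortex" that learns slow statistics with a small learning rate, with replay of the stored episodes gradually transferring knowledge from the fast system to the slow one.

```python
class FastEpisodicMemory:
    """Stores each experience verbatim after a single exposure."""
    def __init__(self):
        self.episodes = []

    def store(self, episode):
        self.episodes.append(episode)


class SlowStatisticalLearner:
    """Tracks a running estimate of a quantity with a small learning rate."""
    def __init__(self, lr=0.1):
        self.lr = lr
        self.estimate = 0.0

    def learn(self, value):
        # Small, interleaved updates avoid overwriting prior knowledge.
        self.estimate += self.lr * (value - self.estimate)


hippocampus = FastEpisodicMemory()
neocortex = SlowStatisticalLearner()

# Experience a few events once each; the fast system keeps them all.
for value in [10.0, 12.0, 8.0, 10.0]:
    hippocampus.store(value)

# "Replay": repeatedly feed stored episodes to the slow learner, whose
# estimate drifts toward their underlying average (10.0).
for _ in range(200):
    for value in hippocampus.episodes:
        neocortex.learn(value)

print(round(neocortex.estimate, 1))  # close to the mean of the episodes
```

The division of labor mirrors the theory's core claim: one-shot storage of specifics in one system, gradual extraction of structure in the other.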
Today’s neural networks, as applied in AI machine learning software, have little in common with biological neural networks. In the context of AI, neural networks provide the software architecture to process huge amounts of data concurrently, while mathematical techniques and algorithms extract ‘knowledge’ from this data, mimicking some limited functions of biological neurons. For example, millions of images of cats and dogs had to be processed as ‘learning data’ before Google could state with 95% probability that a given picture showed a cat or a dog. A three-year-old child can accomplish this in seconds once a parent has shown them what a real-life cat or dog looks like. With respect to image recognition, the human brain, consuming only about 20 watts of energy, is far more efficient than current artificial neural networks.
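How little an artificial "neuron" shares with a biological one is easy to see in a minimal sketch: a single logistic unit that weights its inputs, sums them, and squashes the result. The two features and the tiny "cat/dog" dataset below are entirely invented for illustration (think "ear pointiness" and "snout length"); the point is that even this trivially separable problem takes the unit many repeated passes over the data.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Invented toy data: (feature1, feature2) -> label (cat=1, dog=0).
data = [((0.9, 0.2), 1), ((0.8, 0.3), 1),   # "cats"
        ((0.2, 0.9), 0), ((0.3, 0.8), 0)]   # "dogs"

w = [0.0, 0.0]
b = 0.0
lr = 0.5

# Gradient descent on the cross-entropy loss: unlike a child, the unit
# needs thousands of exposures to separate even these four examples.
for _ in range(1000):
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - y
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

def predict(x1, x2):
    """Probability that an input is a 'cat' under the trained unit."""
    return sigmoid(w[0] * x1 + w[1] * x2 + b)

print(round(predict(0.85, 0.25), 2))  # high probability "cat"
```

Production systems stack millions of such units into deep networks, but the contrast with a biological neuron's electrochemical dynamics, and with the brain's 20-watt power budget, remains.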
Efforts to reverse engineer the human brain are being carried out by many research teams across the globe, pursuing different approaches. Most researchers believe that one day the neural code will be cracked, much as the structure of DNA, the molecule that carries the genetic instructions used in the growth, development, functioning, and reproduction of all known living organisms, was deciphered in 1953. However, there is wide disagreement about when this will happen: some think it will occur in 20 years, others in 70.
So-called ‘narrow’ AI applications, such as voice recognition, emotion sensing, translation, or decision-making support, fuel economic and social transition, disrupting conventional businesses and organizations. Cracking the neural code is envisioned by some as the arrival of ‘superintelligence’. The human species has a long history of inventing tools to secure its survival by intelligent means, which in turn raises intelligence to a new level. Creating intelligence with intelligence is nothing new; hence the term ‘superintelligence’ can be misleading, fostering fear of existential threats. The consequences of cracking the neural code affect our entire life, not just the faculty of intelligence. Once the code is cracked, we will be far better equipped to deal with the many problems humans still face with respect to health, energy, the environment, and education, to name just a few. We do not have to wait for AGI (artificial general intelligence) to roll back democracy or to conduct destructive AI-based warfare; this can be done today. Our focus should be on defining a socio-economic system that makes optimal use of advances in brain research and their implementation in intelligence-centered activities, to the benefit of all humans. Meanwhile we should prepare the legal and ethical foundation required to meet this objective.