Can Artificial Intelligence replicate Human Consciousness?

Posted by Peter Rudin on 24. March 2017 in Essay


Introduction

The debate among neuroscientists, AI engineers and philosophers over one of the ultimate scientific questions – the nature of human consciousness – is gaining momentum, largely due to accelerating progress in brain research and machine learning. When contemplating the impact of Artificial Intelligence (AI) on humans, the discussion is bound to touch on consciousness, an issue that has occupied modern philosophers since Descartes published his dualistic concept separating mind and matter in the early 17th century.

Tackling the problem of mind and brain, many prominent researchers today advocate a universe fully reducible to matter. That position seems reasonable in light of neuroscience’s advances, with brilliant images of brains and neurons generated by fMRI brain scanners for scientific analysis.

However, in modern physics the particles that make up a brain remain in many ways as mysterious as consciousness itself. After more than a century of profound exploration of the subatomic world, our best account of how matter behaves tells us little about what matter is.

In response to many unanswered questions about the future of AI, some of today's leading technologists and scientists, such as Stephen Hawking, Elon Musk and Bill Gates, have expressed concern over apocalyptic scenarios in which machines develop their own consciousness, and possibly motives, and seek to destroy us.

The issues involved in discussing the future of AI are complex and require an interdisciplinary approach. The following is an attempt to provide a brief overview for further debate.

The Neuroscience View

In March 2017 at the BRAIN Initiative meeting in Maryland, Dr. Christof Koch, the president of the Allen Institute for Brain Science based in Seattle, announced the discovery of three neurons with branches that extensively span both hemispheres of the brain. These neurons sit in a brain region called the ‘claustrum’, a mysterious thin sheet of cells that Koch believes is the seat of consciousness. These results are the latest to come out of a national, concerted effort to map the projections of individual neurons throughout the entire brain with a new 3D-imaging technique developed by the Allen Institute.

Since information processing in neurons is deeply rooted in their structure, scientists believe that building a map of these connections can eventually help us crack the neural code—that is, the electrochemical language in which neurons talk to one another.

The fact that the cells were found in the claustrum is perhaps not that surprising. Given its massive connections, the claustrum may be coordinating inputs and outputs like a “conductor of consciousness”. Bit by bit, the goal is to reconstruct the entire brain, says Koch. Other neuroscientists are more hesitant to link claustrum neurons to consciousness, but applaud Koch’s new imaging technique. “It’s quite admirable,” says Dr. Rafael Yuste of Columbia University in an interview with the journal Nature.

If the brain is a language, we’re still learning the alphabet, remarks Rafael Yuste. But every characterization of every single neuron brings us closer to identifying key components of neural networks that control our thoughts, feelings, behavior, and yes—maybe even consciousness.

The Philosophers’ View

The Australian philosopher and cognitive scientist David Chalmers recently warned that, by transferring human tasks to intelligent machines, we might create a world of enormous intelligence that lacks consciousness and subjective experience. As we delegate our cognitive competence to intelligent machines, machines become more human-like and humans become more machine-like, with both lacking consciousness.

The influential philosopher John Searle has discussed the issue of AI and consciousness by analogy in his famous but controversial “Chinese Room Argument”. Published in 1980, the argument holds that “syntax is not sufficient for semantics”: a computer can translate Chinese into another language without understanding what it is translating.

Reflecting on this, one might be inclined to ask: if a computer can’t be conscious, then how can a brain? After all, the brain is a purely physical object that works according to physical law. It even uses electrical activity to process information, just like a computer. Yet somehow we experience the world subjectively, from a first-person perspective, with inner, qualitative and ineffable sensations that are accessible only to us. Unlike digital computers, brains contain a host of analogue cellular and molecular processes, biochemical reactions and electrostatic forces, globally synchronized neurons firing at specific frequencies, and unique structural and functional connections with countless feedback loops.

One of the world’s most renowned philosophers, Daniel C. Dennett, professor at Tufts University, Massachusetts, has spent five decades thinking deeply and writing about consciousness. Where many worry that robots are becoming too human, he argues that humans have always been largely robotic. Our consciousness is the product of the interactions of billions of neurons that are all, as he puts it, “sort of robots”. Although Dennett accepts that machine superintelligence is logically possible, he worries about our “deeply embedded and generous” tendency to attribute far more understanding to intelligent systems than they possess.

In a 2015 essay co-written with Deb Roy, a professor at the Massachusetts Institute of Technology, Dennett compared our times with the Cambrian explosion, an era of extraordinary biological innovation that occurred half a billion years ago. One hypothesis holds that the world was suddenly flooded with light, forcing animal life to evolve rapidly or, in most cases, die. Employing the Cambrian explosion as a striking analogy, he suggests that the blinding light of transparency from digital technologies is having a similar effect on life today. “Every human institution, from marriage to the army to the government to the courts to corporations and banks, religions, every system of civilization is now in jeopardy because of a new transparency induced by the internet.” The “membranes” protecting these institutions have been permeated, and we are emerging into a world where it is near-impossible to keep secrets or preserve privacy. The effect on our psyche is severe enough that psychologists should enter the debate as well.

The Artificial Intelligence View

Within the AI community a distinction is made between Strong AI and Weak AI. Strong AI, by definition, should possess the full range of human cognitive abilities, including self-awareness, sentience and consciousness, since these are all features of human cognition. The Weak AI hypothesis, on the other hand, states that devices running digital computer programs can have no conscious states, no mind and no subjective awareness. Such AI systems cannot experience the world qualitatively, and although they may exhibit seemingly intelligent behavior, they are forever limited by the lack of a mind and consciousness.

AI and ‘machine learning’ pioneer Prof. Jürgen Schmidhuber, Scientific Director of the Dalle Molle Institute for Artificial Intelligence (IDSIA) in Manno, Switzerland, is convinced that the breakthrough needed to endow AI machines with consciousness has already happened. The simplest conceptual description of consciousness might be that it is an awareness of the self within the context of the world. But without an understanding of the underlying mechanism, consciousness remains a mystery. This, according to Schmidhuber, is partly why neuroscientists have successfully interjected themselves into the ongoing conversation about consciousness by pointing to physical phenomena within the brain. Strong AI systems try to avoid ‘pain’ and to maximize ‘pleasure’ because they have a built-in utility function or reward function that they seek to maximize. Humans also have such a reward function, built in from birth. According to Schmidhuber, the behavior of a strong AI system is at least qualitatively similar to what we see in higher animals, or in humans.
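The reward-function idea Schmidhuber describes can be illustrated with a minimal reinforcement-learning sketch. The scenario below (a two-armed bandit with ‘pleasure’ as +1 and ‘pain’ as -1) is an illustrative assumption, not any real system: an agent that starts with no knowledge ends up preferring the action that maximizes its built-in reward.

```python
import random

# Illustrative assumption: each 'arm' yields 'pleasure' (+1) with the
# probability below, otherwise 'pain' (-1).
ARM_REWARD_PROB = [0.2, 0.8]

def pull(arm):
    """Return +1 ('pleasure') or -1 ('pain') for the chosen arm."""
    return 1 if random.random() < ARM_REWARD_PROB[arm] else -1

def run_agent(steps=5000, epsilon=0.1, seed=0):
    random.seed(seed)
    estimates = [0.0, 0.0]  # running estimate of each arm's expected reward
    counts = [0, 0]         # how often each arm was chosen
    for _ in range(steps):
        # Explore occasionally; otherwise exploit the best-known arm.
        if random.random() < epsilon:
            arm = random.randrange(2)
        else:
            arm = 0 if estimates[0] >= estimates[1] else 1
        reward = pull(arm)
        counts[arm] += 1
        # Incremental mean update of the reward estimate for this arm.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

estimates, counts = run_agent()
# The agent ends up preferring the arm with the higher expected reward.
```

Nothing here resembles consciousness; the point is only that reward maximization alone already produces behavior that looks purposeful, which is the qualitative similarity Schmidhuber refers to.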

Another AI pioneer, Geoffrey Hinton, a cognitive psychologist and computer scientist best known for his long-standing work on artificial neural networks and an important figure in the deep learning community, is convinced that we now possess AI systems that have intuition. Hinton divides his time between working on AI at Google and his professorship at the University of Toronto.

In an interview published in Forbes in June 2016, he stated that in his view we have crossed a very important threshold. Until fairly recently, most people involved in AI were doing a kind of AI inspired by logic; the paradigm for intelligence was logical reasoning. That has completely changed with the computational power of big neural nets. Instead of programming the system, you simply get it to learn everything.
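Hinton's contrast between programming rules and letting a system learn can be made concrete with a toy sketch: a single artificial neuron learns the logical OR function purely from examples. Nowhere in the code is a rule for OR written down; the behavior emerges from repeated weight adjustments (a deliberately minimal illustration, not representative of modern deep nets).

```python
# Learning instead of programming: a single neuron learns OR from examples.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]  # connection weights, initially knowing nothing
b = 0.0         # bias term

def predict(x):
    """Fire (1) if the weighted input exceeds the threshold, else 0."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge the weights toward each misclassified
# example, repeated over several passes through the data.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

# After training, the neuron reproduces OR without ever being told the rule.
```

This is the shift Hinton describes in miniature: the programmer specifies only the learning procedure and the examples, and the competence itself is acquired, not coded.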

The reason there is so much interest in neural nets, according to Hinton, is not just the theory; it is that they work. The applications in speech recognition, object recognition and machine translation are all very impressive. The success of Google DeepMind’s AlphaGo in beating world champion Lee Sedol at the game of Go in March 2016 signaled a major advance in AI systems. According to Hinton, one interesting thing about the game of Go is that it was always held up as an example of something computers would not be able to do because it requires intuition: you need to be able to look at a board and decide that a move is good just because it feels right. That is one big thing differentiating neural networks from previous generations of logic-based AI: they exhibit something akin to intuition rather than explicit, rule-based reasoning.

Conclusion

That AI software would one day beat the best Go players was anticipated, but most AI experts believed it would take another ten years. AI is on an exponential trajectory, and it is vital to our society that we discuss its consequences across academic disciplines and business sectors. Digital transformation and business disruption have become widely publicized economic concerns, with all sorts of government and business initiatives proposing how to tackle the future.

The impact of strong AI, however, goes far beyond economic issues, as it questions the future role of humans on this planet. Elon Musk, the head of Tesla and other visionary ventures, recently stated that humans have to merge with machines in order to stay relevant. It is up to us humans to define and decide our future role in life. Today our will provides the key to differentiating us from machines. However, we are also aware that humans are potentially ‘corruptible’, not just with money but also with increased comfort that reduces the effort required to perform a task. Human consciousness is the source of reflection on how much of our intelligence should be transferred to machines. We have to decide how to preserve our own consciousness and identity before we delegate these values to machines.
