The exponential growth of data, sensors and computer performance, coupled with the development of sophisticated neural networks and algorithms, results in a steady augmentation of human intelligence with two possible consequences:
- It increases our dependence on AI-based tools, potentially corrupting our integrity without our consciously realizing it.
- It supports the creation of new knowledge to solve problems beyond the constraints of existing human knowledge.
As a result, humans might develop a sense of uselessness and a loss of control over their own fate. This potential erosion of human integrity does not happen in one gigantic leap; change is nevertheless accelerating, and there are symptoms of socio-economic unrest that need to be discussed.
The case of Google DeepMind’s ‘AlphaGo Zero’
Artificial intelligence research has made rapid progress in a wide variety of domains, from speech recognition and image classification to genomics and drug research. In many cases, these are specialist systems that leverage enormous amounts of human expertise and data. However, for some problems this human knowledge may be too expensive, too unreliable or simply unavailable. As a result, a long-standing ambition of AI research is to bypass this step, creating algorithms that achieve superhuman performance in the most challenging domains with no human input.
To prove this point, we can look back to Google DeepMind’s success in beating the world’s best Go player in March 2016. In late 2017, as a follow-up to this historic event, DeepMind announced that it had created a program called ‘AlphaGo Zero’ that had learned to play Go without human intervention. It did so using a novel form of reinforcement learning in which ‘AlphaGo Zero’ becomes its own teacher. Previous versions of AlphaGo were initially trained on thousands of human amateur and professional games to learn how to play Go. ‘AlphaGo Zero’ skips this step and learns simply by playing games against itself, starting from completely random play. In doing so, it quickly surpassed the human level of play and defeated the previously published, champion-defeating version of AlphaGo by 100 games to 0.
Over the course of millions of AlphaGo vs AlphaGo games played by the computer, the system progressively learned the game of Go from scratch, accumulating thousands of years of human knowledge during a period of just a few days.
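The self-play idea can be illustrated with a toy sketch. This is emphatically not DeepMind’s actual method, which combines a deep neural network with Monte Carlo tree search; it is a minimal tabular analogue, with all names and parameters chosen for illustration. A Q-learning agent teaches itself the simple game of Nim purely by playing both sides against itself, starting from random play:

```python
import random
from collections import defaultdict

# Toy self-play reinforcement learning in the spirit of "becoming your
# own teacher". Game: Nim with 12 sticks; players alternately take 1-3
# sticks; whoever takes the last stick wins.
N_STICKS = 12
ACTIONS = (1, 2, 3)
ALPHA, EPSILON, EPISODES = 0.5, 0.2, 20000

# Q[(sticks, take)] = estimated win probability for the player to move
Q = defaultdict(float)

def legal(sticks):
    return [a for a in ACTIONS if a <= sticks]

def train(seed=0):
    rng = random.Random(seed)
    for _ in range(EPISODES):
        sticks = N_STICKS
        while sticks > 0:
            moves = legal(sticks)
            # epsilon-greedy: mostly play the current best move,
            # sometimes explore a random one
            if rng.random() < EPSILON:
                a = rng.choice(moves)
            else:
                a = max(moves, key=lambda m: Q[(sticks, m)])
            nxt = sticks - a
            if nxt == 0:
                target = 1.0  # took the last stick: the mover wins
            else:
                # the opponent moves next, so the mover's value is
                # one minus the opponent's best value (negamax update)
                target = 1.0 - max(Q[(nxt, m)] for m in legal(nxt))
            Q[(sticks, a)] += ALPHA * (target - Q[(sticks, a)])
            sticks = nxt  # the other player now faces this position

def best_move(sticks):
    return max(legal(sticks), key=lambda m: Q[(sticks, m)])

train()
```

With no human examples at all, the agent rediscovers the known optimal strategy for this game (leave the opponent a multiple of four sticks), which is the same pattern in miniature: knowledge accumulated entirely from self-play.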
Disruption and the potential of unemployment
Fears of machines pushing people out of the job market are nothing new, and in the past such fears proved unfounded. As computers take over more routine cognitive jobs, new creative jobs for humans will continue to appear. Many of these new jobs will probably depend on cooperation rather than competition between humans and AI, and human-AI teams will likely prove superior not just to humans but also to computers working on their own. However, most of the new jobs will demand high levels of expertise and ingenuity, and therefore may not provide an answer to the problem of unemployed unskilled laborers. Moreover, as AI continues to improve, even jobs that demand high intelligence and creativity might gradually disappear.

All of this leads to one important conclusion: the AI revolution will not consist of a single watershed event after which the job market settles into a new equilibrium. Rather, it will be a cascade of ever bigger disruptions. Old jobs will disappear and new jobs will emerge, but the new jobs will also rapidly change and vanish. People will need to retrain and reinvent themselves not just once, but many times. Just as governments established mass education systems for young people in the 20th century, in the 21st century they will need to establish massive re-education systems for adults. But will that be enough? As job volatility increases, will people beyond 50 be able to cope? Intuitively, the answer is ‘no’. A class of people might emerge that feels useless, lacking the mental stamina to keep learning new skills.
Indications of mental health problems
From the medical front, there are signs that we are in the midst of an emerging crisis, one that has not yet been recognized in its full breadth, even though it lurks just beneath the surface of our casual conversations and the undercurrents of our news feeds. While the many aspects of cognition, such as memory, attention, perception and emotional regulation, appear intact on the surface, there are indications that their dysfunctions are manifestations of a fundamental crisis. The prefrontal cortex, the most evolved region of the human brain, supports our cognition; conversely, its dysfunction has been associated with symptoms of virtually every neuropsychiatric condition, from depression to ADHD. Over thousands of years, human cognition has evolved to support our survival in an increasingly complex and competitive environment. According to Adam Gazzaley, Professor of Neurology at the University of California, San Francisco, our brains simply have not kept pace with the dramatic and rapid changes brought by the introduction and ubiquity of information technology. In numerous laboratory tests, scientists have documented the negative influence of information overload on attention, perception, memory, decision making and emotional regulation. Real-world experience shows strong associations between the use of technology and rising rates of depression, anxiety, suicide and attention deficits, especially in children. “We need better brains to manage the deluge of information we consume on the internet. We need to elevate the maturity of our consciousness to thrive in this new environment and enhance cognition,” writes Gazzaley in a recent essay.
The consequences of transferring authority to machines
The increasing efficiency of algorithms will continue to shift more and more authority from individual humans to AI machines. We might willingly give up ever more authority over our lives, because we learn from experience to trust the algorithms more than our own feelings, eventually losing the ability to make many decisions for ourselves. Within a mere two decades, billions of people have come to entrust Google’s search algorithm with one of the most important tasks of all: finding relevant and trustworthy information. As we rely more on Google for answers, our ability to locate information independently diminishes; already today, “truth” is defined by the top results of a Google search. Humans are used to experiencing life as an endless string of decisions. But what will happen as we rely on AI to make ever more of them for us? Today we trust Netflix to recommend movies and Spotify to pick music we like. But why should AI’s helpfulness stop there?
The Australian philosopher and cognitive scientist David Chalmers recently warned that, by transferring human tasks to intelligent machines, we might create a world of enormous intelligence that lacks consciousness and subjective experience. We delegate our cognitive competence to intelligent machines, with the result that machines become more human-like and humans become more machine-like, both lacking consciousness.
Will quantum physics help humans manage AI?
Ever since the famous mathematician and physicist Sir Roger Penrose published his best-selling book “The Emperor’s New Mind” in 1989 and its sequel “Shadows of the Mind” in 1994, an intense discussion has been going on in philosophical circles, and especially in the AI community, about the role quantum physics, quantum biology and quantum theory in general could play in AI, and about whether quantum theory can improve our understanding of how our brains work. Penrose essentially believes that present-day computers, and hence AI, can never reach the highest levels of human intelligence, because human understanding is non-computational and therefore exceeds the capabilities of machines. He also argues that human consciousness can only be explained by taking quantum effects in our brains into account. It is widely accepted that consciousness is in some way correlated with the behavior of the material brain. Since quantum theory is the most fundamental theory of matter currently available, it is legitimate to ask whether it can help us understand consciousness. There can be no reasonable doubt that quantum events occur and are efficacious in the brain, as elsewhere in the material world, including biological systems. But scientists disagree about whether these events are relevant for those aspects of brain activity that correlate with mental activity, intelligence and consciousness.
How to proceed?
To start, we need to place a much higher research priority on understanding how the human mind works, particularly how our own wisdom and compassion can be cultivated and how we reach a mature level of consciousness. Secondly, by applying quantum mechanics, which deals with the behavior of nature at atomic and subatomic levels, we may be able to unlock some clues, possibly realigning the role of the human brain in providing intelligence and consciousness. Quantum theory has solved many questions of classical physics and has opened the door to new and interesting, previously unthinkable applications. One of the crucial questions concerns the possibility that components of the nervous system show macroscopic quantum behaviors such as ‘quantum entanglement’. Led by Professor Matthew Fisher, a world-renowned expert in the field of quantum mechanics, an international collaboration of researchers will investigate the brain’s potential for ‘quantum computation’. If the question of whether quantum processes take place in the brain is answered in the affirmative, it could revolutionize our understanding of brain function and human cognition. It would also mark the beginning of a new era in assessing the potential contribution of humans, compared with that of AI, in problem-solving tasks.