What happens when Artificial equals Human Intelligence?

Posted by Peter Rudin on 3. November 2017 in Essay

Age of Enlightenment. Picture Credit: Wikimedia Commons


AI continues to enjoy extensive media coverage as progress in machine learning contributes heavily to the development and improvement of applications such as image recognition, language translation, health analysis, emotion sensing, autonomous driving, or beating the world’s best Go player, to name just a few. Initiated in 1956 at a conference at Dartmouth College, AI has seen many setbacks, largely due to overoptimistic expectations relative to problem solving. For example, large government funding to develop so-called ‘expert systems’ came to a sudden halt when potential users realized that the complexity of managing these systems far outstripped their usefulness. The software concept of neural networks, with its ability to learn from processing huge amounts of data, coupled with an unprecedented increase in hardware performance at low cost, is finally setting the stage for AI to reach human-level intelligence. Today specialized algorithms are applied to solve individual problems such as speech recognition, a technology also defined by the term ‘narrow AI’. Tomorrow we will see the application of so-called ‘Artificial General Intelligence (AGI)’, which will produce tools and algorithms to apply artificial intelligence across all levels of human activity. Very much driven by ongoing progress in neuroscience to crack the neural code of intelligence, most scientists seem to agree that this moment of intelligence-equality, also referred to as ‘Singularity’, will happen within the next 15 to 65 years.

What is the fate of humanity after this milestone is reached? Before we try to answer this question, we should keep in mind that intelligence is only part of what human existence is all about. There is no doubt that our intellectual capacity has been, and is likely to remain, the major factor behind our quest for economic growth and wellbeing. Yet besides intelligence, our physical and emotional capacities are an integral part of human existence as well. Following Maslow’s motivational theory defining the hierarchy of needs, our most basic need is for physical survival. Once that level is fulfilled, we are likely to be motivated to reach the next level, eventually striving for self-actualization and self-fulfillment.


The fulfillment of these needs is likely to remain a major incentive on the path to Singularity, as long as humans can decide on the course of action to get there.

Three scenarios describing the consequences of Singularity

Today we do not really know what will happen when Singularity is achieved. The scenarios presented here are thought models intended to serve as an inspiration for further discussion.

  • Scenario 1: As machines continuously learn from humans and previously generated knowledge, they will eventually create their own identity, far surpassing the intellectual capacity of humans. Triggered by some kind of knowledge explosion, they might seek independence from humans. This could lead to the extinction of the human species, as humans are no longer necessary. Analogous to aliens, these intelligent machines, also envisioned as humanoid robots, will create their own value system. They might decide to leave Earth and colonize space.
  • Scenario 2: Humans decide to merge with intelligent machines, most likely via a direct brain-computer interface (BCI). Due to the high-speed access to knowledge and intelligence provided by intelligence-service providers, this combination will significantly increase the intellectual capacity of humans. This scenario also includes the possibility of mind-uploading in the quest for eternal life as envisioned by the transhumanist movement.
  • Scenario 3: Creating a new Singularity-Ecosystem that enhances human-machine partnership, fostering the strengths of each without the tight interconnection described in Scenario 2. This scenario further stipulates that decision-making is clearly the responsibility of humans. However, the option remains open to delegate decision-making to intelligent machines under well-defined rules.



Looking at all scenarios presented, none is impossible to achieve from a technological point of view. While Scenario 1 evokes fantasies in the realm of science fiction, it still deserves some serious consideration. Can an intelligent machine develop ‘free will’? This philosophical question relates to human behavior and is debatable. According to Wikipedia, free will is the ability to choose between different possible courses of action unimpeded. Some conceive free will to be the capacity to make choices in which the outcome has not been determined by past events. As intelligent machines learn from data, it seems possible that machines can develop some kind of free will, provided they are given the opportunity to do so.

Can an intelligent machine develop ‘consciousness’ to understand the consequences of its actions? Philosophers have been debating the mystery of human consciousness for centuries; there is no quick answer. Modern neuroscience has picked up this question as well, for example with a fundamental theory of consciousness called the Integrated Information Theory (IIT). Developed by Christof Koch, chief scientific officer of the Allen Institute for Brain Science in Seattle, and Giulio Tononi, neuroscientist and professor at the University of Wisconsin, IIT offers hope for a principled answer to the question of consciousness in entities vastly different from us, including machines. Whether a new species of robots would adopt the value structure of Maslow’s motivational theory is an open question.

Scenario 2 received wide media attention when, in an interview with CNBC in February this year, visionary and entrepreneur Elon Musk stated that humans must merge with intelligent machines or become irrelevant in the age of artificial intelligence. In an age when AI threatens to become widespread, humans would be useless, so there is a need to merge with intelligent machines, according to Musk. He explained that computers can communicate at “a trillion bits per second”, while humans, whose main communication method is typing with their fingers, can do about 10 bits per second on a mobile device. “Some high bandwidth interface to the brain will be something that helps achieve a symbiosis between human and machine intelligence and maybe solves the control and the usefulness problem,” Musk explained. Some futurists, like Ray Kurzweil, Director of Engineering at Google, have suggested that instead of implanting electrodes by opening up the skull, one could administer nanobots into the bloodstream, which travel to the brain regions engaged in intelligence and wirelessly communicate with the outside world and its intelligent service-providers. To ‘invade’ the brain, with its 80 billion neurons and trillions of synapses, in order to massively increase our IQ seems a rather risky venture. Having a high IQ has its advantages, for example getting a better job and/or a higher salary. This may be true, but there are also potential disadvantages to being highly intelligent, ranging from loneliness to the potential for depression and a lack of emotional experiences. Consequently, it seems unlikely that this scenario would support our quest for self-actualization or the fulfillment of psychological needs as depicted by Maslow’s pyramid of needs.

Scenario 3, with the concept of a Singularity-Ecosystem, defines an evolutionary path towards equality in machine-human intelligence, providing time to resolve a vast pool of unresolved social and political issues. With respect to economics, it promotes the possibility of applying the fundamentals of Neuro (Behavioral) Economics as well as Digital Economics, thereby reducing the risk of business disruptions and failures. The awarding of the Nobel Prize in Economics to Richard Thaler, professor of economics at the University of Chicago, just one month ago underscored once again the importance of a human-centric economy, following the year 2002 when the psychologist Daniel Kahneman (author of ‘Thinking, Fast and Slow’) was awarded the Nobel Prize for similar reasons. ‘Ethics’ represents the core of the Singularity-Ecosystem. It stipulates social values which are also inherent in Maslow’s hierarchy of needs. Establishing a positive working relationship with intelligent machines to innovate new products and services can be a key motivator in our drive to reach self-actualization.


Humans have outsourced tasks to machines before. Our limited physical capability to handle heavy objects has led to sophisticated building machines in construction, transportation machines facilitate logistics and travel, and manufacturing robots reduce production costs. Outsourcing intelligence to a service provider like Google or Facebook raises the question of trust. To deal with the widespread fear that the potential of AI can be misused by governments to wage war, or the concern that market-controlling institutions misuse their data-analysis power, we have to widen our scope. With the Age of Enlightenment, also referred to as the Age of Reason, which began with Descartes’ famous statement ‘I think, therefore I am’, the scientific revolution gained momentum at an accelerating pace, still in progress today. The ideals of this movement had a profound impact on the French Revolution of 1789, symbolized by the motto “Liberty, Equality, Fraternity”. It also gave birth to the Declaration of the Rights of Man and of the Citizen, a document combining content of the American Declaration of Independence with content from the Bill of Rights. The Declaration was a core statement of the values of the French Revolution and subsequently had a major impact on the development of freedom and democracy in Europe and other regions of the world. Moving from the Age of Enlightenment to an age of Singularity in an evolutionary way implies that we have to protect and live up to these values. Otherwise Scenario 1 might just trigger the loss of control over our own destiny.
