Book cover of the first edition, Oxford University Press, 2014
In his 2014 bestselling book ‘Superintelligence: Paths, Dangers, Strategies’, the Swedish philosopher Nick Bostrom of the University of Oxford argued that if machine intelligence surpasses the general intelligence of humans, this new superintelligence could replace humanity as the dominant life form on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists could follow, and the outcome could be an existential catastrophe for humans. In a series of interviews with 95 researchers conducted by Nick Bostrom’s team and partners, 90% of respondents expected superintelligence to arrive by 2075, while 50% thought it would happen by 2040. Following evolutionary theory, intelligent machines will create ever more intelligent machines at a pace humans can no longer control. Hence, humanity faces the danger of becoming the slave of a system that generates its own rules and values.
Looking back, one could ask whether superintelligence as described in Bostrom’s book is science fiction, even though at the time the book was not regarded as part of that genre. Looking forward, the question looms whether today’s Artificial Intelligence (AI) is on the verge of becoming science fiction.
What is Science Fiction?
According to Wikipedia, science fiction is a genre of speculative fiction that typically deals with imaginative and futuristic concepts of advanced science and technology, such as space exploration, parallel universes, sentient AI and certain forms of immortality like mind uploading. Science fiction literature has anticipated several existing inventions, such as the atomic bomb or robots. Hence, science fiction has also been called the ‘literature of ideas’, exploring the potential consequences of scientific, social and technological innovations. Science fiction novels should not be completely unbelievable; otherwise they might be considered a product of fantasy. Science fiction usually explores what consequences new discoveries, events and scientific developments will have for humanity. As a result, science fiction novels often depict a different world or a different universe to which individuals will be exposed in the near future.

Written between 1895 and 1897, H.G. Wells’ ‘The War of the Worlds’ is one of the earliest stories to detail a conflict between mankind and an extraterrestrial race. It was memorably dramatized in a 1938 radio programme directed by Orson Welles that allegedly caused panic among listeners who did not know the reported events were fictional. The novel also influenced the work of scientist Robert H. Goddard, who – inspired by the book – helped develop the liquid-fuelled rocket, a technology used in the Apollo 11 moon landing 71 years after the novel’s publication. Most remarkable are the contributions of Huxley and Orwell: ‘Brave New World’ appeared in 1932 and ‘1984’ in 1949. Orwell’s ‘1984’ in particular lends support to Nick Bostrom’s theory of superintelligence and its potentially devastating impact on humanity. The contribution of science fiction writer and retired professor Vernor Vinge adds another apocalyptic milestone.
In his 1993 paper ‘The Coming Technological Singularity’, he forecasts that within thirty years we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era as we know it will come to an end.
Today’s Reality based on Yesterday’s Science Fiction
George Orwell and Aldous Huxley were two visionary science fiction authors who met at a young age at the prestigious Eton College in the UK. Their masterpieces ‘Brave New World’ and ‘1984’, both published more than 70 years ago, are – from today’s point of view – highly realistic and not fictitious. Data storage, fake news, designer babies, the massive use of antidepressants: today’s reality has almost caught up with yesterday’s science fiction. The similarities are so striking that ‘1984’ returned to the US bestseller lists following the election of Donald Trump as President. Both Huxley and Orwell proved to be brilliant visionaries of the future, although they came from two completely different worlds: Huxley from a British intellectual dynasty, Orwell from poor circumstances. Orwell read ‘Brave New World’ shortly after its publication and sent his novel ‘1984’ to Huxley immediately after it appeared, stating that he considered ‘Brave New World’ to be of fundamental importance. He argued, however, that the future could not be reduced to a policy of sheer violence. Huxley countered that his vision of the future was a ‘perfect’ dictatorship based on scientific methods, in which individuals are programmed to serve and even love their slavery. Orwell, on the other hand, considered totalitarian control, the deliberate use of lies and permanent surveillance to be the driving forces that would destroy any democratic principles and respect for humanity.
Today’s Status of Superintelligence
Superintelligence, understood as machines outpacing human brainpower, is in high demand. In some specific applications, such as automatic software generation or gaming, super-intelligent, self-learning machines already outperform humans today. Most likely, superintelligence will continue to be reached in specific areas of our daily lives and will typically be experienced as an evolutionary development. In contrast, Nick Bostrom views superintelligence as a threat to human existence, similar to a pandemic virus. However, there are other aspects of the human mind besides intelligence that might be relevant to the concept of superintelligence as well: consciousness for subjective experience and thought, self-awareness of oneself as a unique individual, and sentience as the ability to ‘feel’ perceptions or emotions subjectively. Developing these features requires interaction with other human beings and the experience of real-life situations. Interactive expert sessions with text- or voice-driven bots, supported by and enhanced with machine learning, might help to expand one’s consciousness, provided rules of ethics and privacy are adhered to. As long as governmental regulations are in place, only a movement similar to the French Revolution could make room for Nick Bostrom’s vision. Yet, analogous to the positive consequences of biodiversity, human diversity – a resource acquired over thousands or millions of years – provides advantages that superintelligence alone cannot match. Considering the exponential path of AI development, one is indeed challenged to discuss the displacement of brainpower by machines. From this perspective, Nick Bostrom’s theory of superintelligence might see a revival that needs to be taken seriously.
Is AI turning into Science Fiction?
In an article recently published on Medium, Cassie Kozyrkov, Chief Decision Scientist at Google, makes the point that despite all the excitement about ChatGPT, language is a slippery eel: there is no law against people using the same word to mean different things in different contexts. As long as the meaning of words can be interpreted in different ways, there is no easy way to map reality. The term ‘AI’ is used to refer to a specific way of turning data into computer code. When engineers automate a task using patterns in data, rather than looking up the answer directly, they are probably using machine learning – often with artificial neural networks – to solve their problem. If a very complex problem is solved with a particular type of machine learning algorithm, it is customary to call it deep learning. The engineers doing serious work to develop these AI applications are fully aware that they are not creating anything that would correspond to the visions of science fiction idealists or professors of neuroscience. Will AI fully enter the realm of science fiction and begin to change everything? One might answer ‘no’ simply by recalling any reasonable definition of science fiction presented above, and one might also answer ‘no’ because applied AI has little to do with science fiction. So far, only science fiction has created scenarios that reflect Nick Bostrom’s concern.
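The distinction Kozyrkov draws – recovering an answer from patterns in data rather than looking it up – can be made concrete with a toy sketch. The example below is purely illustrative (the hidden rule y = 2x + 1 and all parameter values are invented for demonstration): a two-parameter model recovers the rule from examples alone, which is the everyday, unglamorous sense in which engineers use the word ‘learning’:

```python
# Toy illustration of 'automating a task using patterns in data':
# instead of hard-coding the rule y = 2x + 1, the machine recovers
# it from examples via repeated small corrections (gradient descent).
examples = [(x, 2 * x + 1) for x in range(10)]  # training data

w, b = 0.0, 0.0        # model parameters, initially unknown
lr = 0.01              # learning rate: size of each correction
for _ in range(5000):  # many passes over the data
    for x, y in examples:
        err = (w * x + b) - y  # how wrong is the current guess?
        w -= lr * err * x      # nudge parameters toward the data
        b -= lr * err

print(round(w, 2), round(b, 2))  # close to the hidden rule: 2.0 and 1.0
```

Nothing here resembles science fiction: the program neither knows nor understands the rule; it merely adjusts two numbers until its predictions match the examples it was given.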
What can be observed, however, is that the interaction between science and science fiction has, at least so far, not produced the apocalyptic scenarios predicted by science fiction. In fact, this interaction seems to benefit scientific and technological progress. Yet we may be approaching a point of no return, and it may only be a question of time until we experience the runaway effects of superintelligence predicted by Nick Bostrom. Sometime between 2040 and 2075 we will know whether this critical assessment prevails.
Knowledge generated by super-intelligent systems, utilizing steadily improving algorithms and collecting ever more data about nature and humans, will surpass the knowledge that humans have so far generated with their own intellect. The interaction between science fiction and reality has advanced human knowledge in positive as well as negative ways. The one factor that may differentiate science fiction from reality is the human capacity for creativity. As long as human thought remains the source of new knowledge represented and mapped by AI machines, our capacity to maintain control over our future remains intact, provided governments apply the necessary regulatory and ethical controls. From this point of view, superintelligence can be used in a positive way: let machines generate new knowledge while humans ask the creative questions needed to solve the many problems with which mankind is confronted.