Picture Credit: Wikipedia
Introduction
‘Superintelligence: Paths, Dangers, Strategies’ is a 2014 bestselling book by the Swedish philosopher Nick Bostrom of the University of Oxford. He argues that if machine brains surpass human brains in general intelligence, this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists could follow, and the outcome could be an existential catastrophe for humanity.
In a series of interviews with 95 researchers conducted by Nick Bostrom’s team and partners, respondents gave a median estimate of a 90% probability that superintelligence would arrive by 2075 and a 50% probability that it would arrive by 2040.
The underlying assumption of this view is that superintelligence arrives as a continuum across all facets of life and all technological infrastructures more or less at the same time. Following evolutionary theory, intelligent machines will create ever more intelligent machines at a pace humans can no longer control. Humanity faces the danger of becoming the slave of a system that generates its own values and will.
Economic reality suggests that innovations and their successful entry into the market as products or services do not follow a continuous path. For every start-up that succeeds, at least five fail because the market does not accept the innovation, the venture is underfinanced, or the management team is not up to the task of handling rapid growth.
Existing variations of AI are ‘narrow-minded’ and can only do what programmers tell them to do, either explicitly or through machine learning. That is why this kind of AI is also referred to as artificial narrow intelligence (ANI). Artificial general intelligence (AGI) is the concept of an AI system with human-level intelligence and cognitive abilities that can perform a broad range of tasks and apply its knowledge to solve unfamiliar problems without being specifically trained on them. The step towards artificial superintelligence implies AGI systems that perform better than humans. The path from artificial narrow intelligence to superintelligence will be specific and problem-oriented, not universal, and it will follow rules of economics governed by humans. It is up to humans to decide how superintelligence is applied. So far only science fiction has created scenarios that reflect Nick Bostrom’s concern.
Superintelligence has arrived
On January 20th 2017, Google’s DeepMind division submitted a paper entitled “PathNet: Evolution Channels Gradient Descent in Super Neural Networks”, and three days later another paper appeared, entitled “Learning to Reinforcement Learn”, submitted by DeepMind researchers in collaboration with University College London (UCL).
These papers spotlight the latest trend in deep learning research: the desire to merge modular deep learning, meta-learning and reinforcement learning into a single solution that leads to even more capable deep learning systems. What makes this research special is that it represents DeepMind’s effort to become the first company to build an artificial general intelligence (AGI) solution. Perhaps even more significant is the fact that many experts believe that once AGI is achieved, superintelligence won’t be far behind.
DeepMind’s experiments show that a neural network trained on a second, follow-up task learns faster than a network trained from scratch. The researchers call this “transfer learning”: previous knowledge is reused in new ways. Furthermore, as these learning systems improve and become capable of doing new things, they will likely need less computing power, which means the whole cycle will accelerate as they draw on the skills of thousands, or maybe even millions, of “sub” neural networks at a time.
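To make the idea concrete, here is a minimal transfer-learning sketch in PyTorch. It illustrates the general technique described above, not DeepMind’s PathNet: the layer sizes, the stand-in data, and the frozen-feature approach are assumptions chosen for brevity.

```python
# Minimal transfer-learning sketch (illustrative; not DeepMind's PathNet).
# A network trained on a first task is reused on a follow-up task: the
# feature layers are frozen and only a fresh output head is trained.
import torch
import torch.nn as nn

# Feature extractor assumed to be pre-trained on the first task.
features = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
)

# Freeze the reused layers so previous knowledge stays intact.
for p in features.parameters():
    p.requires_grad = False

# New head for the follow-up task; only these weights are learned.
head = nn.Linear(64, 10)
model = nn.Sequential(features, head)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on random stand-in data.
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

Because only the small new head has to be learned, training on the follow-up task converges faster than training the whole network from scratch, which is the effect the DeepMind experiments describe.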
Following these January 2017 research publications, Google CEO Sundar Pichai introduced a project called AutoML a few months later, stating that Google researchers had shown that their learning algorithms can automate one of the trickiest parts of designing machine-learning software. In some cases, the automated system came up with designs that rival or beat the best work of human machine-learning experts. Machine-learning experts are in short supply as companies in many industries rush to take advantage of recent advances in artificial intelligence. Google’s CEO says one solution to the skills shortage is to have machine-learning software take over some of the work of creating machine-learning software.
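The article does not describe AutoML’s internals, so as a rough illustration of the idea of software designing machine-learning software, the toy sketch below performs a random search over network architectures with scikit-learn. The dataset, the candidate sizes, and the search budget are all assumptions; Google’s actual system used a far more sophisticated reinforcement-learning-based search.

```python
# Toy illustration of the AutoML idea: a program, not a human expert,
# searching over machine-learning designs. This is simple random search
# over network shapes, not Google's actual system.
import random
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

best_score, best_design = 0.0, None
for _ in range(10):  # ten randomly chosen candidate architectures
    design = tuple(random.choice([32, 64, 128])
                   for _ in range(random.randint(1, 3)))
    model = MLPClassifier(hidden_layer_sizes=design, max_iter=300)
    score = cross_val_score(model, X, y, cv=3).mean()
    if score > best_score:
        best_score, best_design = score, design

print(f"best architecture {best_design}: accuracy {best_score:.3f}")
```

Even this naive loop removes a human from one design decision; replacing the random choice with a learned search policy is the step that made Google’s results competitive with expert-designed networks.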
Superintelligence has arrived as an answer to the shortage of human experts for generating machine-learning software. While philosophical debates about the future of humanity and the impact of AI continue, Google has come one step closer to realizing its strategy. The result is competitive strength and profits in the years to come.
Another demonstration of superintelligence took place on May 23, 2017, when an improved version of Google DeepMind’s software AlphaGo narrowly beat the world’s best player, 19-year-old Ke Jie, in the ancient Chinese board game of Go. To the developers of AlphaGo this reaffirmed the arrival of what they consider a groundbreaking new form of artificial intelligence.
AlphaGo stunned the Go community and futurists a year earlier when it beat South Korean grandmaster Lee Sedol four games to one. That marked the first time a computer program had beaten a top player in a full match and was consequently hailed as a landmark for artificial intelligence.
After his defeat, a visibly flummoxed Ke Jie, who a year earlier had declared he would never lose to an AI opponent, said AlphaGo had become too strong for humans, despite the razor-thin half-point winning margin. Ke vowed never again to subject himself to the “horrible experience” of losing to a machine.
For some, superintelligence conjures sci-fi images of a “Terminator” future in which machines “wake up” and enslave humanity. But DeepMind founder Demis Hassabis dismisses such concerns:
“This isn’t about man competing with machines, but rather using them as tools to explore and discover new knowledge together. Ultimately, it doesn’t matter whether AlphaGo wins or loses … either way, humanity wins.”
The Human Factor
The drive towards superintelligence cannot be stopped. In specific applications such as automatic software generation or gaming, super-intelligent self-learning machines already outperform humans today. Most likely, superintelligence will continue to be reached in specific areas of our daily lives, rather than as an evolutionary continuum that threatens human existence the way a pandemic virus might.
Intelligence is not a single dimension, so “smarter than humans” as an overall attribute is meaningless. There are other aspects of the human mind besides intelligence that might be relevant to the concept of superintelligence:
- Consciousness: the capacity for subjective experience and thought.
- Self-awareness: being aware of oneself as a separate individual, especially of one’s own thoughts.
- Sentience: the ability to “feel” perceptions or emotions subjectively.
- Sapience: the capacity for wisdom.
Developing these features requires interaction with other human beings and the experience of real-life situations. Interactive expert sessions with specialized bots (text- or voice-driven interaction software), enhanced with machine learning, might help to support the expansion of one’s consciousness, provided that rules of ethics and privacy are adhered to. However, there is no motive to extend this level of AI to superintelligence unless we want to create a new species of robots and cyborgs.
Military organizations are working on the deployment of super-intelligent robots for applications such as reconnaissance in enemy territory or the use of unmanned combat vehicles and airplanes to conduct warfare. Applying technology to military missions has a long history. Superintelligence makes warfare more efficient and delivers potential advantages to those who are better equipped, but so far humans go to war and machines are just tools offering a better chance of victory.
One advantage humans have is the ability to correlate diverse topics and to apply creative thinking to solve new problems, using a brain that weighs about 1.5 kg and consumes 20 watts of energy. Machine-learning applications, mimicking parts of the human brain, employ thousands of silicon-based processors mounted in racks weighing hundreds of kilos and consuming energy in the kilowatt range. Analogous to the positive consequences of biodiversity, human diversity, a resource acquired over thousands or millions of years, provides advantages that superintelligence alone cannot match. Human motivation and free will are what differentiate us from intelligent machines. The concern that superintelligence will create machines with their own free will, possibly acting against the interests of humanity, describes a suicidal vision that would be carried out on behalf of humans.
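The energy comparison can be checked with back-of-envelope arithmetic; the cluster figure below is an assumed round number for illustration, not a measured value.

```python
# Back-of-envelope comparison of the energy budgets mentioned above.
brain_watts = 20        # human brain, as stated in the text
rack_watts = 10_000     # assumed: a machine-learning cluster in the 10 kW range

ratio = rack_watts / brain_watts
print(f"The assumed cluster draws {ratio:.0f}x the power of a human brain.")
# -> roughly 500x under these assumptions
```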
So what comes next?
One of the key issues posed by superintelligence involves the generation of knowledge with complex relationships humans can no longer comprehend. So far, brain-based thinking and human communication, possibly enhanced with analytical support from existing computational models, have led to today’s knowledge society. Knowledge generated by super-intelligent systems, utilizing steadily improving algorithms and collecting more and more data about nature and humans, will surpass the knowledge that humans have so far generated with their own intellect.
Dealing with this new ‘extended knowledge’ generated by super-intelligent machines presents new risks and opportunities in our value-based economy. To apply superintelligence in a positive way, one of humanity’s main tasks will be to ask creative questions. To learn how to do this and take advantage of extended knowledge, our educational system needs to be revamped, fostering human diversity instead of building knowledge silos. Let machines generate new knowledge while humans think about its implications, asking creative questions to solve problems.
Interesting article, congratulations.
Obviously, the two main characteristics of any superintelligence’s plan would be:
1) For it (the plan) to be virtually undetectable, and
2) For it to be unstoppable, even if, as it came to fruition, a few people detected it moments before its outcome.
It’s like wild game unconsciously running off EXACTLY in the direction the hunter wants – straight into the trap – and when a few brighter ones see the net and realize they are going to get caught, it’s too late to change course.
And here we are, rushing as fast as we can to create an intelligence that is superior to ours. And we do not want to stop it. And even if we did, we could not stop it. It’s too late to change course.
Ring a bell? Didn’t think so. But the phenomenally intelligent Dr. Isaac Asimov figured it out in his short story “Darwinian Pool Room” back in the 1950s (which I read back in the 1970s).
Understand? You don’t see it, but even if you did, it’s unstoppable. THAT’S what superintelligence is and does. So much for…ah…“free will”…animals and their illusions…lol.