Introduction
As artificial intelligence (AI) gains momentum at an exponential pace, we are heading towards a point where two scenarios seem possible: one is the science-fiction scenario in which humans are eventually dominated by super-intelligent machines; the other is the augmented-intelligence scenario in which humans succeed in employing intelligent machines to their own advantage, reaching a new level of humanity. To comprehend this decision point, it helps to review the trends in AI from its inception to the present day. The conclusion will be that the transformation of AI into augmented intelligence is a viable path for the future of humanity, symbolized by the Singularity-Ecosystem triangle.
Historic Overview
The field of AI research was founded at a conference at Dartmouth College in 1956. The attendees, reputable researchers such as John McCarthy, Marvin Minsky, Allen Newell, Arthur Samuel and Herbert Simon, became the leaders of AI research. By the mid-1960s, research in the U.S. was heavily funded by the Department of Defense. Herbert Simon predicted that “machines will be capable, within twenty years, of doing any work a man can do”. Simon and others failed to recognize the difficulty of some of the tasks ahead. Progress slowed, and in 1974, in response to ongoing pressure from the U.S. Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of human experts. But, again due to technical implementation problems, government funding for AI research collapsed, leading to what is described as the second AI winter.
About a decade ago, neural-network software designed to mimic the human brain and to solve complex data-analysis problems started the revival of AI. Thanks to an enormous increase in low-cost computing power, machine-learning algorithms could be successfully applied to applications such as image and speech recognition, language translation, navigation and emotion sensing. Millions of ‘labeled’ pictures of cats and dogs, analyzed at the pixel level, have been used by Google researchers to develop algorithms that can judge whether a picture shows a cat or a dog. Machine-learning experts are a scarce and highly paid breed, very much in demand at companies like Google or Facebook. The growing availability of software tools that support the design of machine-learning applications is therefore easing a serious bottleneck in the implementation of AI. As the processing speed of computer hardware continues to rise, we can expect further growth in the availability of machine-based algorithms to solve specific problems. Extracting knowledge from huge data libraries will provide new insights into the many problems humanity still needs to resolve.
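To make the idea of learning from labeled examples concrete, here is a minimal sketch using the scikit-learn library. The random pixel arrays merely stand in for real cat and dog photos, and the whole setup is an illustrative assumption rather than Google’s actual pipeline, which relies on deep neural networks trained on millions of images.

```python
# Toy illustration of supervised learning from labeled images.
# Real systems use millions of photos and deep neural networks; here
# random pixel arrays stand in for cat/dog images so the example runs
# without any external data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 200 fake "images" of 32x32 grayscale pixels, flattened to vectors,
# with labels 0 = cat, 1 = dog (assigned at random for this sketch).
pixels = rng.random((200, 32 * 32))
labels = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    pixels, labels, test_size=0.25, random_state=0
)

# Fit a simple linear classifier on the labeled training pixels.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Predict labels for unseen images and report accuracy
# (chance level here, since the data is random).
print("accuracy:", model.score(X_test, y_test))
```

The mechanics are the same in real systems: labeled examples in, a fitted model out, predictions on new data; what changes is the scale of the data and the complexity of the model.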
Parallel to this ‘mathematics-driven’ AI movement, enormous, largely government-funded efforts are under way in neuroscience research to crack the neural code and to understand how our brain works. Weighing about 1.4 kg and consuming roughly 20 watts, the human brain is extremely efficient. In contrast, machine-based neural networks require kilowatts of energy and rack space weighing hundreds of kilos to simulate just parts of a mouse brain. For most people, understanding the world in four dimensions requires a lot of imagination. Yet a new study has discovered structures in the brain that can be described mathematically in up to eleven dimensions – ground-breaking work that is beginning to reveal the brain’s deepest architectural secrets. “We found a world that we had never imagined,” says neuroscientist Henry Markram, director of the Blue Brain Project and professor at EPFL in Lausanne, Switzerland. Markram suggests this may explain why it has been so hard to understand the brain. “The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly.”
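To give a feel for what such ‘high-dimensional structures’ are, the toy sketch below counts cliques, groups of nodes that are all connected to one another, in a small random graph using the networkx library. In the topological language of the Blue Brain study, a clique of k neurons corresponds to a (k-1)-dimensional simplex; the random graph here is an illustrative assumption, not connectome data.

```python
# Toy illustration: cliques in a graph are the building blocks ("simplices")
# that topological analyses of brain networks count. A clique of k nodes,
# all connected to each other, corresponds to a (k-1)-dimensional simplex.
import networkx as nx
from collections import Counter

# Random graph standing in for a small neural wiring diagram.
G = nx.erdos_renyi_graph(n=30, p=0.3, seed=42)

# Enumerate all cliques and tally them by size (hence by dimension).
sizes = Counter(len(c) for c in nx.enumerate_all_cliques(G))
for size in sorted(sizes):
    print(f"{sizes[size]:5d} cliques of {size} nodes -> dimension {size - 1}")
```

Ordinary graph statistics such as node degree miss these nested group structures, which is the point Markram makes about conventional network mathematics.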
Future Trends
While machine-learning applications provide tools to solve specific problems, also referred to as ‘Narrow AI’, there is a trend towards building systems with comprehensive human-like intelligence, defined by the term ‘General AI’. Neuromorphic computer architectures, which model the functioning of human brain cells and synaptic connections, are currently being tested by IBM and others at the prototype level. One of the key advantages of these systems is the massively reduced power requirement compared to conventional systems, as energy is consumed only when spikes occur, similar to the way the brain processes information. A key feature of conventional computers is the physical separation of memory storage from logic processing. The brain holds no such distinction: computation and data storage are accomplished together locally in a vast network consisting of roughly 100 billion neural cells (neurons) and more than 100 trillion connections (synapses). The system furthest away from conventional computing and closest to the biological brain is BrainScaleS, which Karlheinz Meier, professor of experimental physics at Heidelberg University in Germany, and his colleagues have developed for the Human Brain Project. It neither executes a sequence of instructions nor is it constructed as a system of physically separated computing and memory units. It is rather a direct, silicon-based image of the neuronal networks found in nature, realizing cells and inter-cell communication by means of modern analogue and digital microelectronics. Currently BrainScaleS consists of 20 neuromorphic wafers, each of which holds about 200,000 neurons and 44 million synapses. There is hope that neuromorphic systems could help to solve the mystery of human assets such as consciousness, emotions and intuition.
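As a rough software analogy to the spiking behaviour such chips implement directly in silicon, the following sketch simulates a single leaky integrate-and-fire neuron in plain Python; all parameter values are illustrative assumptions, not BrainScaleS specifications.

```python
# Minimal leaky integrate-and-fire neuron: the membrane voltage leaks
# toward rest, integrates incoming current, and emits a spike when it
# crosses a threshold. In neuromorphic hardware, energy is spent mainly
# on these sparse spike events rather than on a continuous clock.
import numpy as np

dt = 1e-3          # simulation time step (s)
tau = 20e-3        # membrane time constant (s)
v_rest = 0.0       # resting potential (arbitrary units)
v_thresh = 1.0     # spike threshold
v_reset = 0.0      # potential after a spike

v = v_rest
spike_times = []
rng = np.random.default_rng(1)
input_current = rng.random(1000) * 0.12   # random input drive per step

for step, i_in in enumerate(input_current):
    # Leak toward the resting potential and integrate the input current.
    v += dt / tau * (v_rest - v) + i_in
    if v >= v_thresh:
        spike_times.append(step * dt)   # record spike time in seconds
        v = v_reset                     # reset after firing

print(f"{len(spike_times)} spikes in {len(input_current) * dt:.1f} s of simulated time")
```

The point of the sketch is that activity, and hence energy cost, is concentrated in occasional spike events rather than in a continuous stream of instructions, which is what gives neuromorphic systems their efficiency advantage.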
One potential way to augment and enhance cognitive intelligence is to employ Brain-Computer Interfaces (BCIs), providing a direct link to cloud-based data and bypassing the relatively slow sensory interfaces of speech, reading and writing. Tesla CEO Elon Musk and his newly formed company Neuralink propose inserting a ‘neural lace’ between the skull and the brain to establish a high-speed, bi-directional connection between the brain and the internet cloud. This cyborg concept of a human being is, according to Musk, the only way humanity can survive without being dominated by intelligent machines. However, his BCI concept violates basic human rights and opens the door to science-fiction scenarios. There are other efforts under way to enhance our intelligence through non-invasive methods applied to the brain. Transcranial direct current stimulation (tDCS) is a form of neuro-stimulation that delivers a constant, low current to the brain area of interest via electrodes placed on the scalp. Facebook is building what it calls a “brain-computer speech-to-text interface”, a technology that is supposed to translate your thoughts directly from your brain to a computer screen without any need for speech or typing. The idea is that this technology will be able to take what you are thinking to yourself in silence, using non-invasive sensors that can read what you intend to say, and turn it into readable text.
Conclusion
As human individuals we are challenged to adapt to a constantly changing, technology-driven world. How to prepare for this raises many educational issues. Our current education system needs to be revamped as cognitive intelligence becomes a commodity. Raising our awareness of ourselves as humans, and differentiating those values from the economic contribution of intelligent machines, represents the major challenge we face in optimally balancing human and machine resources. Motivation to achieve a goal, coupled with creativity and innovation, represents the human qualities that need support. Intelligent machines are a tool to advance humanity to a new era of existence, along with productivity gains enhancing our economic welfare.
The debate about ethical standards and how to maintain them in the age of singularity is under way. Data protection and privacy laws, or the call for algorithmic transparency, are examples of the human concerns being addressed. There is a widespread concern that basic human rights are being violated as AI technology intrudes into our daily lives and gains control over our own decision making. Trying to influence our buying decisions or how we vote on a political issue is nothing new; AI coupled with information about our personality and behavior makes this influence far more effective, to the point where we wonder whether a decision we make is really our own. Another major concern is that governments lack the expertise and the agility to keep up with the ‘protection’ of humanity and our cultural heritage and identity. Cyberattacks, or the totalitarian, non-democratic application of AI as it could emerge in China, are likely to pose a much bigger threat to Western civilization than the science-fiction scenarios that question human survival in the age of singularity.
There is a widespread concern that singularity will result in massive job losses across many industry and business segments. Back in the 1920s, the mechanization of farming with tractors and harvesting machines significantly increased farm productivity, causing an oversupply of products and massive farmworker layoffs. In 1910 about 33% of the U.S. workforce of roughly 38 million was engaged in farm work. By 1950 only 10% of Americans worked on farms, and by 2010 farmers accounted for just 2% of the U.S. workforce. The productivity gains due to mechanization were so significant that heavy government intervention was needed to avoid the collapse of the farming industry. Many of the farmworkers, after being retrained, found work in the rapidly growing car-manufacturing industry. By analogy with the disruption to farming in the 1920s, new socio-economic models might be required, with features such as a ‘guaranteed income’ or a ‘reeducation bonus’, to provide support during the transition towards singularity. Once the transition from AI to augmented intelligence is accomplished, our creativity will define new jobs and services that currently do not exist.