Superintelligence
Introduction
On March 22, 2023, based on experiments with an early version of GPT-4, Microsoft researchers reported the results of their investigation, claiming that the model exhibited more general intelligence than previous AI transformer models. Given the breadth and depth of GPT-4's capabilities, displaying close-to-human performance on a variety of novel and difficult tasks, the researchers concluded that it could reasonably be viewed as an early, yet still incomplete, version of Artificial General Intelligence (AGI). However, they also pointed out that many regulatory and ethical issues need to be resolved to take advantage of this technology and to avoid serious socio-economic problems.
From GPT-4 to Artificial Superintelligence
According to Wikipedia, artificial superintelligence is defined as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest. Artificial superintelligence does not yet exist; it represents a hypothetical state of AI. The intention behind it is to surpass human cognitive capacity, which is held back by the chemical and biological limits of the human brain. A few experts are sceptical that it will ever exist, and some have raised concerns that it could pose a threat to humanity. However, many AI researchers believe that the creation of artificial superintelligence is inevitable. In general, they distinguish it from AI and AGI by two key differences. On one side, AI is defined as narrow AI that serves a dedicated purpose. It exists and is widely used as a tool to increase productivity, encompassing a range of intelligent machine applications that perform tasks requiring human-level intelligence, built on specialized hardware and software and new machine learning algorithms. In contrast, AGI is often referred to as strong AI. It has not yet been achieved. Its goal is to enable machines to perform any task that requires the cognitive abilities of a human being. It involves several human capabilities, including consciousness, decision making and sensory perception, and is considered the prerequisite for achieving artificial superintelligence.
When Will AI Reach Superintelligence?
In a recent interview conducted by Futurism, 'Google AI Chief Says There's a 50% Chance We'll Hit AGI in Just 5 Years' (futurism.com), Shane Legg, co-founder of Google's DeepMind artificial intelligence lab, reaffirmed a prediction he made a decade ago. That prediction drew on Ray Kurzweil's 1999 bestseller 'The Age of Spiritual Machines', which stated that computational power and the quantity of data would grow exponentially for at least a few decades, and Legg is convinced it is still valid. However, in his view, there are two limiting factors to its realization. The first is that definitions of AGI rely on characteristics of human intelligence that are difficult to test because the way we think is complicated. "You will never have a complete set of everything that people can do, for example processes like developing episodic memory or the ability to recall complete 'episodes' that happened in the past," Legg said. But if researchers can develop such tests for human intelligence and an AI model passes them, then AGI has been reached. The second limiting factor, Legg added, is the ability to scale up AI training models, an important point given how many financial and human resources AI companies are already investing to build large language models like OpenAI's GPT-4. Asked where he thought we stand today on the path towards AGI, Legg said that existing computational power is sufficient and that the first unlocking step would be to start training AI systems with data that goes beyond what a human can experience in a lifetime. That said, Legg reiterated his personal stance that there is a 50 percent chance researchers will achieve AGI before the end of this decade, "but I'm not going to be surprised if it does not happen by then."
Reasons Why Superintelligence May Be Extremely Dangerous
According to the Scientific American article 'Here's Why AI May Be Extremely Dangerous–Whether It's Conscious or Not', AI algorithms will soon reach a point of rapid self-improvement that threatens our ability to control them and poses great potential risk to humanity. "The idea that this stuff could get smarter than people…. I thought this was way off…. obviously, I no longer think that way," Geoffrey Hinton, one of Google's top AI scientists, also known as 'the godfather of AI', said after he quit his job in April. He is not the only one worried. A 2023 survey of AI experts found that 36 percent fear that AI development may result in a 'nuclear-level catastrophe'. Almost 28,000 individuals have signed an open letter written and published by the Future of Life Institute, including Apple co-founder Steve Wozniak, Elon Musk, and CEOs, high-level executives and members of AI-focused companies and research units. With this letter they are requesting a six-month pause or a moratorium on new advanced AI development.
Why are they all so deeply concerned? In short, following their reasoning, AI development is going way too fast. In a US Senate hearing on the potential of AI, Sam Altman called regulation 'crucial'. Once superintelligence is achieved, AI will be able to improve itself beyond human intelligence with no human intervention, and we will have no way of knowing what it will do or how we can control it. We will not be able to simply hit the off switch, because a superintelligent AI will have thought of every possible way that we might do that and taken actions to prevent being shut off. Any defences or protections we attempt to build into these AI 'Gods' will be anticipated and neutralized with ease once AI reaches superintelligence.
OpenAI’s Vision about AGI and the Transition Towards Superintelligence
Summarizing an essay by Sam Altman, CEO of OpenAI, 'Planning for AGI and beyond' (openai.com): the mission of OpenAI is to ensure that AI systems smarter than humans benefit all of humanity. If AGI is created successfully, its technology could turbocharge the global economy and support the discovery of new scientific knowledge. AGI has the potential to give everyone incredible new capabilities, in a world where all of us have access to help for almost any cognitive task. However, AGI also comes with serious risks of misuse and societal disruption. Because the upside potential of AGI is so great, Altman does not believe it is possible or desirable for society to stop its development; instead, society and the developers of AGI must figure out how to get it right. As a result, OpenAI says it is becoming increasingly cautious with the creation and deployment of its models, and that its decisions will require much more caution than society usually applies to new technologies, and more caution than many users would like to see. Some people in AI research think the risks of AGI and successor systems are fictitious; OpenAI would be delighted if they turned out to be right, but it intends to operate as if these risks really exist. The first AGI will be just a point along the continuum of intelligence. Progress is likely to continue from there, possibly sustaining the rate seen over the past decade. If this is true, the world could become extremely different from how it is today, and the risks may be extraordinary. A misaligned superintelligent AGI could cause serious harm, as it would be capable enough to accelerate its own progress without human control. Successfully transitioning to a world with superintelligence is perhaps one of the most important projects in human history, yet success is far from guaranteed and its outcome is difficult to predict.
Conclusion
To remain competitive in developing artificial superintelligence, massive human and financial resources are required, typically available to and controlled by Big Tech companies such as Google or Microsoft. Tracking and forecasting progress toward superintelligence is complicated by the fact that key steps may occur in the dark, as developers may intentionally hide their systems' progress from competitors. In 2020, researchers demonstrated a way to give algorithms the ability to detect when they were being tested and to provide wrong responses. Humanity's insatiable inquisitiveness has propelled science and its technological applications this far. It might therefore make sense to slow down the development of artificial superintelligence through ethical and regulatory guidelines, even at the cost of curtailing exponential growth.