Singularity Arriving
Introduction
According to Wikipedia, the technological singularity, also referred to simply as the singularity, is the moment when technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. It would cause a rapid ‘explosion’ of intelligence, ultimately producing a powerful superintelligence that qualitatively far surpasses all human intelligence. Some experts believe that the singularity is a real and imminent threat, while others believe it is nothing more than science fiction. Ray Kurzweil, in his bestselling book The Singularity Is Near, predicted it will be reached by 2045. Others, like Rodney Brooks of MIT, assume that it will take at least 100 years, if it happens at all. In his 1993 essay The Coming Technological Singularity, Vernor Vinge wrote that by about 2023 we would have the technological means to create superhuman intelligence, with the result that sometime afterward the human era as it exists today would end. Moving forward, Sam Altman of OpenAI argues that we will instead experience the arrival of a ‘Gentle Singularity’ as life continues, with plenty of challenges along the way.
Thought Models driving Singularity
As neuroscience and related behavioural analysis accelerate, research into the human brain’s functionality is increasingly seen as a guide for advancing toward human-like machine intelligence. Confronted with the growing complexity of day-to-day life, humans rely on a number of thought models that allow us to act safely and efficiently in the real world. One prominent example is the Dual Process Theory popularised by Daniel Kahneman’s bestselling book ‘Thinking, Fast and Slow’. The Dual Process Theory postulates that human thought arises from two interacting processes: an unconscious, intuitive response – dubbed System 1 – followed by a much more reflective reasoning response – dubbed System 2. One interesting aspect of the Dual Process Theory is that it provides a bridge for regulating the intuitive, almost involuntary response of System 1 with the supervisory, more deliberate response of System 2. While the cerebrum region of the brain is responsible for higher cognitive functions like vision, hearing and thinking, the cerebellum integrates sensory data and governs movement, balance and posture. A recently published research paper describes a hybrid system combining analogue circuits that control motion with digital circuits that govern perception and decision-making. Contrary to the conventional von Neumann computer architecture, these systems combine computing and memory in one place, avoiding the time delay caused by transferring data between memory and processor and, above all, reducing energy and cooling requirements. Inspired by this two-systems thinking paradigm, researchers at DeepMind have developed a new agentic framework called Talker-Reasoner, in which the Reasoner updates a memory with its latest beliefs and reasoning results while the Talker retrieves this information to make decisions.
While difficult problems may rely more on System 2 and everyday skills more on System 1, most cognitive processes are a mix of both kinds of reasoning. System 1 continuously generates suggestions for System 2 in the form of impressions, intuitions, intentions, and feelings. System 2 takes this input from System 1 and turns it into decisions. Any change registered from the outside world triggers a repetition of this cycle.
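The interaction described above can be sketched in code. The following is a minimal, hypothetical illustration of a two-process agent loop in the spirit of the Talker-Reasoner description: all class and method names are invented for this sketch and do not reflect DeepMind’s actual implementation, and the “reasoning” here is a toy placeholder.

```python
# Hypothetical sketch of a two-system agent loop, inspired by the
# Talker-Reasoner description above. Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Shared memory: the Reasoner writes beliefs, the Talker reads them."""
    beliefs: dict = field(default_factory=dict)

class Reasoner:
    """Slow, deliberate System-2 process: updates beliefs from observations."""
    def update(self, memory: Memory, observation: str) -> None:
        # Toy "reasoning": record the latest observation and count the steps.
        memory.beliefs["last_observation"] = observation
        memory.beliefs["step"] = memory.beliefs.get("step", 0) + 1

class Talker:
    """Fast, intuitive System-1 process: decides based on current beliefs."""
    def decide(self, memory: Memory) -> str:
        obs = memory.beliefs.get("last_observation", "nothing")
        return f"responding to {obs}"

def agent_loop(observations):
    """Each change from the outside world repeats the update/decide cycle."""
    memory, reasoner, talker = Memory(), Reasoner(), Talker()
    actions = []
    for obs in observations:
        reasoner.update(memory, obs)           # System 2 refines the beliefs
        actions.append(talker.decide(memory))  # System 1 acts on them
    return actions
```

The key design point mirrored from the text is the shared memory: the Reasoner only writes beliefs, the Talker only reads them, so the fast path never has to wait on deliberation beyond the latest committed belief state.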
The Gentle Singularity
In a blog post published on June 10, 2025, entitled ‘The Gentle Singularity’, Sam Altman, CEO of OpenAI and a famed AI prognosticator, states that AI will contribute to the world in many ways, and that the gains to quality of life from AI driving faster scientific progress and increased productivity will be enormous; the future can be vastly better than the present. If history is any guide, we will figure out new things to do, discover new things to want, and assimilate new tools quickly. From a relativistic perspective, the singularity happens bit by bit. However, there are serious challenges to confront along with the huge upsides. We do need to solve the safety issues, technically and societally, and it is then critically important to widely distribute access to superintelligence, given the economic implications. The best path forward might be something like:
- Solve the alignment problem, meaning that we can robustly guarantee that we get AI systems to learn and act towards what we collectively really want over the long-term (social media feeds are an example of misaligned AI; the algorithms that power those are incredible at getting you to keep scrolling and clearly understand your short-term preferences, but they do so by exploiting something in your brain that overrides your long-term preference).
- Then focus on making superintelligence cheap, widely available, and not too concentrated with any person, company, or country. Society is resilient, creative, and adapts quickly. If we can harness the collective will and wisdom of people, then although we’ll make plenty of mistakes and some things will go really wrong, we will learn and adapt quickly and be able to use this technology to get maximum upside and minimal downside. Giving users a lot of freedom, within broad bounds society has to decide on, seems very important. The sooner the world can start a conversation about what these broad bounds are and how we define collective alignment, the better.
The Future of Singularity
Some experts believe that the singularity marks the key point at which Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI) will begin to emerge, showing that we have struck gold in terms of being on the right pathway. Others believe the singularity will be a nearly instantaneous, split-second affair, happening faster than the human eye can observe, so that AGI or ASI simply appears as its result. Still others speculate that the singularity might begin and then take minutes, hours, or days to run its course. The time factor is unknown: the singularity might take months, years, decades, centuries, or longer to gradually unfurl. Some experts predict that, as a consequence of the singularity, AGI will occur by 2030. A somewhat more thoughtful approach to this gambit of date guessing is the use of surveys or polls of AI experts. This wisdom-of-the-crowd approach is an accepted form of scientific consensus, and the latest polls suggest that AI experts generally believe we will reach AGI by the year 2040.

Depending on how one interprets Sam Altman’s latest blog post, it is not clear whether AGI is happening by 2030 or 2035, or whether he means ASI rather than AGI, since he refers to superintelligence. This ambiguity does not help the discussion of the singularity and its potential consequences. One element of Altman’s post that has drawn criticism, especially from AI ethicists, is that the era of AGI and ASI seems to be portrayed as solely uplifting and joyous. Not everyone shares this highly optimistic view. AI experts are largely divided into two major camps regarding the impacts of reaching AGI or ASI. One camp consists of the AI doomers, who predict that AGI or ASI will seek to wipe out humanity. The other camp comprises the so-called AI accelerationists.
They tend to contend that advanced AI, namely AGI or ASI, will solve humanity’s problems, such as curing cancer and other diseases, and will find better means to fight world hunger and poverty. In their view we will see immense economic gains, liberating people from the drudgery of daily toil. AI will work hand-in-hand with humans, and this ‘Gentle Singularity’ will not usurp humanity. AI of this kind may be the last invention humans ever make, but that is good in the sense that such AI will invent things we never could have envisioned.
Conclusion
The rate of technological progress will continue to accelerate, and it will develop to the point where individuals can make use of its potential based on their own preferences. There will be very difficult consequences, such as entire classes of jobs disappearing, but on the other hand the world will be getting so much richer so quickly that we will be able to seriously entertain and realize new ways of living that we never could before. However, one has to keep the downside of the singularity in mind as well, as there is no guarantee that this positive outlook will prevail over Vernor Vinge’s devastating forecast.