If Machines Think Like Humans, Has Singularity Finally Arrived?

Posted by Peter Rudin on 1. November 2024 in Essay

Singularity is arriving (Credit: medium.com)

Introduction

According to Wikipedia, the technological singularity, often simply called the singularity, is the moment when technological growth becomes uncontrollable and irreversible, with unforeseeable consequences for human civilization. It is expected to trigger a rapid 'explosion' of intelligence that ultimately produces a powerful superintelligence, qualitatively far surpassing all human intelligence. Some experts believe that the singularity is a real and imminent threat, while others dismiss it as nothing more than science fiction. When the singularity2030.ch website was launched about six years ago, the assumption was that singularity would be achieved by 2030. Ray Kurzweil, in his bestselling book The Singularity Is Near, predicted it will be reached by 2045. Others, like Rodney Brooks from MIT, assume that it will take at least 100 years, if it happens at all. In his 1993 essay The Coming Technological Singularity, Vernor Vinge wrote that by 2023 we would have the technological means to create superhuman intelligence, with the result that shortly afterwards the human era as it exists today would end. However, according to Kurzweil, computational capacity alone will not be sufficient for achieving singularity. He asserts that the best way to build machine intelligence is to first understand human intelligence and to image the brain from the inside. Once the physical structure and connectivity are understood, researchers can design functional models of subcellular components and synapses of entire brain regions. These multiple possible paths to an intelligence explosion make singularity more likely.

Thought Models driving Singularity

As neuroscience and related behavioural analysis accelerate, research into the human brain's functionality is increasingly seen as a guide for advancing machine intelligence towards human-like intelligence. Confronted with the growing complexity of day-to-day life, humans rely on a number of thought models that allow them to act safely and efficiently in the real world. One prominent example is the Dual Process Theory popularised by Daniel Kahneman's bestselling book 'Thinking, Fast and Slow'. The Dual Process Theory postulates that human thought arises from two interacting processes: an unconscious, intuitive response – dubbed System 1 – followed by a much more reflective reasoning response – dubbed System 2. One interesting aspect of the Dual Process Theory is that it provides a bridge for regulating the intuitive, almost involuntary response of System 1 with the supervisory, more deliberate response of System 2. While the cerebrum region of the brain is responsible for higher cognitive functions like vision, hearing and thinking, the cerebellum region integrates sensory data and governs movement, balance and posture. A recently published research paper describes a hybrid system combining analogue circuits that control motion with digital circuits that govern perception and decision-making. Contrary to the conventional von Neumann computer architecture, these systems combine computing and memory in one place, avoiding the time delay caused by transferring data between memory and processor and, above all, reducing energy and cooling requirements. Inspired by the two-systems thinking paradigm, researchers at DeepMind have developed a new agentic framework called Talker-Reasoner, whereby the Reasoner updates a shared memory with its latest beliefs and reasoning results while the Talker retrieves this information to make decisions; a minimal sketch of such a loop is shown below. As difficult problems rely more on System 2 and everyday skills more on System 1, most cognitive processes are a mix of both kinds of reasoning. System 1 continuously generates suggestions for System 2 based on impressions, intuitions, intentions and feelings. System 2 takes this information and makes decisions based on the input from System 1. Any change experienced through our connection to the outside world causes this process to repeat.
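
To make the division of labour between the two systems more concrete, here is a minimal Python sketch of a Talker-Reasoner style loop. It is only an illustrative interpretation of the idea described above, not DeepMind's published implementation: the class names, the shared memory dictionary and the toy heuristics are assumptions introduced purely for illustration.

```python
# Illustrative sketch of a two-system (Talker-Reasoner) agent loop.
# NOTE: this is NOT DeepMind's implementation; class names, the shared
# memory and the toy heuristics are assumptions made here only to show
# how a fast "Talker" (System 1) and a slow "Reasoner" (System 2)
# can interact through a shared memory of beliefs.

from dataclasses import dataclass, field


@dataclass
class SharedMemory:
    """Beliefs written by the Reasoner and read by the Talker."""
    beliefs: dict = field(default_factory=dict)


class Reasoner:
    """Slow, deliberate System-2 component: updates beliefs from observations."""

    def update(self, memory: SharedMemory, observation: str) -> None:
        # Toy 'reasoning': count how often each observation has been seen.
        memory.beliefs[observation] = memory.beliefs.get(observation, 0) + 1


class Talker:
    """Fast, intuitive System-1 component: decides using current beliefs."""

    def decide(self, memory: SharedMemory, observation: str) -> str:
        # Familiar inputs get a quick, habitual response; novel ones are
        # flagged for more deliberate treatment on the next Reasoner pass.
        if memory.beliefs.get(observation, 0) > 1:
            return f"fast response to familiar input '{observation}'"
        return f"defer: '{observation}' is unfamiliar, needs more reasoning"


if __name__ == "__main__":
    memory, reasoner, talker = SharedMemory(), Reasoner(), Talker()
    for obs in ["greeting", "greeting", "complex request"]:
        reasoner.update(memory, obs)       # System 2 writes beliefs
        print(talker.decide(memory, obs))  # System 1 reads them to act
```

The point of the sketch is only the flow of information: the deliberate component writes its conclusions into memory, and the fast component consults that memory whenever it has to respond, mirroring how System 2 supervises the intuitive output of System 1.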

Exponential Growth

The exponential growth in computing technology suggested by Moore's law is commonly cited as a reason to expect singularity in the near future. Computer futurist Hans Moravec proposed that the exponential growth curve could be extended back through earlier computing technologies. Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change and of evolutionary processes increases exponentially, following Moore's law in the same manner as Moravec's proposal, especially as applied to nano- and medical technologies. Between 1986 and 2007, machines' capacity to compute information roughly doubled every 14 months, while the computational capacity of the world's general-purpose computers doubled every 18 months. In addition, some researchers have argued that the acceleration pattern should be characterized as 'hyperbolic' rather than exponential. Kurzweil distinguishes the singularity from these ongoing improvements in AI technology, stating that the singularity will allow us to transcend the limitations of our biological bodies and brains because there will no longer be a distinction between human and machine intelligence. He also confirms his predicted date of the singularity (2045), by which time he expects computer-based intelligence to significantly exceed the sum total of all human brain power.
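
A short back-of-the-envelope calculation shows what these doubling times imply over the 1986-2007 period cited above. The figures are derived only from the doubling intervals mentioned in the text; the snippet below is an illustrative sketch, not data from the underlying study.

```python
# Back-of-the-envelope illustration of the doubling times cited above.
# Growth over a period = 2 ** (months_elapsed / doubling_time_in_months).

months_1986_to_2007 = 21 * 12  # 252 months

for label, doubling_months in [
    ("machine computation (14-month doubling)", 14),
    ("general-purpose computers (18-month doubling)", 18),
]:
    growth = 2 ** (months_1986_to_2007 / doubling_months)
    print(f"{label}: roughly {growth:,.0f}x growth over 1986-2007")
```

With a 14-month doubling time, capacity grows by a factor of about 260,000 over those 21 years; with an 18-month doubling time, by a factor of about 16,000. The gap between the two illustrates how sensitive long-range exponential forecasts are to small differences in the assumed doubling interval.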

A Philosophical View

Ever since the rise of modern neuroscience over recent decades, there has been a controversial discussion about its potential influence on topics traditionally seen as part of the domain of the social sciences and humanities. In philosophy, two distinct ways of dealing with the problems and prospects of neuroscience have been developed. Traditionally, the philosophy of neuroscience attempted to apply methods and classical approaches from the philosophy of science. However, so-called neurophilosophy takes a different approach by applying today's neuroscientific findings to classical philosophical issues. The separation of mind and body – a widely accepted theory from the 17th-century French philosopher René Descartes – established dualism as the prevailing thought model. Since the weight of evidence indicates that all mental processes occur in the brain, this classical mind/body separation has been replaced by questions such as: what are the brain mechanisms that explain learning and decision-making? Patricia Churchland, who teaches philosophy at the University of California, San Diego, is a key figure in the field of neurophilosophy, applying a multidisciplinary approach to study how neurobiology contributes to philosophical and ethical thinking. In her book 'Conscience: The Origins of Moral Intuition', Churchland makes the case that the combination of neuroscience, evolution and biology is essential for understanding moral decision-making and how we behave in social environments. "In the past philosophers thought it was impossible that neuroscience would ever be able to tell us anything about the nature of the self or the nature of decision-making," the author says. However, the way we reach moral conclusions has a lot more to do with our neural circuitry than we realize and, as a consequence, provides a bridge to singularity.

Will Singularity Arrive?

Technology forecasters and researchers disagree about whether or when singularity will arrive. Some argue that advances in AI will probably result in general reasoning systems that overcome human cognitive limitations, with an impact well beyond whatever the potential arrival of singularity might bring. Others believe that humans will modify their own biology to achieve radically greater intelligence. A number of studies of future developments focus on scenarios that combine these possibilities, suggesting that humans are likely to interface with computers, or to upload their minds, resulting in a substantial amplification of their intelligence. Robin Hanson, Professor at George Mason University, describes a hypothetical future scenario in which human brains are scanned and digitized, creating 'uploads' or digital versions of human consciousness which can be downloaded from the cloud to a next-generation individual. In his view the development of these uploads may precede or coincide with the emergence of superintelligence, implying that singularity is about to arrive. Technologists including Paul Allen, Jeff Hawkins, Steven Pinker and Gordon Moore dispute the plausibility of a technological singularity and its associated intelligence explosion. One claim is that the acceleration of intelligence is likely to run into diminishing returns rather than accelerating ones, as has been observed with previously developed human technologies.

Conclusion

In the near future we should know whether singularity is still a valid concern or whether we are indeed doomed to self-destruction, as Vernor Vinge proclaims. The exponential growth of AI, with its many positive and negative issues, has taken over our daily lives, creating a new paradigm of existence and intelligence. Hence, singularity as presented and discussed in this essay will not arrive. However, the term 'singularity' is likely to prevail, for example in Singularity University, The Singularity Group or the title of this website.
