AI in 2023
(Picture credit: bernardmarr.com)
Introduction
It began three and a half billion years ago in a pool of muck, when a molecule made a copy of itself and became the ultimate ancestor of all earthly life. Four million years ago, brain volumes began climbing rapidly in the hominid line. Fifty thousand years ago, Homo sapiens appeared. Five hundred years ago the printing press was invented, and just fifty years ago the first digital computer was built. The term Artificial Intelligence (AI), meaning the simulation of intelligence by computers or machines, was coined in 1956. Mapping this timeline on a chart, one realizes that we are moving along an exponential trajectory with little knowledge of the consequences of this journey. The following presents a selection of experts' views on what to expect in 2023 and beyond.
AI’s true goal may no longer be intelligence
In an article just published by ZDNET, contributing writer Tiernan Ray makes the point that some scholars of AI warn that the present technologies may never yield 'true' or 'human' intelligence. Today AI has more capabilities than at any time since the term was introduced by computer scientist John McCarthy sixty-six years ago. As one result, the application of AI is shifting its focus from intelligence to achievement. Yann LeCun, chief AI scientist at Facebook (now Meta), spoke at length with ZDNET about a paper he put out this summer on where AI needs to go. LeCun is concerned that the dominant research programme of deep learning, if it simply pursues its present course, will not achieve what he refers to as 'true' intelligence, which includes capabilities such as a computer system planning a course of action using common sense. LeCun believes that without true intelligence, such programs will ultimately prove brittle: they could break before they ever do what we want them to do. At a 2019 talk at the Institute for Advanced Study in Princeton, Demis Hassabis, co-founder of Alphabet's DeepMind research unit and a scholar considerably younger than LeCun, noted that, like an idiot savant, many AI programs can only do one thing well. DeepMind, said Hassabis, is trying to develop a richer capability: "We are trying to find a meta-solution to solve broader problems," he said.
As early as 1950, Alan Turing anticipated this change in attitude. He predicted that ways of talking about computers and intelligence would shift in favour of accepting computer behaviour as intelligent: "I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted." As the ZDNET article puts it, "As the sincere question of intelligence fades, the empty rhetoric of intelligence is allowed to float freely in society to serve other agendas."
Will AI take over? Quantum theory suggests otherwise
In a 2020 article, Will AI take over? Quantum theory suggests otherwise (theconversation.com), the question was raised whether AI will one day surpass human thinking. The rapid progress of AI has raised concerns that its abilities will grow uncontrollably, to the point where AI could take over the world and wipe out humanity. One argument against an indefinitely growing intelligence is that growing more intelligent requires predicting the future ever more accurately: to solve a problem, an agent must understand the current conditions, predict how the environment will evolve, and anticipate the outcome of the actions it applies. Quantum theory, one of modern science's key ways of explaining the universe, states that predicting the future may not be possible because the universe is fundamentally random, and recent theories in physics likewise suggest that the universe is extremely chaotic. This could imply that any growing intelligence would eventually reach a point where it can no longer improve its predictions of the future and as a result cannot further increase its intelligence; there would be no risk of a runaway AI, because the physical laws of the universe pose hard limits. However, there is an alternative perspective. What if humans perceive the universe as random and chaotic only because our cognitive and reasoning capabilities are too limited? We are aware of some of the limits of human understanding but, to paraphrase former US Defence Secretary Donald Rumsfeld, 'we don't know what we don't know'. From this perspective, the universe may instead be deterministic, and therefore fully predictable, but in a way so complex that we humans cannot grasp it. Albert Einstein argued along these lines that quantum theory was an incomplete description of the universe and that there must be hidden variables that we do not yet understand but which hold the key to determining future events.
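The chaos half of this argument can be made concrete with a textbook example (a sketch of our own, not taken from the article): the logistic map, one of the simplest chaotic systems. The parameter r = 4 and the tiny perturbation of 1e-10 are illustrative choices; the point is that two trajectories starting almost identically diverge until prediction becomes worthless.

```python
import numpy as np

# Logistic map x_{t+1} = r * x_t * (1 - x_t), chaotic for r = 4.0.
r = 4.0
x_a = 0.2            # one starting point
x_b = 0.2 + 1e-10    # a second point, differing by one part in ten billion

for t in range(1, 51):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if t % 10 == 0:
        # The gap grows roughly exponentially until it saturates near 1.
        print(f"step {t:2d}: |difference| = {abs(x_a - x_b):.6f}")
```

Because the gap roughly doubles with each step, every additional step of reliable forecasting demands about one more bit of precision in the initial measurement. An intelligence that cannot refine its measurements without limit therefore cannot refine its forecasts without limit, which is the hard ceiling the article describes.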
We will see a completely new type of computer, says AI pioneer Geoffrey Hinton
AI pioneer Geoffrey Hinton, presenting the closing keynote at this year's Neural Information Processing Systems conference (NeurIPS), said that the machine learning research community has been slow to realize the implications of deep learning for how computers are built. Computers were originally designed to execute instructions faithfully, because it was assumed that the only way to get a general computer to solve a specific task was to tell it exactly what to do. The new type of computer Hinton envisions takes a different approach: it will be 'neuromorphic' and 'mortal', meaning that every computer will form a close bond with the neural network it represents, using analog rather than digital components. Hardware and software will no longer be separate; in this 'mortal computation', the knowledge the system has learned and the hardware it runs on are inseparable. However, these new mortal computers will not replace traditional digital computers, Hinton told the NeurIPS attendees. "It will not be the computer that is in charge of your bank account that knows exactly how much money you have got," said Hinton. "It will be used for something else. For example, for putting GPT-3 in your toaster for just one US dollar. As a result, running on a few watts, you can have a conversation with your toaster." Following this prediction, Hinton spent most of his talk on a new approach to neural networks called the 'forward-forward' network, which does away with the backpropagation technique currently used in almost all neural networks. By removing backpropagation, forward-forward networks might more plausibly approximate what happens in the brain in real life. A draft paper on the forward-forward work is posted on Hinton's homepage (PDF) at the University of Toronto, where he is emeritus professor.
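To give a flavour of the idea, here is a minimal sketch of one forward-forward layer. This is our own illustration of the scheme described in Hinton's draft, not his reference implementation: the layer sizes, threshold, learning rate, and the synthetic positive/negative data are all assumptions made for the example. The core idea is that each layer is trained purely locally, pushing a 'goodness' score (the sum of squared activations) above a threshold for real ('positive') data and below it for fake ('negative') data.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

class FFLayer:
    """One layer trained with a local forward-forward-style objective:
    goodness = sum of squared activations, driven high for positive
    data and low for negative data. No errors flow between layers."""

    def __init__(self, n_in, n_out, threshold=2.0, lr=0.03):
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_out, n_in))
        self.b = np.zeros(n_out)
        self.threshold = threshold
        self.lr = lr

    def forward(self, x):
        return relu(self.W @ x + self.b)

    def train_step(self, x, is_positive):
        z = self.W @ x + self.b
        h = relu(z)
        goodness = np.sum(h ** 2)
        # Logistic probability that the input is 'positive',
        # based on goodness relative to the threshold.
        p = 1.0 / (1.0 + np.exp(-(goodness - self.threshold)))
        grad_g = p - (1.0 if is_positive else 0.0)  # d(loss)/d(goodness)
        grad_z = grad_g * 2.0 * h * (z > 0)         # chain rule through ReLU
        self.W -= self.lr * np.outer(grad_z, x)     # purely local update:
        self.b -= self.lr * grad_z                  # no backpropagation

layer = FFLayer(n_in=10, n_out=20)
for step in range(2000):
    pos = np.ones(10) + 0.1 * rng.normal(size=10)  # structured 'real' input
    neg = rng.normal(size=10)                      # random 'negative' input
    layer.train_step(pos, True)
    layer.train_step(neg, False)

print(f"goodness on positive input: {np.sum(layer.forward(np.ones(10)) ** 2):.2f}")
print(f"goodness on random input:   {np.sum(layer.forward(rng.normal(size=10)) ** 2):.2f}")
```

Because each layer's update depends only on its own activations, layers can in principle be stacked and trained without ever propagating errors backwards through the network, which is part of what makes the scheme attractive for the analog, 'mortal' hardware Hinton describes.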
AI could have a 20 percent chance of sentience in 10 years, says David Chalmers
Philosopher David Chalmers's talk, titled "Could a large language model be conscious?", was the opening keynote at this year's NeurIPS conference. According to Chalmers, the likelihood that today's most sophisticated artificial intelligence programs are sentient or conscious is less than 10 percent, but a decade from now the leading AI programs might have a 20 percent or better chance of being conscious. 2022 has been a year of claims that GPT-3 and other large language models might have achieved consciousness or sentience. Following philosopher Thomas Nagel's famous article "What is it like to be a bat?", one conception of consciousness is that there is something it is like to be a given being: the being has subjective experiences, such as the experience of seeing, feeling, or thinking. "However, consciousness must be distinguished from intelligence. Consciousness is not the same as human-level intelligence," Chalmers said, noting a consensus that many non-human animals are conscious even though their consciousness does not require human-level intelligence. Moreover, the role of emotions as part of conscious experience has to be considered as well. In the case of GPT-3 and other large language models, the software gives the appearance of coherent thinking and reasoning, offering impressive causal and explanatory analyses when asked to explain things. That ability, said Chalmers, is 'regarded as one of the central signs of consciousness'. Chalmers then outlined reasons to doubt that such programs are conscious, most notably their lack of biological embodiment and sensory experience. Those arguments, however, are not yet conclusive, because large language models are continually evolving and are just beginning to develop sensing abilities.
Conclusion
When it comes to raw computational power, machines are well on their way to providing tools that make life easier for humans, and so far no technical limits are in sight that would disrupt this exponential trajectory of innovation. The human brain is a magnificent organ capable of great creativity; from that point of view one can have serious doubts that AI will ever outsmart us. Following this line of thought, the brain is capable of creating machines that, for better or worse, become smarter and more lifelike every day. In 2023 this trend is likely to continue, with no end in sight.