Humans and their tools (Credit: history.com)
As of March 22nd, 2023, more than 1,800 signatories, including Elon Musk and cognitive scientist Gary Marcus, had called for a six-month pause in the development of systems ‘more powerful’ than GPT-4. The Future of Life Institute – the think tank that coordinated the effort – suggests that AI labs should use this pause to jointly develop and implement a set of shared safety protocols. Confidence in the positive value of AI has been shaken; the renewed desire for truth and reality may signal a reassessment of AI’s value proposition.
Is The Six-Month Pause Overhyped?
Two prominent figures in the artificial intelligence (AI) industry, Yann LeCun, chief AI scientist at Meta, and Andrew Ng, founder of DeepLearning.AI, argued against the proposed pause on the development of more powerful AI systems. During an online discussion, Ng and LeCun agreed that some regulation is necessary, but not at the expense of research and innovation. They argued that a pause on deploying new AI models would be counterproductive, and they called for more collaboration and transparency among researchers, governments and corporations to ensure the ethical use of these models. Ng explained that the field of AI has seen remarkable advances in recent decades, especially in the last few years: deep learning techniques have enabled generative AI systems that can produce realistic texts, images and sounds. He also remarked that the call for a six-month pause beyond GPT-4 was a terrible idea, adding: “We must balance the huge value AI is creating against its realistic risks.” LeCun said that there are probably several motivations driving individuals to sign this letter; some of them are perhaps worried that Artificial General Intelligence (AGI) might eliminate humanity at short notice. “But why slow down the progress of knowledge and science? I do not see the point of regulating research and development. I do not think that this serves any purpose other than reducing the knowledge that we could use to make technology better and safer.”
Is Life Analog or Digital?
The essence of life is information, but information is not synonymous with life. To be alive, a system must not only hold information but process and use it. It is the active use of information, not its passive storage, that constitutes life. There are two principles for representing information: ‘analog’ and ‘digital’. Digital computers deal only with zeroes and ones; analog computers process continuous signals. An LP record gives us music in analog form, a CD in digital form. Are humans analog or digital? The information could be found in two places: in our genes and in our brains. The information in our genes is certainly digital, coded in the four-level alphabet of DNA. The information in our brains is still a great mystery. We do not know how the human capacity to memorize really works. It seems likely that memories are recorded in variations of the strengths of the synapses connecting the billions of neurons in the brain with one another, but we do not know how those strengths are varied. It could well turn out that the processing of information in our brains is partly digital and partly analog. We cannot expect analog life to have a better chance of surviving than digital life unless – sometime in the future – we upload ourselves to the cloud rather than download ourselves to silicon chips in a computer center. Up- and downloading from the cloud as a means of survival seems the more realistic option for preserving reality as the foundation of human existence.
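The digital nature of genetic information can be made concrete: a four-letter alphabet means each DNA base carries exactly two bits. The following sketch (helper names are hypothetical, chosen for illustration only) packs a sequence into raw bits and recovers it losslessly – the hallmark of digital encoding:

```python
# Toy illustration: the four-letter DNA alphabet is digital --
# each base fits in exactly 2 bits, so 4 bases fit in 1 byte.
BASE_TO_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BITS_TO_BASE = {v: k for k, v in BASE_TO_BITS.items()}

def encode(sequence: str) -> bytes:
    """Pack a DNA sequence into bytes, two bits per base."""
    bits = 0
    for base in sequence:
        bits = (bits << 2) | BASE_TO_BITS[base]
    n_bytes = (2 * len(sequence) + 7) // 8      # round up to whole bytes
    bits <<= n_bytes * 8 - 2 * len(sequence)    # left-align the payload
    return bits.to_bytes(n_bytes, "big")

def decode(data: bytes, length: int) -> str:
    """Recover the original sequence exactly -- no information is lost."""
    bits = int.from_bytes(data, "big") >> (len(data) * 8 - 2 * length)
    return "".join(BITS_TO_BASE[(bits >> (2 * (length - 1 - i))) & 0b11]
                   for i in range(length))

seq = "GATTACA"
packed = encode(seq)
print(len(seq), "bases ->", len(packed), "bytes")
assert decode(packed, len(seq)) == seq
```

An analog signal, by contrast, has no such exact round trip: any finite representation of a continuous waveform involves approximation.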
The Relationship Between Trust and Reality
Truth is related to trust, while knowing is based on experience. Trust comes from the repeated realization that our exchange and communication with one another is truthful. Without truth there is no trust, and truth cannot be established unless we have a chance to test it. But what is reality? Is it a simulation created by our brain, or just a dream? We must engage both philosophy and neuroscience to answer this question. Neuroscience tells us that our brain creates our reality: it takes input from our senses and builds a model of the world we experience. This model is what we call ‘reality’. Philosophy tells us that reality is something that exists independently of our minds – outside our brains – and can be approached through reason and observation. So is reality created by our brain, or does it exist independently of our mind? Despite this new understanding, we still do not know much about reality, and as our world changes, the definition of reality changes with it. In his book ‘Incognito: The Secret Lives of the Brain’, Stanford professor David Eagleman, a well-known neuroscientist and expert on consciousness, argues that there is no single reality. Instead, he says, everyone constructs his or her own: “Each person constructs such a map of reality… What we take to be objective reality is really just a consensus hallucination… The reality outside our skulls is an illusion created by our brains.”
The bizarre science behind how our brains shape reality
In an article published in March 2023 in BBC Science Focus Magazine, ‘The bizarre science behind how our brains shape reality’, Professor Lisa Feldman Barrett raises the following question: do we see the world as it really is, or are we creating our own reality? Answering it engages the unique capacity of the human brain to self-reflect and to fantasize. We can consider a physical object such as a fish and use our imagination to ascribe to it new functions that are not part of its physical nature. From research we know that our capacity to communicate this imaginative process with other people via the internet has a significant impact on our daily lives. But how do human brains create this new reality? Our brain constantly receives data from our eyes, ears, nose and other sense organs. It also receives a continuous stream of sensing data from inside our body as our lungs expand, our heart beats or our temperature changes. Our brain must somehow guess the causes of this data so it can keep us alive and well. Fortunately, it has an additional source of information to help with this task: memory. Our brain can draw on a lifetime of past experiences – some of them similar to the present experience – to guess the meaning of the sensed data. Its best guess manifests itself as our action and as everything we see, hear, smell, taste and feel at that very moment. This turbulent process of mental activity happens in the blink of an eye, completely outside our awareness. Similar past experiences help, but the brain does not need an exact match: when it makes guesses about the sensed data around and within us, those guesses often form an abstract category based on function. Abstract categories are the engine that creates social reality, and this social reality is so powerful that it has influenced our genetic evolution as a species.
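The “best guess” described above – combining a lifetime of prior experience with ambiguous sense data – can be loosely illustrated with Bayesian updating. This is a toy analogy, not a model of actual neural computation; the numbers and category names are invented for illustration:

```python
# Toy "best guess": memory supplies a prior, the senses supply a
# likelihood, and the brain-as-guesser picks the most probable cause.
priors = {"dog": 0.6, "wolf": 0.1, "cat": 0.3}        # from past experience
likelihood = {"dog": 0.5, "wolf": 0.45, "cat": 0.05}  # P(ambiguous growl | cause)

# Bayes' rule (up to normalization): posterior ~ prior * likelihood
posterior = {h: priors[h] * likelihood[h] for h in priors}
total = sum(posterior.values())
posterior = {h: p / total for h, p in posterior.items()}

best_guess = max(posterior, key=posterior.get)
print(best_guess, round(posterior[best_guess], 2))  # the guess we "perceive"
```

Even though the sound alone is ambiguous between dog and wolf, the prior drawn from experience tips the guess decisively – which is the point of the passage: perception is inference, not a raw readout of the senses.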
Hence, we might have more control over reality than we think, and more responsibility for it than we realize or want.
In an open letter, ‘Slowing down development of AI systems passing the Turing test’, Yoshua Bengio, a co-signer of the letter published by the Future of Life Institute, makes the point that the consequences of the short- and medium-term risks – especially the manipulation of public opinion and massive disinformation – are easy to predict. The longer-term risks, such as AI systems that cause harm despite their programmers’ intention to write unbiased code, require further study. “There is an urgent need to regulate these systems by aiming for more transparency and oversight to protect society. I believe, as many do, that the risks and uncertainty have reached a level that requires an acceleration in the development of our governance mechanisms”, he states. However, there is no guarantee that this will build the trust needed for the advancement of digital technologies. Making reality the key benchmark, we should reassess AI’s value for what it is: a powerful tool, but not more.