AI Threat or Not? Credit: muscatdaily.com
Introduction
During a recent conversation with Senator Bernie Sanders at Georgetown University, Geoffrey Hinton, one of the three so-called ‘godfathers’ of AI, laid out the alarming ways in which he expects AI to upend society. One reason, he said, is that AI’s rapid deployment will differ fundamentally from past technological revolutions, which created new classes of jobs. “The people who lose their jobs will not have other jobs to go to,” Hinton said, as quoted by Business Insider. “If AI gets as smart as people or smarter, any job they might do can be done by AI.” Hinton pioneered the deep learning techniques that are foundational to the generative AI models fuelling today’s AI boom. His work on neural networks earned him the 2018 Turing Award, alongside University of Montreal researcher Yoshua Bengio and Yann LeCun, the former chief AI scientist at Meta. All three scientists have been outspoken about the risks associated with AI, but it was Hinton who drew the most attention by stating that he regretted his life’s work after stepping down from Google in 2023. As for timelines, he believes that we are not far away from achieving Artificial General Intelligence (AGI), a hypothetical AI system with human or superhuman levels of intelligence that is able to perform a vast array of tasks. “Until quite recently, I thought it was going to be like 20 to 50 years before we have reached AGI,” Hinton said in 2023. “And now I think it may be 20 years or less.”
Why Yann LeCun Is Leaving Meta
Last week, news broke that Yann LeCun, Turing Award winner and one of the pioneers of modern artificial intelligence, is stepping down from his position as Meta’s chief AI scientist by the end of the year to start a new AI company, with details to be announced later. The news confirmed earlier rumours that LeCun had been sidelined after Meta CEO Mark Zuckerberg assembled his new Superintelligence Lab, headed by Alexandr Wang, co-founder and former CEO of Scale AI. Long before his departure, LeCun had made it clear that he was dissatisfied with the direction Meta’s AI efforts were taking. While most work in AI today focuses on Large Language Models (LLMs), LeCun has been very vocal about their limitations, particularly their inability to solve real-world problems. So which direction will LeCun take after leaving Meta? In the LinkedIn post confirming his departure, he wrote: “I am creating a startup company to continue the Advanced Machine Intelligence (AMI) research program I have been pursuing over the last several years with colleagues at New York University. The goal of the startup is to bring about the next big revolution in AI: systems that understand the physical world, have persistent memory, can reason and can plan complex action sequences.” LeCun has been consistent about how he believes these goals will be achieved. A long-time advocate of self-supervised learning, he has in recent years been working on ‘world models’ that are trained through self-supervised learning, i.e. without the need for data labelled by humans.
LeCun’s New Venture
A Meta spokesperson recently confirmed that AI legend Yann LeCun is leaving Meta and striking out on his own. In LeCun’s view, his new endeavour is meant to bring about the next big revolution in AI: systems that understand the physical world, have persistent memory and can reason. LeCun is fascinated by a branch of AI research known as ‘world models’. He has spent more than a year arguing that LLM research, the backbone of systems like ChatGPT, is no longer a worthy area of pursuit, at least with respect to hypothetical advances such as Artificial General Intelligence (AGI) or ‘superintelligence’. After the 2022 release of ChatGPT pushed AI to the centre of problem-solving in the tech world, LeCun became notable for his outspoken scepticism about the widely discussed need for AI safety. He told the Wall Street Journal last year that the idea that AI poses a threat to humanity is ‘complete B.S.’. LeCun believes we need AI models that can comprehensively understand the physical world through sensory inputs such as vision, and that can reason about interactions with and changes to that world. In his view, current AI systems are far from performing these tasks; they are, in fact, ‘dumber than cats’.
Bloomberg writes that Meta plans to partner with LeCun’s startup, though details are still being finalized. In a memo, LeCun wrote that his former employer will be a partner in the new company and will have access to its innovations. LeCun strongly prefers to focus on AMI rather than AGI, which many AI researchers consider the ‘Holy Grail’ of AI. In the memo, he reportedly argued that pursuing AMI in an independent entity, as realized by his startup, is the way to push past the limits of present-day AI and to maximize its broad impact.
Gary Marcus and his Critique of LeCun
Gary Marcus began criticising traditional neural networks, as proposed by Yann LeCun and others, with his first publication in 1992, calling instead for hybrid neuro-symbolic architectures. Taking a strong position in favour of neuro-symbolic cognitive models in his 2001 book ‘The Algebraic Mind’, he anticipated current troubles with hallucinations and unreliable reasoning. He was also among the first, in 2019, to warn how these limits would apply to LLMs, emphasizing their lack of stable world models.
According to Gary Marcus, Yann LeCun, with the support of Meta and the unwitting assistance of the press, has for more than a decade run one of the most successful Public Relations (PR) campaigns in AI, persistently allowing himself to be described as the inventor of ideas, arguments and techniques that were in fact not invented by him. The PR campaign connected to the new startup LeCun is launching declares him a genius, a glorification that, according to Marcus, is greatly exaggerated. LeCun is primarily known for his expertise in Convolutional Neural Networks (CNNs), his critique of LLMs and the scaling hypothesis, his advocacy of commonsense reasoning, and his advocacy of world models. CNNs are, without a doubt, a significant contribution to today’s AI. They are applied in image recognition, speech recognition, natural language processing, recommendation systems and many other areas, and until LLMs became dominant for solving complex problems, they were one of the leading techniques in machine learning. There is also no doubt that LeCun played a major role in developing CNNs. Another argument LeCun has made in recent years is that LLMs lack common sense and are poor at physical reasoning. According to Marcus, however, until a few years ago LeCun hardly emphasized these problems: in his deep learning review published in Nature in 2015, common sense is mentioned only once, in passing. Yann LeCun has, without a doubt, made genuine contributions to AI, but the myths about the originality of his thought are simply not true. Whether he can produce genuinely original ideas with his new startup remains to be seen.
Conclusion
Throughout history, many of humanity’s technological advances have generated confusion and fear during the period of adaptation and change. The rise of ChatGPT and similar AI systems has been accompanied by a sharp increase in anxiety about AI’s consequences. Worries peaked in May 2023, when the nonprofit research organization Center for AI Safety released a statement declaring that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. The statement was signed by many key players, including Geoffrey Hinton and Yoshua Bengio, two of the so-called ‘godfathers’ of AI. The biggest threat, however, is the longer-term problem of introducing something as radical as ‘superintelligence’ and failing to align it with human values and intentions. Whatever approach well-known experts like Yann LeCun take to solving today’s most pressing problems, it seems unlikely that the ‘doom scenario’ Geoffrey Hinton warns of will prevail.