What is AGI?
Introduction
Hoping for exponential gains, venture capital continues to pour billions of US dollars into artificial intelligence development. The research community, however, is not convinced, warning that the tech industry is approaching a 'dead end'. In the US, technology companies like OpenAI and Anthropic expect success by focusing on so-called 'scalable AI', a development approach that prioritizes rapid financial growth over useful technology. For some researchers, Artificial General Intelligence (AGI), considered the 'Holy Grail of AI' that scientists have been dreaming of for decades, offers a path for managing the growing complexity of AI applications.
Have we reached the Peak of AI?
Not long ago the US economy experienced an incredible boom in the technology sector. The country's GDP is growing better than expected. Microsoft just became the second USD 4 trillion company in history, only weeks after Nvidia became the first. Something interesting is happening, however. Take artificial intelligence out of the economy and you find a country struggling with stagnation. Strip away the effects of the ever-growing AI hype and the record-breaking numbers mask a dark storm swirling just below the surface. Although Wall Street appears to be doing great on paper, just a tiny handful of technology companies are running the show. A recent analysis by the Financial Times noted that, although stocks in most sectors did very well, actual profits are falling, a red flag for the so-called 'Magnificent Seven': Nvidia, Amazon, Google, Tesla, Microsoft, Apple and Meta. Without these giants, the performance of non-technology companies has been pretty bad.
An analysis by CNBC claims that 26 percent of the stock market's explosive growth over the last three months came from the lavish spending of the Magnificent Seven. Without their outsized money moves, the stock market would essentially stagnate. It is important to note that the stock market is not representative of the entire economy. Looking at small- and medium-sized businesses not reflected in the stock market, the Financial Times concludes that corporate profits hardly grew in the second quarter compared with the year before. We also see this in GDP growth (the total monetary value generated by a country), which grew 3% between Q1 and Q2. Of that 3%, however, the economist Paul Kedrosky estimates that a blistering 40 percent came from massive spending on AI which, as he points out, has yet to actually make anyone any money. With effects of this magnitude, Kedrosky argues, AI spending is basically a 'massive private sector stimulus program.'

As if that were not enough, the little money we do take home is caught in an all-out tug-of-war between Donald Trump and the Federal Reserve, the central bank that sets lending rates. The Federal Reserve is locked in a 'damned if you do, damned if you don't' scenario as far as controlling inflation is concerned, at a time when Trump's historically high tariffs are expected to send prices of goods and services to the moon. The AI boom has thus become both a lifeline and a smokescreen, breathing temporary life into the market while hiding the problems brewing beneath it. We have reached a peak where the gains expected in 2025 are largely inaccessible to everyday Americans struggling with fewer job opportunities, stagnant wages, and rising costs.
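Kedrosky's attribution can be made concrete with a back-of-envelope calculation. The 3% growth figure and the 40% share are the numbers quoted above; the split is his estimate, not official data:

```python
# Back-of-envelope check of the Kedrosky estimate cited in the text
# (figures as quoted in the article, not official statistics).
gdp_growth_pct = 3.0        # reported Q1-to-Q2 GDP growth, in percent
ai_share_of_growth = 0.40   # Kedrosky's estimate: ~40% driven by AI spending

ai_contribution_pp = gdp_growth_pct * ai_share_of_growth
rest_of_economy_pp = gdp_growth_pct - ai_contribution_pp

print(f"AI spending contribution: {ai_contribution_pp:.1f} percentage points")  # 1.2
print(f"Rest of the economy:      {rest_of_economy_pp:.1f} percentage points")  # 1.8
```

On these numbers, only about 1.8 of the 3 percentage points of growth would come from the rest of the economy, which is what makes the 'private sector stimulus program' framing plausible.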
A Critical Report from AI Researchers
Published in March 2025, a new report documenting the findings of a survey of 475 AI researchers offers a resounding rebuff to the technology industry's long-preferred method of achieving AI gains: scaling up so-called generative models, which requires ever-larger data centers with growing numbers of processors for training. "The vast investments in scaling these models, unaccompanied by any comparable efforts to understand what was going on, always seemed to me to be misplaced," Stuart Russell, a computer scientist at UC Berkeley who helped organize the report, told New Scientist. About a year ago, he added, it started to become obvious to everyone that the benefits of scaling in the conventional sense had plateaued. The energy demand is just as staggering: Microsoft signed a deal to fire up an entire nuclear power plant just to power its data centers, with its rivals Google and Amazon also penning splashy nuclear energy deals. The premise that AI could be indefinitely improved by scaling was always on shaky ground. Case in point: the technology sector's recent existential crisis precipitated by the Chinese startup DeepSeek, whose AI model could well match the US flagship, multibillion-dollar chatbots at purportedly a fraction of the training cost and power. The writing had been on the wall before that, of course. In November last year, reports indicated that OpenAI researchers found that the upcoming version of the GPT large language model showed significantly smaller improvements over its predecessor than previous versions had, and in some cases no improvement at all. Given that Artificial General Intelligence (AGI) is what AI developers all claim to be their end game, it is safe to say that scaling is widely seen as a dead end on that path.
The Reality and Myth of AGI
The idea of creating an artificial mind that can rival or exceed human intelligence has persisted for a long time. Some of the earliest examples can be found in ancient myths and legends, such as the golems of Jewish folklore, the automata of Greek mythology or the mechanical men of Hindu epics. In modern times, the concept of Artificial General Intelligence (AGI) has been discussed by many thinkers and writers, such as Alan Turing, John von Neumann, Isaac Asimov, Ray Kurzweil and Nick Bostrom. Artificial Intelligence (AI) has contributed heavily to solving specific problems, but we are still far away from AGI, the 'Holy Grail of AI' that scientists have been dreaming of for decades. Already in 2021, OpenAI CEO Sam Altman theorized that the payoff should come from near-exponential improvements to AI's capabilities: if we spend more money, there is no reason the technology cannot get better, and if you spend enough money, you might just unlock AGI, the point at which, in his view, our chatbots achieve human-level intelligence. The fact is, however, that this approach has shown little success. Although the field of AI has long pursued the kinds of general-purpose, human-level abilities captured by the term AGI, the rise of more general capabilities in neural network models has stimulated discussion about directions forward and the implications of success, alongside serious doubts, even as less critical observers claim AGI is within reach. Reaching AGI may require the integration of human-like capture and context-sensitive recall, including some kind of structured, episodic memory, and efforts are underway to supplement Large Language Models (LLMs) with external memory mechanisms. While AI models can detect correlations in vast datasets, they struggle with causal inference and counterfactual reasoning. Understanding cause-and-effect relationships is essential for robust decision-making and scientific discovery.
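The episodic-memory idea mentioned above can be illustrated with a minimal retrieve-then-prompt sketch: past interactions are stored as episodes and the most similar ones are recalled as context for the next prompt. This is an illustration under stated assumptions, not any particular system's implementation; a toy word-overlap score stands in for the learned embedding similarity a real system would use, and the class name `EpisodicMemory` is hypothetical.

```python
# Minimal sketch of an episodic memory for a language model.
# Assumption: a retrieve-then-prompt design; Jaccard word overlap is a
# stand-in for real embedding similarity.

class EpisodicMemory:
    def __init__(self):
        self.episodes: list[str] = []  # past interactions, newest last

    def write(self, text: str) -> None:
        self.episodes.append(text)

    def recall(self, query: str, k: int = 2) -> list[str]:
        """Return the k stored episodes most similar to the query."""
        q = set(query.lower().split())

        def score(ep: str) -> float:
            e = set(ep.lower().split())
            return len(q & e) / len(q | e) if q | e else 0.0  # Jaccard overlap

        return sorted(self.episodes, key=score, reverse=True)[:k]

memory = EpisodicMemory()
memory.write("user asked about nuclear power deals for data centers")
memory.write("user prefers short answers with concrete figures")
context = memory.recall("who signed nuclear power deals", k=1)
# `context` would be prepended to the model prompt before generation
print(context)
```

The point of the sketch is the division of labour: the model itself stays stateless, while structured recall supplies the context-sensitive memory the paragraph above describes as missing.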
Causal reasoning with large language models is therefore an important research direction. Human intelligence, moreover, develops through rich sensorimotor interactions with the world, and current multimodal models seem to lack a deep understanding of physical reality, struggling to sense, reason and interact effectively in real-world environments. As it appears, AI researchers still have a long way to go before AGI is reached, and to some this quest leads to a dead end, regardless of how much money is provided.
Conclusion
The current AI landscape is increasingly shaped by economic interests and competing approaches to governance. While AGI is a fascinating and controversial topic, there is a gap between its myth and its reality. The myth of AGI rests on several assumptions and arguments that may not hold true. The reality is much more complex, loaded with open questions and challenges that must be addressed before AGI can become a reality. The future of AGI is hard to predict, but it is likely to have profound implications for humanity and society. Depending on how we approach and manage its development and deployment, AGI could be a source of great benefit or great harm, or both.
Hello Peter,
your impressive, excellent essay series provides much value in this complex context, many thanks.
This essay also helps me better weigh my own observations. Three items in particular come to mind where I recognize limits, a dead end.
1) AI implementation conflicts with, and corrupts, established values and metrics: the Google Pixel 10 Pro's large AI zoom generates part of the zoomed image via AI (it may hide or invent objects and details, i.e. hallucinate). I use a Pixel 9 Pro XL, also with 16 GB RAM, and see no need to upgrade for this mega-zoom feature.
2) LLMs are mostly based on writings that may be wrong but are already accepted (in history, 'the winner writes it all'), or, more recently, Trump's assaults on facts. More importantly, intuition (of which causal reasoning is a part) is not documented; in my experience it is the vast basis of humans' important, complex decisions. It is also time-bound, hence cannot really be learned.
3) Stock market: besides the 'fictive' growth you outlined, driven by the US giants' massive investment in AI and cloud infrastructure, there is the very disturbing and erratic misuse of power by Trump with tariffs (e.g. 39% on Switzerland, why?). I am currently observing the market mainly via Chinese stocks, where I assume capital may continue to move (BABA, HESAI, etc.).
A pretty mad Zeitgeist, in which a coming AI peak and burst for the giant companies is a real possibility (it could be initiated by a single event, followed by a snowball effect, today's 'going viral' applied to stock markets, panic mode).
I believe the major barrier to AGI is my second point.
Best greetings, and again many thanks for the way you have guided us through this complex key topic via your essays for close to ten years now.
Hannes