Does AI Reduce Our Cognitive Capacity? If So, How Do We Prevent It?

Posted by Peter Rudin on 20. March 2026 in Essay

Image: Dumb with AI (Credit: www.linkedin.com)

Introduction

In our busy technical world, tools like ChatGPT, DeepSeek and Gemini have become part of everyday life. But do we really know how they work? At its core, ChatGPT is trained on massive amounts of text from many sources. When you provide a clear input, it draws on the knowledge represented in that data to generate a detailed and meaningful answer. Artificial intelligence is one of the most powerful tools available, and many of us start and end our day with AI's help, whether that means generating a morning routine, whipping up a recipe, powering through a presentation or even transforming a photo into art. But the more we let AI do the thinking, the less we use our own intelligence. Over time, this can weaken our problem-solving and creative-thinking skills. Intelligence is like a muscle: if we do not use it, we lose it. AI is helpful, but it should guide us, not think for us. The real question is not whether we should use AI but how we use it. Used wisely, AI can make us more productive, creative and informed. But if we rely on it for everything, we risk losing our skills and thinking power.

Is AI Dulling Our Minds?

A recent MIT Media Lab study reported that excessive reliance on AI-driven solutions may contribute to 'cognitive atrophy' and a shrinking of critical-thinking abilities. The study is small and not peer-reviewed, yet it delivers a warning that even artificial-intelligence experts are willing to acknowledge. When the researchers asked ChatGPT whether AI can make us dumber or smarter, it answered that it depends on how we engage with it: as a crutch or as a tool for growth. While AI excels at data processing and statistics, it lacks the ability to create truly innovative and creative solutions. Machines calculate, but they lack human experience. Although AI systems rest on sophisticated statistics and advanced mathematics and run on very fast electronic chips operating at mind-boggling computational speeds, they rely on data created by humans, and that data is largely the same across different AI platforms. Ask the same question of different AI platforms and, most of the time, their answers are very similar, because they draw on much the same underlying data. AI can tell you how to put things together, but it cannot help you build a device that relates to a specific human context and experience. Machine learning depends on statistical adjustment, whereas humans self-organize their lives in relation to meaning. Today, at least, it is difficult to imagine AI engaging in reflective thinking, and we should be wary of assuming that AI will solve our problems. Human challenges are complex and can be solved only by humans. If AI technology were truly making us cleverer, turning us into efficient, information-processing machines, why do we feel dumb so much of the time?

Does AI Cause Brain Rot?

Two years ago, 'brain rot' was named Oxford University Press's word of the year, a term that captures both the specific feeling of mindlessness that sets in when we spend too much time scrolling through rubbish online and the corrosive, aggressively dumb content that AI now helps to produce. When we use our phones we have, in theory, most of the world's accumulated knowledge at our fingertips, so why do we spend so much time dragging our eyeballs over bad content? One issue is that our digital devices have not been designed to help us think more efficiently and clearly: almost everything we encounter online has been designed to capture and monetise our attention. Each time you reach for your phone intending to complete a simple, discrete, potentially self-improving task, such as checking the news, your primitive hunter-gatherer brain confronts a multibillion-dollar tech industry devoted to capturing and holding your attention, no matter what. Until recently you could only outsource remembering and some data processing to technology; now you can outsource thinking itself. Given that we spend much of our lives feeling overstimulated and frazzled, it is no surprise that so many have jumped at the chance to let a computer do things we would once have done for ourselves, such as writing work reports or emails or planning a holiday. As we transition from the internet era to the AI era, we are consuming more low-value, ultra-processed information that has been predigested by others and is delivered in a way designed to bypass important human functions such as assessing, filtering and summarising information. Being able to Google something to get the best answer is not knowledge; yet having knowledge is incredibly important, because when you hear something questionable or possibly fake, something that contradicts all the knowledge you have accumulated, you should become alert. No wonder there are plenty of people who believe the Earth is flat. If you read a flat-Earth blog, you may think, 'Ah, that makes a lot of sense', because you lack the understanding and knowledge to judge such a statement. The internet is already awash with conspiracy and misinformation, and this will only worsen as AI hallucinates and produces plausible fakes, while young people are often poorly equipped and too naïve to recognize fake information.

The Business Risks

As the business world comes to grips with artificial intelligence, the biggest risk may be that those running the economy cannot keep up with the speed of change. As AI systems become more complex, humans are no longer able to fully understand, predict or control them. That inability to understand where AI models are heading in the coming years makes it harder for organizations deploying AI to anticipate risks and apply guardrails. As organizations connect AI systems to real-world business operations to approve transactions, write code, interact with customers and move data between platforms, they are encountering a growing gap between how they expect these systems to behave and how they actually perform once deployed. They are quickly discovering that AI is dangerous not because it is autonomous but because it increases system complexity beyond human comprehension. Such failures highlight the fact that problems do not necessarily come from dramatic technical breakdowns but from ordinary situations interacting with automated decisions in ways humans did not foresee. As organizations begin to trust AI systems with difficult decisions, experts agree that companies will need ways to intervene quickly when systems behave unexpectedly. Stopping an AI system, however, is not as simple as shutting down a single application. With agents connected to financial platforms, customer data, internal software and external tools, intervention may require halting multiple workflows simultaneously, according to AI operations experts. You need a kill switch, and you need someone who knows how to use it. The CIO and the management team should know where that kill switch is, so they can react if the system begins to act irrationally. Better algorithms will not solve the problem; avoiding failure requires organizations to build operational controls, oversight mechanisms and clear decision boundaries around AI systems.
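The kill-switch idea above can be sketched in code. The following Python snippet is a minimal illustration, not a real product's API: all class, workflow and function names are invented for the example. The pattern is simply that every automated workflow checks one shared flag before acting, so a single trip of the switch halts all registered workflows at once.

```python
import threading

class KillSwitch:
    """A shared flag that every AI-driven workflow must check before acting.
    Tripping it once halts all registered workflows simultaneously."""

    def __init__(self):
        self._tripped = threading.Event()  # thread-safe on/off flag
        self._workflows = []               # names of workflows honouring the switch

    def register(self, name):
        # Record a workflow so operators can see what a trip will halt.
        self._workflows.append(name)

    def trip(self, reason):
        # One call stops every registered workflow; returns an audit trail.
        self._tripped.set()
        return [f"halted: {w} ({reason})" for w in self._workflows]

    def allows(self):
        return not self._tripped.is_set()

def run_agent_step(switch, action):
    # Guard each automated decision behind the switch before executing it.
    if not switch.allows():
        return "blocked: kill switch engaged"
    return f"executed: {action}"

# Hypothetical usage: two agent workflows share one switch.
switch = KillSwitch()
switch.register("payments-approval")
switch.register("customer-chat")

print(run_agent_step(switch, "approve transaction"))  # proceeds normally
print(switch.trip("irrational behaviour detected"))   # halts both workflows
print(run_agent_step(switch, "approve transaction"))  # now blocked
```

The design choice matters more than the code: the guard sits in front of every automated action, not inside any one agent, which is why intervention can halt multiple workflows at the same time, as the experts quoted above recommend.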
“People have too much confidence in these systems,” says Mitchell Amador, CEO of the crowdsourced security platform Immunefi. AI systems are not completely reliable, and organizations need to build this knowledge into the design of the systems they use. If they do not, they are heading for disaster; unfortunately, many decision makers are unwilling to accept this.

Conclusion

Balancing speed of deployment against the risk of losing control is a critical issue. Leaders of AI operations are under pressure to move very quickly, yet they are also expected not to stifle experimentation, because experimentation is the foundation of learning. Even as risks grow, expectations for the technology continue to rise. These technologies are faster than any human will ever be, and in five or ten years we may be in a situation where AI is fundamentally better at solving a given problem. In the meantime, there will be many learning moments, and the organizations that mature fastest will be those that do not avoid failure but learn to manage it.
