[Image: Singularity arrived. Credit: Medium]
Introduction
Julie Hook, writing for Medium in December 2025, suggested that declining transparency shifts uncertainty and cognitive burden onto humans. An assumption embedded in most conversations about AI is that as systems become more capable, they will also become easier to understand: better tools should come with clearer explanations, stronger evaluation and more reliable oversight. The emerging evidence suggests exactly the opposite. As systems become more embedded in research, decision-making and everyday work, their output is becoming more difficult to interpret. This opacity is not only technical; it is also cultivated, arising when facts are intentionally kept secret or made purposely more complicated. The consequences are organizational, procedural and cognitive, so this is not simply a governance challenge. It is a human one. Many individuals experience the shift as recommendations delivered without any explanation of their purpose, or as decisions approved without a clear sense of how they were generated. Responsibility remains, but understanding feels less secure than it once did.
What is Cognition?
Cognition refers to the mental processes of acquiring knowledge and understanding through thought, experience and the senses. It represents the foundation for understanding our environment, planning actions and forming beliefs, involving both conscious and unconscious activities. The key cognitive processes are:
Perception: Interpreting sensory information (sight, sound, touch) to understand the world.
Attention: Focusing mental resources on relevant stimuli.
Memory: Encoding, storing, and retrieving information.
Language: Understanding and producing speech and text.
Thinking & Reasoning: Analysing information, drawing inferences, and forming ideas.
Problem Solving: Finding solutions to challenges.
Decision Making: Choosing among different options.
Why cognition matters, and why transferring its burden is consequential:
Interaction: It requires us to make sense of complex sensory input and respond appropriately.
Function: It is crucial for daily functioning, from basic awareness to complex reasoning.
Foundation: It underpins all learning, knowledge acquisition, and behaviour.
Automation is often described as a way to reduce human effort. When the responses of AI systems are interpretable, individuals can recognize anomalies and understand when judgement is required. When that interpretability declines, individuals are asked to remain accountable for outcomes without full access to the reasoning that produced them. Cognitive burden is quietly transferred: individuals remain engaged in decision making, but with less active reasoning and more monitoring. Humans are positioned as backstops, expected to intervene when something goes wrong, yet given fewer signals about when intervention is warranted.
Transparency does not reduce Complexity
A persistent myth surrounding AI-assisted work is that abstracting complexity away automatically reduces cognitive load. Cognitive science suggests otherwise. When individuals cannot anticipate system behaviour, explain unexpected outputs, or evaluate the reliability of recommendations, the brain compensates by remaining alert: vigilance replaces confidence. Instead of freeing mental resources, trust becomes effortful rather than stabilizing. These responses are not signs of resistance or poor adaptation. They are predictable reactions to environments where feedback loops are incomplete and explanatory structures are thin.
Discussions of AI transparency often focus on what is visible. Below that surface lies a deeper layer of impact. Research on complex automated systems shows that the most consequential effects often emerge indirectly: in how roles change, how responsibility shifts and how uncertainty is spread. When clarity declines, ambiguity does not vanish; humans simply grow accustomed to it. Over time, this reshapes how work feels. Decisions become harder to justify. Errors become harder to diagnose. Success and failure feel increasingly disconnected from human skill or judgement. The result is not simply inefficiency. It is an erosion of confidence, clarity and cognitive stability. Frontline workers are asked to stand behind decisions generated elsewhere, managers approve processes they cannot fully explain, and leaders are asked to rely on performance indicators that obscure fragility.
When failures occur, attention turns quickly to responsibility rather than understanding. The question becomes who approved the decision, not why the system behaved as it did. Capability without comprehension transfers the work of sense-making onto the human brain. It asks people to operate in conditions where explanation is optional but accountability is not. Trust is encouraged while understanding is constrained. This is not an argument against AI, nor a call to halt innovation. It is an observation about alignment: the capability of AI systems has advanced faster than our ability to comprehend them, and the resulting gap has not closed. The key question is whether AI will continue to progress, possibly in the direction of what has been referred to as the 'Singularity'.
Are we approaching Singularity?
In his latest book, 'The Singularity Is Nearer', the well-known futurist and inventor Ray Kurzweil states that advancements in technology follow a distinct trajectory. Technological progress, particularly in information technologies, is accelerating. He argues that this swift advancement, propelled by the 'Law of Accelerating Returns', will result in an era in which human intellect is outstripped by AI, causing profound transformations in the way humans live and in how society is structured. In his view, the pinnacle of technological advancement will be the digitization of human cognitive abilities, enabling a seamless fusion of organic and artificial intelligence. He argues that such progress stems from a deeper comprehension of the complex mechanisms governing biological neural systems, coupled with rapid progress in AI technology and ever-increasing processing capabilities. He emphasizes the crucial role that digital processing plays in replicating human cognitive functions, and he has steadfastly held the view that duplicating the intricate operations of the human mind requires enormous computational power, a stance he has maintained despite disagreement from specialists in AI research. Kurzweil believes that the complexity inherent in information processing gives rise to consciousness, which in turn determines the depth of our awareness. Two broad strategies have shaped the progression of AI: rule-based system models and neural network approaches. In rule-based systems, computers apply rules and logic that reflect the problem-solving methods of human domain experts.
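The compounding character of the 'Law of Accelerating Returns' can be illustrated with a few lines of code. This is only a toy sketch: the two-year doubling period and the unit baseline are illustrative assumptions, not figures taken from Kurzweil's book.

```python
# Toy illustration of exponentially compounding price-performance:
# if computations-per-dollar double every `doubling_period` years,
# growth after `years` is 2 ** (years / doubling_period).
# The 2-year doubling period is an assumption for illustration only.

def price_performance(years: float, doubling_period: float = 2.0,
                      baseline: float = 1.0) -> float:
    """Relative computations-per-dollar after `years` of doubling growth."""
    return baseline * 2 ** (years / doubling_period)

for y in (0, 10, 20, 40):
    print(f"after {y:2d} years: x{price_performance(y):,.0f}")
```

The point of the sketch is that the gains in the last doubling period exceed all previous gains combined, which is why such curves feel slow at first and then abrupt.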
Components of a Neural Architecture
According to Kurzweil, the architecture of our brain's neural networks serves as the foundation for a network strategy and its definition of connectivity. A network's architecture is crafted to identify complex patterns in data, patterns that might be difficult or impossible for humans to encode directly. Kurzweil highlights the groundbreaking work of Frank Rosenblatt, who created the Perceptron, a rudimentary neural network with a single layer, designed to recognize printed characters. The initial promise of these connectionist approaches was clear, yet their broad adoption was limited by their significant demand for computational resources. Inspired by the architecture of biological brains, Kurzweil suggests that a more profound insight into the workings of biological neural networks can significantly accelerate the advancement of AI. Scientists are making progress in the development of artificial neural networks by examining the structure and functionality of brain regions such as the cerebellum and the neocortex. Kurzweil describes the functions of the cerebellum, emphasizing its critical role in orchestrating movement and preserving acquired motor abilities. Its structure is composed of numerous small, simple components arranged in a tiered fashion. This design, he argues, is meticulously structured to store and execute a variety of distinct motor sequences, which allows intricate activities, such as signing an autograph or catching a baseball, to be carried out instinctively. The cerebellum is also vital for animal behaviour, but its reliance on innate, fixed behaviours limits its ability to adjust quickly to new situations. In contrast, the neocortex has a hierarchical structure that is adaptable.
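The single-layer idea behind Rosenblatt's Perceptron can be sketched in a few lines: a weighted sum of inputs passed through a hard threshold, with weights nudged toward each misclassified example. This is a minimal modern sketch, not Rosenblatt's original character-recognition system; learning the logical AND function is an illustrative toy task chosen here because it is linearly separable.

```python
# Minimal single-layer perceptron in the spirit of Rosenblatt's design.
# Weights are nudged by the prediction error on each training example.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Return (weights, bias) fit to binary-labelled input vectors."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - pred  # -1, 0 or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Logical AND: linearly separable, so a single layer suffices.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

The limitation Kurzweil alludes to is visible in this same sketch: a single layer can only separate classes with a straight line, which is why problems like XOR defeated early perceptrons and motivated the multi-layer, hierarchical architectures discussed above.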
Conclusion
The Singularity represents a moment when AI will surpass human intelligence, leading to unprecedented changes in the fabric of human existence. The notion is grounded in the idea that AI systems will eventually reach a point where they can improve themselves autonomously, using recursive self-improvement to trigger exponential growth in intelligence. Kurzweil says that the technology to achieve this will be available by 2045, which would imply that humanity as we know it today will cease to exist. It is a fascinating concept, but his prediction, already forecast by him in 2005, will fortunately not come to pass, because the concept of the Singularity is not sustainable.