AI and our Capacity to Think about Thinking

Posted by Peter Rudin on 6. November 2020 in Essay

Auguste Rodin, Le Penseur (1882). Credit: Wikimedia


The purpose of thinking is to understand our world as well as possible. Our minds have evolved to think so that we can better adapt to our environment and make smarter decisions about how to survive, live, and flourish. At a biological level, our thoughts are millions of neurons firing in our brains. These brain cells, working together, create the concepts, language, and knowledge that arise in our consciousness. Our thinking reflects our view of reality. The more accurate our view of reality, the better we can adapt to our surroundings. The function of our thinking is to make decisions that eventually guide our behaviour. Our view of reality is therefore instrumental to how we act and respond to the world. Thinking is one of the key reasons we have evolved so effectively as a species.

Metacognition: Thinking about Thinking

Metacognition refers to ‘thinking about thinking’ and was introduced as a concept by the American psychologist John Flavell in 1979. He suggested that metacognition is the knowledge we have of our own cognitive processes (our thinking). It is our ability to control our thinking processes through various strategies, such as organizing, monitoring, and adapting. Metacognition is considered a critical component of successful learning. It involves self-regulation and self-reflection on one’s strengths, weaknesses, and the types of strategies one creates. It is a necessary foundation of culturally intelligent leadership because it underlies how one thinks through a problem or situation and the strategies one creates to address it. Researchers of metacognition believe that the ability to consciously think about thinking is unique to sapient species and is indeed one of the definitions of sapience or wisdom. According to Flavell, metacognition comprises three components:

  1. Metacognitive knowledge (also called metacognitive awareness) is what individuals know about themselves and others as cognitive processors.
  2. Metacognitive regulation is the regulation of cognition and learning experiences through a set of activities that help people control their learning.
  3. Metacognitive experiences are those experiences that have something to do with the current, on-going cognitive endeavour.

As ongoing progress in neuroscience deepens our understanding of the functionality of the human brain, the traditional distinction between cognition and emotion has come under scrutiny, and thinking about thinking is gradually coming to encompass all aspects of human thought.

Combining Emotion and Cognition

The relationship between cognition and emotion has attracted the interest of philosophers and scientists for centuries. Starting with Thomas Aquinas (1225–1274), who divided the study of behaviour into two broad categories – cognition and affect – cognition and emotion were long viewed as separate systems and processes that seldom interact with each other. In the last hundred years, the approach of functional localization within the brain has also shaped a conceptual framework that separates the emotional brain from the cognitive brain. However, behavioural and neuroscientific data of the past two decades have demonstrated that the notion of strict functional brain specialization is problematic. Brain regions traditionally viewed as affective are involved in cognitive processes, and brain regions generally viewed as cognitive are also involved in affective processes. Increasingly, researchers have realized that the processes of cognition and emotion not only interact but that their neural mechanisms are integrated in the brain, so that they jointly contribute to behaviour. In his 1994 book Descartes’ Error, neuroscientist Antonio Damasio takes the position that our neurobiology does not support the distinction between reason and emotion. Instead, they are inseparably linked in a way that reflects our complexity as human beings.

AI for Emotional and Cognitive Intelligence

Just as research in psychology separated emotion from cognition over the last centuries, computer science also neglected affective factors in intelligent machines for a long time. In 1985, AI pioneer Marvin Minsky pointed out that the question is not whether intelligent machines can have any emotions, but whether machines can be intelligent without emotions. Emotional intelligence refers to the ability to recognize emotion and is an aspect of human intelligence. However, there was little serious research on emotional intelligence in computing until 1997, when MIT Professor Rosalind Picard’s book ‘Affective Computing’ was published. Affective computing refers to computing that relates to, arises from, or deliberately influences emotions, especially in human-computer interaction. It aims to create computing systems with the ability to recognize, understand, express and respond to human emotion in order to improve human affective experiences with technology. While still in its infancy, the combination of cognitive and emotional intelligence, represented as a 2-D, screen-based interactive avatar, is likely to enhance or replace the voice-based bots currently used as a replacement for FAQs (frequently asked questions). Personalized education, supported by highly competent, human-like avatars, will eventually challenge current online learning concepts, disrupting conventional university teaching models. Spending USD 80,000 for a prestigious university degree might soon be a thing of the past.
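The affective-computing loop described above – recognize an emotion, then adapt the response – can be sketched in a few lines. The keyword-based classifier below is purely illustrative (real affective-computing systems use trained models over voice, facial and textual signals); all function names and keyword lists are assumptions for demonstration.

```python
# Toy sketch of an affective-computing loop: recognize emotion, adapt response.
# Keyword matching stands in for a trained emotion-recognition model.

EMOTION_KEYWORDS = {
    "frustrated": {"stuck", "annoying", "broken", "again"},
    "happy": {"great", "thanks", "love", "works"},
}

def recognize_emotion(text: str) -> str:
    """Return the first emotion whose cue words appear in the input."""
    words = set(text.lower().split())
    for emotion, cues in EMOTION_KEYWORDS.items():
        if words & cues:
            return emotion
    return "neutral"

def respond(text: str) -> str:
    """Adapt the reply to the recognized emotional state of the user."""
    emotion = recognize_emotion(text)
    if emotion == "frustrated":
        return "Sorry this is not working - let me walk you through it step by step."
    if emotion == "happy":
        return "Glad to hear it! Anything else I can help with?"
    return "How can I help you?"
```

A real avatar would replace `recognize_emotion` with multimodal classifiers, but the recognize-then-adapt structure stays the same.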

Thought Models as Drivers for Future AI Development

As neuroscience and related behavioural analysis accelerate intelligence research, the functionality of the human brain is increasingly seen as a guide in advancing AI towards human-like intelligence. Along this path, the human capacity to think and to rethink, together with a variety of thinking processes such as ‘critical thinking’, ‘decision thinking’ or ‘design thinking’, has become the subject of AI research programs. Confronted with the growing complexity of handling our day-to-day life, we rely on a number of thought models which allow us to act safely and efficiently in the real world. One prominent example is the Dual Process Theory popularised by Daniel Kahneman’s book ‘Thinking, Fast and Slow’. The Dual Process Theory postulates that human thought arises as a result of two interacting thought processes: an unconscious, intuitive response – dubbed System 1 – followed by a much more reflective reasoning response – dubbed System 2. Our ability to assess the quality of our own thinking – our capacity for metacognition – plays a central role. If we accept that the Dual Process Theory plays a pivotal part in our own interactions with the world, the notion of exploring a similar approach for robot design, for example, is a fascinating prospect towards realising robust, versatile and safe embodied AI applications.
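As a thought experiment, the System 1 / System 2 interplay – with metacognition judging which response to trust – can be sketched as a simple control flow. Everything here (the cached-intuition model of System 1, the confidence threshold, the function names) is an illustrative assumption, not an established AI architecture.

```python
# Dual-process sketch: a fast intuitive lookup (System 1), a slow deliberate
# fallback (System 2), and a metacognitive check deciding between them.

def system1(stimulus, cache):
    """Fast, intuitive response: a cache lookup with a confidence score."""
    if stimulus in cache:
        return cache[stimulus], 0.9
    return None, 0.0

def system2(stimulus):
    """Slow, reflective reasoning (stubbed as an explicit computation)."""
    return f"reasoned({stimulus})"

def think(stimulus, cache, threshold=0.5):
    """Metacognition: accept the intuition only if confidence is high enough."""
    answer, confidence = system1(stimulus, cache)
    if confidence >= threshold:
        return answer, "system1"
    answer = system2(stimulus)
    cache[stimulus] = answer  # learning: deliberation trains future intuition
    return answer, "system2"
```

Note how repeated exposure shifts work from System 2 to System 1 – the same stimulus is handled deliberately once, then intuitively thereafter.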

Robot design based on the Dual Process Theory with Neuromorphic Hardware

One of the interesting aspects of a Dual Process Theory for robots is the fact that – provided the analogy holds – metacognition finds a natural place in this construct: it bridges the two systems by regulating the intuitive, almost involuntary response of System 1 with the supervisory, more deliberate response of System 2, based on the robot’s visual and touch-sensing feedback. By reviewing the result of the previous action, the robot improves its knowledge for future applications.

Controlling a robot in a dynamic environment takes a significant amount of computer processing power. This fact recently prompted a research team from the University of Southern California to resurrect an idea dating back more than 40 years: mimicking the human brain’s division of labour between two complementary structures. While the cerebrum region of the brain is responsible for higher cognitive functions like vision, hearing and thinking, the cerebellum region integrates sensory data and governs movement, balance, and posture. In a paper recently published in Science Robotics, the researchers describe a hybrid system based on so-called memristor hardware, combining analog circuits that control motion with digital circuits that govern perception and decision-making in a robot designed to balance the motion of a pendulum. Contrary to the conventional von Neumann computer architecture, memristors combine computing and memory in one place, avoiding the time delay caused by transferring data between memory and processor. Similar to how biological neurons operate, the signals from the sensors remain analog, which does away with the need for extra circuitry to convert them into digital signals, saving both space and power. More importantly, the analog system is an order of magnitude faster and more energy-efficient than a conventional all-digital system.
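The division of labour described above – a fast reflexive inner loop supervised by a slower deliberate outer loop – can be illustrated with a toy pendulum-balancing sketch. The pendulum model, the gains and the tuning rule below are assumptions for demonstration only; they are not taken from the Science Robotics paper.

```python
# Hybrid control sketch: a fast PD reflex (the "cerebellum"/analog layer)
# runs every step, while a slow supervisory loop (the "cerebrum"/digital
# layer) reviews each trial's outcome and keeps only gain changes that help.

def simulate(kp, kd, steps=200, dt=0.01, angle=0.3, velocity=0.0):
    """Toy inverted pendulum: gravity pushes the angle away from zero,
    the fast PD reflex pushes it back. Returns the final |angle|."""
    for _ in range(steps):
        torque = -kp * angle - kd * velocity   # fast reflexive correction
        velocity += (angle + torque) * dt      # destabilizing term + control
        angle += velocity * dt
    return abs(angle)

def supervisory_tuning(kp=2.0, kd=1.0, rounds=5):
    """Slow deliberate loop: metacognitive review of each trial, accepting
    a stronger damping gain only when it actually improves the result."""
    best = simulate(kp, kd)
    for _ in range(rounds):
        trial = simulate(kp, kd * 1.5)
        if trial < best:                       # keep the change if it helped
            kd, best = kd * 1.5, trial
    return kd, best
```

With no control (`kp = kd = 0`) the angle diverges; the reflex alone stabilizes it, and the supervisory loop can only make the outcome equal or better.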

Brain-Focused AI and the future of Philosophy

Ever since the rise of modern neuroscience, there has been a controversial discussion about its potential influence on topics that were traditionally seen as part of the domain of the social sciences and humanities. In philosophy, two distinct ways of dealing with the problems and prospects of neuroscience have developed: while the philosophy of neuroscience tries to apply methods and classical approaches from the philosophy of science to neuroscience, so-called neurophilosophy takes a different approach by applying neuroscientific findings to classical philosophical issues. The separation of mind and body, a widely accepted theory from the 17th-century French philosopher René Descartes, established dualism as the prevailing thought model. Since the weight of evidence indicates that all mental processes occur in the brain, this classical mind/body separation has been replaced with a range of questions, such as what brain mechanisms explain learning, decision-making, self-deception, and so on. Patricia Churchland, who teaches philosophy at the University of California, San Diego, is a key figure in the field of neurophilosophy, applying a multidisciplinary approach to contemplating how neurobiology contributes to philosophical and ethical thinking. In her book ‘Conscience: The Origins of Moral Intuition’, Churchland makes the case that neuroscience, evolution and biology are essential for understanding moral decision-making and how we behave in social environments. In the past, “philosophers thought it was impossible that neuroscience would ever be able to tell us anything about the nature of the self or the nature of decision-making,” the author says. “However, the way we reach moral conclusions has a lot more to do with our neural circuitry than we realize. We are fundamentally hard-wired to form attachments, for instance, which greatly influence our moral decision-making. Also, our brains are constantly using reinforcement learning – observing consequences after specific actions and adjusting our behaviour accordingly.”


The history of science reflects a gradual process whereby speculative philosophy cedes to increasingly well-grounded experimental disciplines, as experienced in physics, chemistry, biology, astronomy and, more recently, neuroscience. Ongoing research shows that our individual neuro-architecture is heavily influenced by genetics, being roughly 40 to 50 percent heritable; culture can evolve even if intelligence does not. Humans in ancient times lacked smartphones and spaceflight, but we know from studying philosophers such as Buddha and Aristotle that they were just as clever. Our brains did not change; our culture did.
