With Self-Reflection to Better Decisions: What About AI?

Posted by Peter Rudin on 21. May 2021 in Essay

Intelligent Assistants (Credit: www2.deloitte.com)


According to Wikipedia, self-reflection is a process of communicating internally with oneself: thinking about one’s own character or behavior, analysing the reasons that caused the behavior and what its outcome implies. Self-reflection helps individuals in several ways. First, self-reflection fortifies an individual’s emotional stability and assists in building two parts of one’s emotional intelligence: self-awareness and self-concept. Self-awareness enables individuals to comprehend their feelings and recognize their effect on others. Self-concept includes the capacity to control or redirect troublesome feelings and to adjust to changing circumstances. Second, self-reflection enhances a person’s self-esteem and provides transparency for decision-making, effective communication and building influence. Third, self-reflection can also be viewed in the context of Daniel Kahneman’s ‘Thinking, Fast and Slow’, with ‘Slow’ addressing humans’ capacity to self-reflect and ‘Fast’ raising the question of how AI can support and improve decision-making beyond applying impulse and/or intuition under time-critical conditions.

From Self-Reflection to Self-Awareness

What distinguishes self-awareness from self-reflection is the use of reflective and evaluative processes based on individual experiences. These processes enable individuals not only to understand their own strengths and weaknesses but also to understand how others perceive them. This two-component conceptualisation of self-awareness is outlined by social psychologist Prof. Roy Baumeister in his 2005 book ‘The Cultural Animal’, suggesting that self-awareness is about “Anticipating how others perceive you, evaluating yourself and your actions according to collective beliefs and values and caring about how others evaluate you.” Self-awareness begins in childhood, around six months after birth, with the ability to experience oneself as an independent being, for example through recognising oneself in mirrors or expressing self-conscious emotions like joy or sadness. From childhood on we continuously experience change and new encounters. All of these expose us to a mindset of learning, raising our self-awareness through testing our strengths and weaknesses while fostering our beliefs, attitudes and emotions. Neuroscience and behavioral neuropsychology measure self-awareness in relation to various cognitive processes involved in the concept of self-knowledge, such as self-referencing, self-representation and self-regulation. This research suggests that there is no specific ‘self-spot’ located in the brain. The individual’s sense of self is distributed throughout the brain, with contributions from multiple sub-components within different regions, including areas known for self-evaluative and self-reflective processes.

Opening a new Frontier in AI-Development

Continuous learning and decision-making are key activities performed by individuals and organizations as part of today’s socio-economic environment. For the past ten years AI-research has largely contributed to detecting patterns in the huge data-pools generated as part of the digital transformation process. Many useful applications have emerged; however, it has become obvious that humans and AI-systems are increasingly overwhelmed by the complexity of problem-solving and decision-making. Biased data and the lack of human resources for dealing with this complexity are generally considered among the bottlenecks explaining why AI has reached a tipping point. Moreover, the unresolved issues of handling common sense or semantic understanding indicate the need for a new direction which is more focused on the observation of human behavior. Stuart Russell’s best-selling 2019 book ‘Human Compatible: Artificial Intelligence and the Problem of Control’ sets a landmark along this new AI-approach. Sensing and recording human activity as a learning process provides a means for developing an intelligent ‘robot’ that mirrors our own life experiences, interacting with the outside world similar to the way a child learns. In doing so, we utilize humans’ inherent ability to self-reflect and judge, without the need to label the massive amounts of training data required by conventional machine-learning applications. The communication between a human and their robot-assistant relies on existing technologies such as textual inquiries, voice-bots or 2-D interactive avatars. 3-D humanoids or similar ‘robotic’ devices might be applied as well. The primary focus of this new AI-approach is the artificial intelligence provided by the device, not its physical representation. In the future we might see new virtual reality technologies making the interaction more attractive and efficient.
Adding the capacity for observing behavior and external events based on real-time sensing technology generates a knowledge base far beyond our ability to memorize and interpret the continuous influx of information we are exposed to. Hence, the primary design goal of the robot is to support humans in overcoming the limits of learning and to provide support in the decision-making process. Through this interaction the robot also assists humans in their personality-development process and its associated ethical concerns. Most importantly, human motivation remains the driver that controls this process. How much authority we want to give these ‘assistants’ in carrying out specific tasks is up to us, or to the organization engaged in AI-focused automation.

Prerequisites for an Artificial Self

In a research paper, Prerequisites for an Artificial Self (nih.gov), published by the journal ‘Frontiers in Neurorobotics’ in February 2020, the authors apply both analytic sciences such as psychology and neuroscience and synthetic sciences such as robotics to investigate the developmental processes that shape the self. People can usually easily recognize their own body and the results of their own actions. Topics such as body ownership and agency that have traditionally been investigated in philosophy have recently gained attention from other disciplines, such as brain-research, cognitive and behavioral neurosciences and robotics. In the field of AI, an intelligent agent is an autonomous artificial entity that interacts reactively with the environment, socially with other agents, and proactively in a goal-directed manner. In robotics, intuitive human interaction in natural and dynamic environments becomes more and more important and requires skills such as self-other distinction and an understanding of agency effects. Developmental robotics addresses this challenge by implementing methods and algorithms for motor and cognitive development in artificial systems, inspired by the way children learn. The emergence of the self represents a key step in cognitive development. Therefore, there is a growing interest in the developmental robotics community in implementing processes capable of enabling the experience of the self—with phenomena such as the sense of body ownership and agency—in artificial agents. On the other hand, robots can represent valuable tools to investigate phenomena of subjective experience typical of humans. What the robot sees and perceives can be logged and further analysed, which is obviously not possible in humans. If robots were capable of detecting and recognizing their own body and related movements, their interaction with the environment and with people would significantly raise the scope and value of potential applications.
However, the questions of which computational processes are needed to implement a primitive sense of body ownership and agency in robots, and of how this ontogenetic process shapes the development of the self, are still unresolved.
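To make the idea of a computational sense of agency concrete, here is a minimal, purely illustrative sketch (not taken from the cited paper; all names are hypothetical): the agent learns a forward model that predicts the sensory outcome of its own motor commands, then attributes an observed outcome to itself only when the prediction error is small. Large prediction errors suggest the movement was caused by something else, giving a primitive self-other distinction.

```python
import numpy as np

class ForwardModel:
    """Toy linear forward model: predicted_outcome = W @ motor_command."""
    def __init__(self, n_motor, n_sensory, lr=0.1):
        self.W = np.zeros((n_sensory, n_motor))
        self.lr = lr

    def predict(self, command):
        return self.W @ command

    def learn(self, command, observed):
        # Delta rule: nudge the weights to reduce prediction error.
        error = observed - self.predict(command)
        self.W += self.lr * np.outer(error, command)

def sense_of_agency(model, command, observed, threshold=0.5):
    """Attribute the outcome to the self when prediction error is low."""
    error = np.linalg.norm(observed - model.predict(command))
    return error < threshold

# Train on self-generated movements whose true outcome is 2 * command.
rng = np.random.default_rng(0)
model = ForwardModel(n_motor=3, n_sensory=3)
for _ in range(200):
    cmd = rng.normal(size=3)
    model.learn(cmd, 2.0 * cmd)

cmd = np.array([0.5, -0.2, 0.1])
self_caused = 2.0 * cmd                                  # consistent with own command
other_caused = 2.0 * cmd + np.array([1.0, 1.0, -1.0])    # external perturbation
print(sense_of_agency(model, cmd, self_caused))    # True: low prediction error
print(sense_of_agency(model, cmd, other_caused))   # False: error exceeds threshold
```

Real developmental-robotics systems use far richer predictive models, but the design principle is the same: agency is inferred from the match between predicted and observed consequences of one's own actions.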

Functionality of future AI Agents

In a research paper, Who Am I?: Towards Social Self-Awareness for Intelligent Agents (ijcai.org), presented at the ‘Twenty-Ninth International Joint Conference on Artificial Intelligence’ in July 2020, the authors suggest that to make an intelligent agent social and able to co-exist with people, the agent needs to know and understand its own self besides knowing the self of others. As agents get much closer to humans in their work and general tasks, they need to be aware of their surroundings and motives. It is crucial to ensure that agents interact safely with humans, taking advantage of their willingness to collaborate. One illustrative example is outlined in the classic science-fiction story ’Runaround’ (Asimov, 1942), famously known for the ’three rules of robotics’, stating that an agent (robot) shall not harm a human being, shall obey orders from humans, and shall protect its own existence as long as it does not compromise the other rules. The agent can behave as an individual or be part of a group of co-existing others, a functionality which is key in managing organizational issues. In a human context the self is necessary for an individual to bridge the internal dynamics of one’s mind with the social world and to make sense of them. Current AI programs and agents learn and acquire knowledge from huge amounts of data without comprehending the meaning implied. The underlying principles, methods and frameworks to make an agent self-aware are still lacking. However, as progress in brain-research and behavioral neuropsychology augments deep-learning and artificial neural network technology, we are likely to experience a new phase in AI’s history. Considering the exponential growth of AI-research over the last three decades, we might well witness the emergence of a new generation of powerful self-aware AI agents by the end of this decade.
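Asimov's three rules amount to a strict priority ordering that an agent could check before acting. The sketch below (hypothetical names; not an implementation from the cited paper) shows the ordering in code: harming a human vetoes everything, human orders outrank self-preservation, and self-preservation applies only when the first two rules are silent.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False       # would the action injure a person?
    ordered_by_human: bool = False  # was it commanded by a person?
    endangers_self: bool = False    # does it put the robot at risk?

def permitted(action: Action) -> bool:
    # Rule 1: a robot may not harm a human being. Absolute veto.
    if action.harms_human:
        return False
    # Rule 2: obey human orders (Rule 1 violations already rejected above).
    if action.ordered_by_human:
        return True
    # Rule 3: protect own existence, subordinate to Rules 1 and 2.
    return not action.endangers_self

# Orders outrank self-preservation, but Rule 1 dominates everything:
print(permitted(Action("fetch sample", ordered_by_human=True, endangers_self=True)))  # True
print(permitted(Action("push bystander", harms_human=True, ordered_by_human=True)))   # False
```

The first call returns True because a human order overrides the robot's self-preservation; the second returns False because no order can authorize harming a person.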


Real-time sensing of behavioral and physical body data is rapidly advancing. IoT-based edge-computing devices with low power consumption and high computational capacity, implemented with neuromorphic technology, provide local intelligence without cloud access. However, the fear that this data is being misused will only diminish if users have the assurance that the data remains under their control. Hence, new decentralized system architectures are required for rebuilding trust. As long as the global community of nations does not support severe punishment of the rising tide of cybercrime within the next year or two, unprecedented war-like scenarios might emerge.
