Truth and AI: Can Emotional Intelligence (EQ) Restore Trust?

Posted by Peter Rudin on 19. May 2023 in Essay



We intuitively know that suspicion destroys our willingness to believe that information or ideas are true. If one is suspicious of major institutions, researchers, doctors, teachers and even one’s neighbours, it becomes impossible to gather information that can serve as the basis of sound decision-making grounded in truth. Real-world experiences are necessary for the development of an individual’s intellect: in resolving the problems of life, he or she is exposed to a reality in which trust is a prerequisite. We know that digital talent as analysed and measured by the IQ (Intelligence Quotient) is scarce, and the corporate race for ‘the best brains’ is in full swing. However, to overcome the current AI-credibility problem raised by technologies like GPT-4, applying the EQ (Emotional Quotient) offers one way to tackle the problem of truth and restore trust in AI.

What is Truth?

According to the article ‘What Is Truth?’ (Philosophy Now, Issue 86, 2011), true beliefs portray the world as it is, while false beliefs portray it in a different form. Truth is both logical and empirical. When we put a ruler under water we get the wrong visual impression that the ruler is not straight. The ‘principle of non-contradiction’ states that contradictory propositions such as ‘the ruler is straight’ and ‘the ruler is not straight’ cannot both be true at the same time. In theory, observation should settle which statement is correct; in practice, things are not so simple. After all, if true beliefs describe the world, and the world changes, then truth must change too. Relativists point out that we construct our own truth, while sceptics take this to prove that there is no such thing as truth. Hence, we are guided by an empirical principle whereby nature is uniform and not unpredictable: solid objects are usually not deformed by immersion in water. Following this line of thought we can define truth as a thought-model that is independent of observation. Optical effects resulting from the refraction of light explain why the ruler appears bent but in reality is straight. By applying scientific thought, reaching a consensus on an objective description of our environment and reality seems possible. In philosophy, however, truth is represented by our mind, appreciating both the unity of reality and the diversity of experience.

From Truth to Emotional Trust in AI

We need a minimal level of trust to use any new technology. Some of this trust is based on rational thinking and some is grounded in emotions. But what happens when emotions overshadow the rational evaluation of a technology’s ability to solve problems, and why does this matter? Research shows that the more a technology is represented as a living organism, the more we like it and believe in its capabilities and moral values. In several studies, researchers found that people tend to trust anthropomorphic robots even when their poor performance is evident. Although these studies were performed in labs, where the actual implications of robotic performance are questionable, they raise the question of how the power of human emotions affects our basic trust in technology. For example, the more complex the outcomes of an algorithmic computation are, the more difficult it is to correctly evaluate its reliability. Hence, the role emotions play in the evaluation of technology becomes more significant, as the disassociation between a technology’s perceived capability and its actual reliability and performance can be highly problematic. As a result, there is a growing need to better understand how to balance the positive emotions evoked by technology’s achievements with a rational evaluation of its limitations in solving real-world problems.

Balancing Emotional and Artificial Intelligence

Emotional intelligence has always been important, but cultivating the competencies that underpin it will become even more significant as automation and AI replace many human tasks. As a result, the human workplace is increasingly defined by the skills required for solving problems. While AI-systems can perform complex tasks, there are certain areas where their capabilities will remain limited, at least for the next decade. For example, AI-systems do not have the ability to empathize with others or to understand the emotional nuances of a real-life situation. This is where emotional intelligence comes into play. In his bestselling book Emotional Intelligence, Daniel Goleman suggests that the EQ might be more important than the IQ because the IQ, as a standard measure of intelligence, is too narrow and does not encompass the full range of human intelligence. The psychologist Howard Gardner, Professor of Cognition and Education at Harvard University, has suggested that intelligence is not simply a single general ability; instead, there are multiple intelligences, and individuals may have strengths in several of these areas. At one point in time the IQ was viewed as the primary determinant of success: people with high IQs were assumed to be destined for a life of accomplishment and achievement, while researchers debated whether intelligence was the product of genes or of the environment. Today, however, it is widely accepted that the IQ is not the sole determinant of success. It is part of a complex array of factors, one of which is emotional intelligence.

The Relationship between Emotional Intelligence (EQ) and Cognitive Intelligence (IQ)

The researchers Peter Salovey and John Mayer define emotional intelligence as ‘the ability to monitor one’s own and other people’s emotions, to discriminate between different emotions and to use emotional information to guide thinking and behavior.’ Just as the IQ defines one’s ability to process information, the EQ defines one’s ability to process emotions and to make sound decisions. In their model, which serves as the foundation of their intelligence test, four factors define one’s EQ:

  • Perceiving emotions describes how well we pick up emotional cues in others.
  • Reasoning with emotions describes how we respond emotionally to things that garner our attention.
  • Understanding emotions describes how well we interpret perceived emotions.
  • Managing emotions describes how well we handle our own emotions, whether positive or negative.

Today, when much of our time is spent interacting with intelligent machines, we need to cultivate these emotional characteristics of intelligence. Following this line of thought, two uniquely human capabilities must be empowered to optimize man-machine interaction:

Curiosity is a fundamental human trait that drives innovation and progress. It is the curiosity of the human mind that has led to remarkable discoveries and breakthroughs throughout history. As we embrace AI-technology, it is vital to encourage and nurture curiosity in ourselves and others. This includes asking questions, challenging assumptions and seeking new ways of thinking and problem-solving. Curiosity allows us to explore the unknown, adapt to change and continuously learn and grow.

Creativity is the ability to think imaginatively and come up with new ideas, solutions and perspectives. While AI can analyse data and generate output based on patterns, it lacks the creativity that humans possess. Hence, individuals who combine ingenuity with creativity can make their jobs more interesting by automating the tasks that are boring and repetitive.

Merging the capabilities of intelligent machines with these unique human traits strengthens innovation and raises productivity, thereby achieving sustained competitiveness in a rapidly changing market.

Conclusion
In a remarkable move, Geoffrey Hinton, the neural-network pioneer considered the Godfather of AI, resigned from his consulting job at Google. As one reason for his decision, Hinton stated that the dangers of chatbots such as GPT-4 were ‘quite scary’ and warned that they could be exploited by ‘bad actors’. He is concerned that people will no longer be able to discern what is true as AI-generated photos, videos and texts flood the internet. Moreover, he is concerned that AI might upend the job market, and he warns about the ‘existential risk’ posed by the creation of true digital intelligence. Science fiction of the kind popularized by Nick Bostrom in his best-selling book ‘Superintelligence’ is enjoying a revival. Hence, introducing the EQ as a guard against the potential misuse of AI seems more urgent than ever. A dual approach that promotes both IQ and EQ as part of corporate development promises to free up the resources needed to apply AI successfully. AI has surpassed us, and will continue to surpass us, in many segments of knowledge-generation. But properly applied, it is a powerful tool that can lead us towards the goal of a human-centric life.
