Trust and Emotions
Introduction
Artificially intelligent agents are autonomous software tools that perform tasks, make decisions, and interact with their environment intelligently and rationally. Trust, however, is a prerequisite for any successful human-AI collaboration. Above all, AI agents depend on machine learning models, which can never be 100% accurate. In the Agent-To-Agent (A2A) economy, AI software agents transact, negotiate, and collaborate directly with one another, taking over tasks humans once performed. Yet humans, and their emotional states, will not disappear from the work environment; their roles will shift.
The Impact of Trust
We need a minimal level of trust to adopt any new technology. Some of this trust is based on rational thinking and some is grounded in emotions. But what happens when emotions overshadow the rational evaluation of a technology’s ability to solve problems, and why does this matter? Research shows that the more a technology is represented as a living organism, the more we like it and believe in its capabilities and moral values. In several studies, researchers found that people tend to trust anthropomorphic robots even when their poor performance is evident. Although these studies were performed in labs, where the real-world implications of robot performance are limited, the results raise the question of how strongly human emotions shape our basic trust in technology. The more complex the outcomes of an algorithmic computation, the more difficult it is to correctly evaluate its reliability. Emotions therefore play a growing role in how we judge technology, and the resulting dissociation between a technology’s perceived capability and its actual reliability and performance can be highly problematic. As a result, there is a growing need to understand how to balance the positive emotions evoked by technology’s achievements with a rational evaluation of its limitations for solving real-world problems.
Balancing Emotional and Artificial Intelligence
Emotional intelligence has always been important, but cultivating the competencies that underpin it will become even more significant as automation and AI take over many human tasks. As a result, the human contribution to the workplace is increasingly defined by problem-solving skills. While AI systems can perform complex tasks, there are areas where their capabilities will remain limited, at least for the next decade. For example, AI systems cannot empathize with others or understand the emotional nuances of a real-life situation. This is where emotional intelligence comes into play. In his bestselling book Emotional Intelligence, Daniel Goleman suggests that the emotional quotient (EQ) may be more important than the intelligence quotient (IQ), because this standard measure of intelligence is too narrow and does not encompass the full range of human intelligence. The psychologist Howard Gardner, Professor of Cognition and Education at Harvard University, has argued that intelligence is not a single general ability; instead, there are multiple intelligences, and individuals may have strengths in several of these areas. At one point in time, IQ was viewed as the primary determinant of success: people with high IQs were assumed to be destined for a life of accomplishment and achievement, while researchers debated whether intelligence was the product of genes or the environment. Today, however, it is widely accepted that a high IQ is not the sole determinant of success. It is part of a complex array of factors, one of which is emotional intelligence.
The Relationship between EQ and IQ
The researchers Peter Salovey and John Mayer define emotional intelligence as ‘the ability to monitor one’s own and other people’s emotions, to discriminate between different emotions and to use emotional information to guide thinking and behaviour.’ Just as IQ gauges the ability to process information, EQ gauges the ability to process emotions and to make sound decisions. In the model underpinning their intelligence test, four factors define one’s EQ:
- Perceiving emotions describes how well we pick up emotional cues in others.
- Reasoning with emotions describes how we respond emotionally to things that garner our attention.
- Understanding emotions describes how well we interpret perceived emotions.
- Managing emotions describes the ability to handle one’s own emotions, positive or negative.
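The four factors above can be pictured as a simple data structure. The sketch below is purely illustrative: the class name, the 0-to-1 scoring scale, and the naive averaging are assumptions for the example, not part of Salovey and Mayer's actual test.

```python
from dataclasses import dataclass

@dataclass
class EQProfile:
    """Illustrative model of the four Salovey-Mayer EQ factors, each scored 0.0-1.0."""
    perceiving: float     # picking up emotional cues in others
    reasoning: float      # responding emotionally to what captures our attention
    understanding: float  # interpreting the emotions we perceive
    managing: float       # handling one's own emotions, positive or negative

    def overall(self) -> float:
        # Naive summary score: the plain mean of the four factors.
        return (self.perceiving + self.reasoning
                + self.understanding + self.managing) / 4

profile = EQProfile(perceiving=0.8, reasoning=0.6,
                    understanding=0.7, managing=0.5)
print(round(profile.overall(), 2))  # → 0.65
```

A real instrument would weight and measure these branches far more carefully; the point here is only that the model separates perceiving, reasoning, understanding, and managing into distinct, assessable components.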
Today, when much of our time is spent interacting with intelligent machines, we need to cultivate these emotional dimensions of intelligence. Following this line of thought, two uniquely human capabilities must be strengthened to optimize human-machine interaction:
Curiosity is a fundamental human trait that drives innovation and progress. It is the curiosity of the human mind that has led to remarkable discoveries and breakthroughs throughout history. As we embrace AI technology, it is vital to encourage and nurture curiosity in ourselves and others. This includes asking questions, challenging assumptions and seeking new ways of thinking and problem-solving. Curiosity allows us to explore the unknown, adapt to change and continuously learn and grow.
Creativity is the ability to think imaginatively and come up with new ideas, solutions and perspectives. While AI can analyse data and generate output based on patterns, it lacks the creativity that humans possess. Individuals who combine ingenuity with creativity can therefore make their jobs more interesting by automating the tasks that are repetitive and dull.
By merging the capabilities of intelligent machines with these unique human traits, organizations can strengthen innovation and raise productivity, achieving sustained competitiveness in a rapidly changing market.
The GPT-5 Announcement
On August 7, 2025, OpenAI released its long-awaited GPT-5 product, positioning it as the company’s ‘smartest, fastest and most useful product yet.’ The launch, however, was met with a divided response. While the company touted GPT-5 as a significant leap in intelligence, the tech community’s reaction ranged from praise for its new capabilities to criticism over a rocky rollout and an indication that progress, while tangible, may no longer be exponential. The release has ignited a debate as to whether the AI industry is entering a new, more mature phase in which incremental product improvements replace groundbreaking leaps in capability. While some may be disappointed and see GPT-5 as a sign of the field plateauing, in reality it might mark the start of the era of building AI applications at scale. The GPT-5 launch was not without its flaws. The live-streamed presentation contained mislabelled graphs and technical bugs. Power users and data scientists quickly found that competing models, such as Anthropic’s Claude Opus 4.1, sometimes performed better on real-world coding tasks. This combination of launch hiccups and performance inconsistencies led to a lukewarm reception within the AI community. While some early testers praised the model’s intelligence and steerability, the prevailing sentiment was one of disappointment, especially given the years of hype that preceded the announcement. AI researcher Yannic Kilcher described the current moment as the ‘Samsung Galaxy era of LLMs’, where each new model offers incremental improvements – a slightly better camera or faster processor – rather than groundbreaking new features. In his view, the industry’s focus has shifted from the pure pursuit of artificial general intelligence (AGI) to productization, with companies gearing models toward specific, money-making use cases like coding. The alternative perspective is that progress has not stalled but has merely changed its form.
GPT-5’s Lack of Emotional Intelligence
For many users, ChatGPT is not just a productivity tool; it is also a way to process the emotional side of many life-changing decisions. You cannot just give someone a perfect medical treatment plan and then ignore the part of the body that actually hurts. And yet the entire launch of GPT-5 treated emotional use cases as if they did not exist, or as if they were too embarrassing to acknowledge. Maybe it is because most of the people building GPT-5 are young engineers who are totally brain-focused nerds. Maybe it is because emotion does not fit neatly into a benchmark score. But if OpenAI ignores the fact that millions of users also turn to AI tools for emotional connection, regulation and support, it is ignoring one of the most powerful, human uses of AI entirely. GPT-5, like its predecessor, is useful for solving many repetitive tasks. Only time will tell whether, absent this emotional connectivity, for example in decision-making, the company can sustain its staggering stock-market valuation.