The Need To Manage Future AI Agents In Decision Making

Posted by Peter Rudin on 18. October 2024 in Essay

The Rise of AI Agents (Credit: linkedin.com)

Introduction

Autonomous Artificial Intelligence (AI) agents are being rapidly deployed in corporate work environments to handle enormous amounts of data and to act as a filter that improves decision making. Automating tasks such as recommendations for customers, based on detailed analyses of their purchase histories, preferences and questions or concerns, is one class of application that can improve decision making. By recognizing significant market opportunities with the support of AI agents, creative marketing campaigns can be developed that reach potential customers based on their existing consumer profiles, increasing sales and improving profit margins.

Definition

According to Wikipedia, an AI agent is an intelligent system that can understand and respond to customer inquiries without human intervention. Depending on corporate requirements, different designs of AI agents are available. They rely on machine learning and natural language processing (NLP) to handle a wide range of tasks. Most importantly, AI agents can continuously improve their own performance. This is distinct from traditional AI, which requires human input for specific tasks. An AI agent perceives its environment, takes actions autonomously in order to achieve goals, and may improve its performance by learning or acquiring knowledge. An intelligent agent may be simple or complex: a thermostat or similar control system is an example of a simple AI agent, whereas a human individual, or any system or organisation that applies the design principles of an AI agent, is considered a complex application. Some AI agents have an ‘objective function’ that encapsulates all of the goals of the agent. Such an AI agent is designed to create and execute a plan that, upon completion, maximizes the expected value of the objective function defined by the user. Another AI agent might be built on reinforcement learning, with a ‘reward function’ that allows the programmer to shape the desired behaviour of the selected model. AI agents are typically applied in economics, while various versions of the AI-agent paradigm are also considered in cognitive science, ethics and philosophy, as well as in interdisciplinary socio-cognitive and computer social simulations.
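The contrast between a simple reflex agent (the thermostat) and an agent driven by an objective function can be sketched in a few lines of Python. This is a minimal illustration, not a real implementation: the class names, the setpoint of 20 degrees and the predicted temperatures are invented for the example.

```python
# A simple reflex agent: the thermostat reacts directly to a percept
# via a condition-action rule, with no planning or learning.
class ThermostatAgent:
    def __init__(self, target_temp=20.0):
        self.target_temp = target_temp  # illustrative setpoint

    def act(self, current_temp):
        if current_temp < self.target_temp:
            return "heat_on"
        return "heat_off"


# A goal-driven agent: it chooses the action that maximizes an
# objective function evaluated on the predicted resulting state.
class UtilityAgent:
    def __init__(self, objective, actions):
        self.objective = objective  # maps a predicted state to a score
        self.actions = actions      # maps action name -> predicted state

    def act(self):
        return max(self.actions, key=lambda a: self.objective(self.actions[a]))


thermostat = ThermostatAgent()
print(thermostat.act(18.0))  # -> heat_on

# Objective: keep the temperature close to 20 degrees.
agent = UtilityAgent(
    objective=lambda temp: -abs(temp - 20.0),
    actions={"heat_on": 22.0, "heat_off": 17.0},
)
print(agent.act())  # -> heat_on (predicted 22.0 scores better than 17.0)
```

The reflex agent needs no model of the world, while the utility-based agent needs a prediction of what each action leads to; that prediction is where learning typically enters.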

The Problem of Trust

Implementing AI agents demands the trust of human beings who have not yet worked with the technology. According to a survey by Salesforce, a company providing products and services for the application of AI agents, just 7% of desk workers today consider AI trustworthy enough for job-related tasks, while 77% of global workers are optimistic that they will eventually trust the results, stating that human control will be the key to building this trust. For autonomous AI agents to be applied, businesses and employees will need to overcome the trust gap and go through considerable training to effectively understand, manage and make the most of this important technology. As experienced with digital transformation, implementing AI agents is a journey that is different for each company, according to Mick Costigan, VP of Salesforce. Companies are at different starting points in terms of the AI infrastructure, tools and talent needed to implement AI agents. Companies at an earlier stage especially need to start their AI journey with training and building trust. Autonomous AI agents need a steady supply of trustworthy data to operate efficiently and, as a result, deliver accurate output. However, most organizations struggle to access all of the relevant data required, because it is often trapped in ‘silos’ that hinder digital transformation and the implementation of AI solutions.

With an AI agent in place, consider for example a customer who has a problem with a product: the providing company could start a conversation in which the agent looks at the customer’s purchase history and, drawing on the company’s own knowledge base, automatically suggests a few troubleshooting techniques. If that does not work, the agent could ask the customer to upload a picture of the error code shown on the screen, analyse the problem in detail and determine whether the product needs to be exchanged. Moreover, the agent could proactively suggest replacements or upsell the customer to another product.
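The escalating support flow described above can be sketched as a simple decision routine. This is a hypothetical illustration: the function name, the knowledge-base shape and the error code "E42" are all invented for the example.

```python
# Hypothetical sketch of the escalating support flow: troubleshoot
# from the knowledge base first, then ask for an error-code photo,
# then decide between replacement and human escalation.
def handle_support_case(purchase_history, knowledge_base, error_code=None):
    """Return the agent's next step for a customer product issue."""
    product = purchase_history[-1]  # most recently purchased product
    # Step 1: suggest troubleshooting tips from the company knowledge base.
    tips = knowledge_base.get(product, [])
    if tips and error_code is None:
        return {"action": "suggest_troubleshooting", "tips": tips}
    # Step 2: if troubleshooting failed, ask for the error code on screen.
    if error_code is None:
        return {"action": "request_error_code_photo"}
    # Step 3: analyse the error code and decide how to resolve the case.
    if error_code in {"E42"}:  # hypothetical fatal error codes
        return {"action": "offer_replacement"}
    return {"action": "escalate_to_human"}


history = ["router-x"]
kb = {"router-x": ["restart the device", "check the cable"]}
print(handle_support_case(history, kb)["action"])            # suggest_troubleshooting
print(handle_support_case(history, kb, "E42")["action"])     # offer_replacement
```

In a real deployment each step would call out to CRM and knowledge-base services; the point here is only the ordering of autonomous steps before a human takes over.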

Design Principles for AI Agents

When deploying AI agents, the following are some of the best practices to consider:

Define clear objectives: Start by defining what you want to achieve. Whether it is reducing response times, enhancing customer satisfaction or cutting operational costs, clear objectives will guide your implementation process and help you measure success.

Assess the quality of your data: AI agents rely on high-quality data to function effectively. Ensure that you have robust data collection and management systems in place, covering customer interaction data and other relevant information. Clean and structured data will enable your AI agent to provide accurate and relevant responses.

Plan for human oversight: While AI agents can handle many tasks autonomously, it is important to have a plan for human intervention when necessary. Ensure that there are clear guidelines for when and how humans should step in to assist.

Ensure data privacy and security: Implement robust data privacy and security measures to protect customer information accessed by your AI agents. This includes compliance with data protection regulations and regular security audits to safeguard sensitive data and maintain customer trust.
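The "plan for human oversight" principle often reduces to explicit routing rules: which requests the agent may answer alone, and which go to a person. The sketch below is one illustrative way to encode such guidelines; the intent names and the 0.8 confidence threshold are assumptions for the example, not recommended values.

```python
# Hedged sketch of a confidence-gated human handoff. Sensitive
# intents always escalate; low-confidence answers escalate too.
CONFIDENCE_THRESHOLD = 0.8  # illustrative value, tune per deployment
SENSITIVE_INTENTS = {"refund", "account_deletion", "legal_complaint"}


def route(intent, confidence):
    """Decide whether the AI agent answers or a human steps in."""
    if intent in SENSITIVE_INTENTS:
        return "human"  # clear guideline: always escalate these cases
    if confidence < CONFIDENCE_THRESHOLD:
        return "human"  # model is unsure, route to human review
    return "agent"


print(route("order_status", 0.95))  # -> agent
print(route("refund", 0.99))        # -> human
```

Making the escalation rules explicit and auditable, rather than buried in prompts, also simplifies the regular security and compliance reviews mentioned above.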

Benefits of Intelligent Agents

The adoption of AI agents for customer service, for example, offers numerous benefits, transforming the way businesses interact with their customers and manage their service operations. AI agents can improve customer satisfaction and personalize interactions, and because they learn over time, they are geared toward continuous improvement of the services offered. Unlike most human agents, AI agents are available around the clock, ensuring that customer inquiries are addressed promptly, regardless of time zones or business hours. This continuous availability helps businesses meet customer expectations and improves customer loyalty. AI agents can also easily scale to handle increased volumes of customer interactions, making them ideal for businesses looking to grow without compromising service quality: as case volume increases, they can be adjusted to handle the additional load while still providing consistent and reliable support. Moreover, AI agents generate valuable data on customer interactions, preferences and behaviours. Businesses can use this data to gain insights into customer needs and trends, enabling them to make informed decisions and improve their service and product offerings. In addition, AI agents provide consistent and accurate responses to customer inquiries, reducing the risk of errors and ensuring that customers receive reliable information. This consistency helps to build trust and confidence in the brand.

Risks of Intelligent Agents

AI agents amplify the potential for harm by individuals or groups with malicious intent. Such agents could automate complex tasks that typically require significant human expertise. For example, agents capable of conducting scientific research autonomously could potentially accelerate the creation of dangerous biological or chemical agents. Persuasive AI agents could run influence campaigns, manipulating public opinion or engaging in propaganda at an unprecedented scale. As AI agents become more capable, there is a risk of over-reliance, because humans might defer to these agents for critical decisions. This dependence could also be problematic if agents malfunction due to design flaws, adversarial attacks or other unforeseen issues. Such malfunctions might not be immediately noticeable, especially to users lacking the expertise to discern them. When multiple AI agents interact, they could give rise to complex dynamics and emergent risks not present when individual agents perform distinct tasks. Interactions between agents could lead to destabilising feedback loops or systemic vulnerabilities, especially if many agents share common components or are derived from the same foundational models. Understanding these risks requires a holistic view of the ecosystem of AI agents and their interdependencies. AI agents might also create or employ sub-agents to fulfil tasks more efficiently or effectively. While this could enhance the agents’ capabilities, it also introduces new points of failure and magnifies existing risks: each sub-agent could malfunction, be susceptible to attacks or act contrary to the user’s intentions, complicating the task of mitigating harm.

Conclusion

As AI agents are deployed, their potential to act with increased autonomy and over extended periods of time introduces a spectrum of risks distinct from those of conventional AI systems. These risks necessitate a nuanced approach to governance and oversight, emphasizing the need for mechanisms that enhance visibility into the operations and interactions of AI agents. Above all, maintaining human oversight and the ability to take corrective action is essential for realising the potential benefits of AI agents.

One Comment

  • Optimistic as ever…! Our experience with high-tech intelligence and AI: despite all the euphemistic propaganda such as “for the benefit of mankind”, “to improve diagnosis and treatment”, “to avoid human mistakes”, “better, cheaper, more efficient…” – well, our experience is: serious interference in democratic processes (Cambridge Analytica, elections in the UK and US, Brexit); disinformation that cannot immediately be recognized as such, leading to fascist election results…; frustrating “chats” with chatbots which never have the answer to the questions asked; increasing depression and insecurity in our youngsters, with decreasing attention spans due to the “social” media…
