The AI-Agent
Researchers have built programs for generating content since the early days of AI. The earliest approaches, known as expert systems, encoded hand-written rules for producing responses to a given problem. The Eliza chatbot, created by Joseph Weizenbaum in the 1960s, was one of the earliest examples of generative AI. However, these early implementations faltered because of their limited vocabulary and lack of context. In contrast, Artificial Neural Networks (ANNs), the foundation of today’s AI and machine learning applications, took a different approach. Designed to mimic how the human brain works, ANNs ‘learn’ the rules by finding patterns in existing data sets. The first ANNs, developed in the 1950s and 1960s, were limited by a lack of computational power and small data sets. With the availability of big data and improvements in computer hardware over the past decade, generative AI and the modelling of agents have begun to disrupt the entire AI industry.
Generative AI: A New Milestone in Computing
In 2017 Google described a new type of neural network architecture that brought significant improvements in efficiency and accuracy to tasks like natural language processing. This breakthrough approach, called the transformer, is based on the concept of attention: a mathematical description of how words relate to, complement and modify each other. The researchers presented the architecture in their seminal paper, “Attention Is All You Need,” showing that a transformer-based neural network could translate between English and French more accurately, in only a quarter of the training time that other neural nets required. The technique was also able to discover hidden relationships buried in the data that humans might have been unaware of because they were too complicated to formulate. Since its introduction by Google, the transformer architecture, with its ability to deal with big data, has evolved rapidly, especially in the design of Large Language Models (LLMs) such as GPT-3 and ChatGPT.

However, the rise of generative AI is also fuelling concerns. Despite their promise, these tools have problems with respect to accuracy, trustworthiness, bias, hallucination and plagiarism, as well as ethical issues that will take a long time to sort out. What is new is that the latest generation of generative AI seems more coherent on the surface. But this combination of human-like language and coherence is not synonymous with human intelligence, and there is a lively debate among AI researchers about whether generative AI models can be trained to have reasoning ability. Moreover, the convincing realism of generative AI content introduces a new set of risks: it makes AI-generated content harder to detect, and it makes it more difficult to notice when the results are wrong. This can be a serious problem when one relies on generative AI to write code or provide medical advice.
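The attention mechanism at the heart of the transformer can be sketched in a few lines. The following toy example (plain NumPy, not Google’s implementation; the function name and the random embeddings are illustrative assumptions) computes scaled dot-product attention over a tiny sequence:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal scaled dot-product attention: each output row is a
    weighted mix of the value vectors V, with weights derived from
    how strongly each query vector matches each key vector."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise query-key affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V, weights

# Toy example: 3 "words", each embedded in 4 dimensions.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(x, x, x)       # self-attention
print(w)                                             # each row sums to 1
```

The weight matrix `w` is what the paper means by words “attending” to each other: a high entry at row i, column j means word i draws heavily on word j when building its new representation.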
Many results of generative AI are not transparent, so it is hard to determine whether, for example, they infringe on copyrights or whether there is a problem with the original sources from which they draw. If one does not know how the AI arrived at a conclusion, one cannot reason about why it might be wrong.
The Concept of AI-Agents
Concepts of intelligent agents have become popular not only because of their potential to act autonomously but also because they can be programmed to sense their environment and to make judgements based on past experience. Generative AI-agents can interact with each other to create content in various forms such as text and audio. What makes the AI-agent attractive is its ability to observe and mimic human behaviour and to learn from real-world environments. Generative AI-agents are designed not only to accept commands and respond accordingly but also to provide a robust mechanism for making sound decisions at the right time. They generate knowledge based on a framework consisting of the following interconnected components:
- Perception: This refers to how the AI-agent takes in data from its surroundings. Perception determines which observations enter memory and how they are prioritised, making it the first crucial stage of interaction with other agents.
- Memory Storage: Defines the database where the agent stores and accesses all its data. What makes the AI-agent smart is how it prioritises the content to be stored; for instance, recent memories are more relevant and carry more weight in the decision-making process.
- Memory Retrieval: Once the data is stored, the agent can retrieve the memories required for further action. The retrieval criteria may include how recent and relevant the data is and how important it is for solving a specific task.
- Reflection: The retrieved memories are analysed against the goals and objectives the AI-agent has been given through human interaction. The conclusions generated periodically in this way are fed back into the existing memory, raising the agent’s intellectual capacity. This ability to reflect is crucial for the acceptance of AI-agents, as it makes them a new tool for planning.
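The four components above can be sketched as a simple loop. The class below is a hypothetical illustration, not a standard API: the names (`MemoryRecord`, `GenerativeAgent`) and the recency-plus-importance scoring are assumptions, and real systems would add an embedding model for relevance scoring.

```python
import time
from dataclasses import dataclass, field

@dataclass
class MemoryRecord:
    text: str
    importance: float                      # 0..1, assigned at perception time
    timestamp: float = field(default_factory=time.time)

class GenerativeAgent:
    """Toy sketch of the perceive / store / retrieve / reflect cycle."""

    def __init__(self, goal: str):
        self.goal = goal
        self.memory: list[MemoryRecord] = []

    def perceive(self, observation: str, importance: float) -> None:
        # Perception + memory storage: keep the observation with a
        # priority weight; important events carry more weight later.
        self.memory.append(MemoryRecord(observation, importance))

    def retrieve(self, k: int = 3) -> list[MemoryRecord]:
        # Memory retrieval: score each record by recency and importance
        # (relevance scoring would need an embedding model; omitted here).
        now = time.time()
        def score(m: MemoryRecord) -> float:
            recency = 1.0 / (1.0 + (now - m.timestamp))
            return recency + m.importance
        return sorted(self.memory, key=score, reverse=True)[:k]

    def reflect(self) -> str:
        # Reflection: condense the top memories into a conclusion and
        # store it back, so later retrieval can build on it.
        top = self.retrieve()
        conclusion = f"Toward '{self.goal}', noted: " + "; ".join(m.text for m in top)
        self.perceive(conclusion, importance=0.9)
        return conclusion

agent = GenerativeAgent(goal="plan a party")
agent.perceive("met Klaus at the cafe", importance=0.4)
agent.perceive("the hall is free on Friday", importance=0.8)
print(agent.reflect())
```

Note how reflection writes its own output back into memory: this feedback is what lets the agent’s stored knowledge grow beyond its raw observations.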
Generative AI-agents are software systems, run on high-performance hardware, that simulate human behaviour. They are useful wherever people require human-like assistance in solving problems, and they are accordingly applied in many industry sectors, ranging from healthcare and agriculture to finance and education.
Simulating Societal Behaviour with Multiple AI-Agents
Taking the concept of AI-agents a step further, Stanford University researchers, in a recent study entitled “Generative Agents: Interactive Simulacra of Human Behavior,” explored the potential of generative models in creating an AI-agent architecture that remembers its interactions, reflects on the information it receives, and plans long- and short-term goals based on a steadily expanding memory, supported by high-performance data centres. These AI-agents can simulate human behaviour in daily life, from mundane tasks to complex decision-making. Moreover, when assembled into a network, they can emulate the more intricate social behaviours that emerge from the interactions of a large population. This opens many possibilities, particularly in simulating population dynamics, offering valuable insights into societal behaviours and interactions. Human users can also interact with the agents by speaking to them through a narrator’s voice, altering the state of the environment, or directly controlling an agent. The interactive design is meant to create a dynamic environment with many possibilities.

To run a simulation of the agent network, each agent starts with some basic knowledge, daily routines, and goals to accomplish. Through their interactions, agents may pass information to each other, and as new information diffuses across the population, the community’s behaviour changes. Agents react by adjusting their plans and goals as they become aware of the behaviour of other agents. The researchers’ experiments show that the generative agents learn to coordinate among themselves without being explicitly instructed to do so. For example, one agent started out with the goal of holding a virtual Valentine’s Day party. This information eventually reached other agents, and several ended up attending the party.
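The diffusion effect reported in the Stanford study, where one agent’s plan spreads through the population via conversation, can be illustrated with a far simpler toy model. The sketch below is not the paper’s architecture; it only shows how information spreads over random pairwise “chats” among agents, with the function name and parameters chosen for illustration.

```python
import random

def simulate_diffusion(n_agents: int = 20, n_rounds: int = 60, seed: int = 1) -> list[int]:
    """Toy model: agent 0 knows about an event (the party host); each
    round, a random pair of agents 'converses', and if either partner
    knows the news, both do afterwards. Returns the number of informed
    agents after each round."""
    rng = random.Random(seed)
    knows = [False] * n_agents
    knows[0] = True                      # the party host
    history = []
    for _ in range(n_rounds):
        a, b = rng.sample(range(n_agents), 2)
        if knows[a] or knows[b]:         # one partner shares the news
            knows[a] = knows[b] = True
        history.append(sum(knows))
    return history

print(simulate_diffusion())              # informed count grows round by round
```

Even this crude model reproduces the qualitative pattern the researchers observed: knowledge held by a single agent gradually becomes community knowledge with no central coordination, purely through local interactions.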
Another application is prototyping the dynamics of mass-user products such as social networks. By creating a diverse population of agents and observing their interactions in the context of a new product, researchers can study emerging behaviours, both positive and negative. The agents can also be used to experiment with counterfactuals, simulating how different policies and changes in behaviour alter outcomes.
The components of the proposed agent architecture, built on observation, reflection and planning, are key to the success and credibility of a new generation of AI-agents. However, the potential of these generative agents is not without risks. They could be used to create bots that convincingly imitate real humans, amplifying malicious activities such as spreading misinformation. The societal impact of networked agents could thus cause significant harm by negatively influencing humans’ free will on socio-economic issues. We might even see a revival of agents as portrayed in science-fiction scenarios, for example in the 1999 movie The Matrix. To counteract such negative impacts, the researchers propose maintaining audit logs of the agents’ behaviours, providing the transparency and accountability needed to sustain the trust required for user acceptance of AI-agents. Above all looms the question of how humans can maintain control over these agents.