Intelligent Robot (image credit: optimistdaily.com)
Artificial Intelligence (AI) and robotics have the potential to enhance human capabilities and to provide answers to many of the difficult problems we face today. On one side, AI-systems process massive volumes of data to identify patterns and generate predictions, leading to better decisions and higher productivity. Robots, on the other, carry out jobs that are mundane, hazardous or physically taxing. By combining AI and robotics, humans can construct machines that see, hear, feel, taste, touch and even think for themselves. This essay reviews three contrasting research efforts, all seeking to achieve this goal.
First Approach: Robots to Enhance Cognition of AI-Systems
According to research conducted at the University of Sheffield, connecting AI-systems to the real world through robots, and designing them using principles drawn from evolution, is the most likely route by which AI will gain human-like cognition. Tony Prescott and Dr. Stuart Wilson from the Department of Computer Science argue that AI-systems are unlikely to match brain performance, no matter how large their neural networks or their training datasets become. Although these models have similarities with the human brain, important differences prevent them from attaining biological, human-like intelligence. First, real brains are embodied: they sense and act in a real-world environment. Second, human brains are made up of multiple subsystems organised and configured in a structure called an ‘architecture’. The Sheffield study suggests that biological intelligence developed because of this specific architecture, using its connections to the real world to learn while brain function improved throughout evolution. This interaction between evolution and brain development is rarely factored into the design of AI-systems. ChatGPT and other transformer models have advanced AI significantly, but AI-systems based on this technology are unlikely to reach the point where they can fully think like a brain. According to the researchers, AI-systems are much more likely to develop human-like cognition if they are built with architectures that learn and improve in ways similar to the human brain. In their view, robots can provide AI-systems with sensors, such as cameras and microphones, and with actuators; based on this integration, the systems can sense the world around them and learn from the experience of interacting with it.
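The embodied loop the Sheffield researchers describe, sensing the world through robot hardware, acting, and updating an internal model from the result, can be caricatured in a few lines. Everything below (the agent class, the noisy sensor, the learning rule) is an illustrative assumption, not the researchers' actual architecture:

```python
import random

class EmbodiedAgent:
    """Toy sketch of embodied learning: cognition as a loop of
    sensing, acting and learning from the consequences.
    All names and the learning rule here are hypothetical."""

    def __init__(self):
        self.estimate = 0.0  # internal model of the environment

    def sense(self, world_value):
        # a robot sensor (camera, microphone) yields a noisy reading
        return world_value + random.uniform(-0.1, 0.1)

    def act_and_learn(self, world_value, lr=0.2):
        reading = self.sense(world_value)
        error = reading - self.estimate
        self.estimate += lr * error  # learn from the interaction
        return self.estimate

random.seed(0)
agent = EmbodiedAgent()
for _ in range(200):
    agent.act_and_learn(world_value=1.0)
# after repeated interaction, agent.estimate converges toward
# the true world value of 1.0 despite the sensor noise
```

The point of the sketch is simply that the model improves only through repeated interaction with the environment, which is the capability a disembodied AI-system lacks.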
Second Approach: Biological Neurons to Integrate AI and Robotics
In an interview conducted by VentureBeat on August 2, 2023, Daniela Rus, director of MIT CSAIL, stated that robots cannot really run large language models because their computation power and storage capacity are not sufficient for the task. Rus and her collaborators intend to create neural networks that are both accurate and compute-efficient, so that they can run on a robot’s onboard computer without a connection to the cloud. They took inspiration from research on the biological neurons of small organisms such as the worm C. elegans, which performs complicated tasks with no more than 302 neurons. The resulting models, described as liquid neural networks (LNNs), are most striking for their compactness. A classic deep neural network requires around 100,000 artificial neurons and half a million parameters to perform a task like keeping a self-driving car in its lane. In contrast, Rus and her colleagues were able to train an LNN to accomplish the same task with just 19 neurons. This significant reduction in size has several important consequences. First, it enables the LNN to run on the small computers used in robots equipped with tiny edge-based sensors. Second, with fewer neurons the network becomes much more interpretable. Interpretability is a significant challenge for AI: with traditional deep learning, it can be difficult to understand how a model arrived at a particular decision. “When we only have 19 biological neurons, we can extract a decision tree that corresponds to the firing patterns and the decision-making flow of an AI-system with 100,000 artificial neurons or more”, Rus said. Another challenge that LNNs address involves causality. Traditional deep learning systems often struggle to understand causal relationships, with the result that they learn spurious patterns unrelated to the problem they are trying to solve.
LNNs, on the other hand, appear to have a better grasp of causal relationships, allowing them to better generalize in unknown situations. So far the MIT CSAIL researchers have tested LNNs in single-robot settings with very promising results. In the future, they plan to extend their tests to multi-robot systems to further explore the capabilities as well as the limitations of LNNs.
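The building block behind LNNs is a neuron whose effective time constant changes with its input, hence "liquid". A minimal sketch of a single such liquid time-constant (LTC) neuron, integrated with Euler steps; the constants and the simplified update rule are illustrative assumptions, not the exact MIT CSAIL formulation:

```python
import math

def ltc_step(x, inp, dt=0.05, tau=1.0, w=1.0, b=0.0, A=1.0):
    """One Euler step of a simplified liquid time-constant neuron:
        dx/dt = -(1/tau + f(inp)) * x + f(inp) * A
    where f is a sigmoid gate driven by the input, so the decay
    rate (the 'time constant') depends on the input itself."""
    f = 1.0 / (1.0 + math.exp(-(w * inp + b)))  # input-dependent gate
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt

# the neuron's state settles to an input-dependent equilibrium
x = 0.0
for _ in range(100):
    x = ltc_step(x, inp=2.0)
```

Because each unit is a small differential equation rather than a static weighted sum, a handful of them can express dynamics that would otherwise need many conventional neurons, which is the compactness described above.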
Third Approach: The Sanctuary Project
In the 1950s, researchers including AI pioneer Herbert A. Simon were convinced that Artificial General Intelligence (AGI) would be realised within a few decades. Since then, AGI has proven to be a milestone that may be impossible to achieve. Still, others insist that AGI is close at hand. One such individual is Geordie Rose, who co-founded the quantum computing start-up D-Wave Systems in 1999. Rose’s newest venture, Sanctuary, raised USD 58.5 million in venture capital in 2022 to build the world’s first general-purpose robot with human-like intelligence. Rose claims that a combination of breakthrough technologies in machine learning, theoretical physics and quantum computing will help achieve Sanctuary’s ambitious vision: to build robots that mimic the human brain. Before Sanctuary, Rose and Suzanne Gildert launched ‘Kindred’, a secretive AI-company based in Vancouver, Canada. With funds from Google, Bloomberg and others, Rose and Gildert embarked on a mission to build robots that learn by observing tasks that humans perform. This technique, called ‘imitation learning’, has been widely applied in robotics for several years. Researchers at Google, for example, have ‘taught’ robots how to walk by observing and mimicking a dog’s movements. But while imitation learning has significant potential, it is not without shortcomings: systems trained this way do not generalize well to scenarios that were not included in the training data. According to Matthew Guzdial, a University of Alberta assistant professor focusing on machine learning, Sanctuary’s claims are over-hyped and add nothing new to the many robotics start-ups presently entering the market.
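The generalization weakness of imitation learning is easy to demonstrate with a toy behavioural-cloning sketch. The "expert", the demonstrations and the nearest-neighbour policy below are all hypothetical, chosen only to illustrate the idea, and have nothing to do with Kindred's or Google's actual systems:

```python
def expert(state):
    # hypothetical expert policy: move right of the goal at 0, else left
    return 1 if state > 0 else -1

# record (state, action) demonstrations of the expert acting
# over states between -1.0 and 1.0
demos = [(s / 10.0, expert(s / 10.0)) for s in range(-10, 11) if s != 0]

def cloned_policy(state):
    # imitation by 1-nearest-neighbour lookup over the demonstrations
    nearest = min(demos, key=lambda d: abs(d[0] - state))
    return nearest[1]

# inside the demonstrated range the clone matches the expert;
# far outside it (say state = 50.0) it still answers, but only by
# blindly copying the closest demonstration
```

Real systems use neural networks rather than a lookup, but the failure mode is the same: behaviour in states unlike the demonstrations is an extrapolation with no guarantee of being sensible.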
When contacted for comment, Rodney Brooks, a well-known professor of robotics at MIT’s Computer Science and Artificial Intelligence Lab, said he could not find anything on Sanctuary’s website coherent enough to evaluate the project; Sanctuary declined to provide details about its technology beyond what has already been made public.
Assessing Today’s AI-Supported Robot Applications
AI provides the intelligence and decision-making abilities robots need to perceive, understand and interact with their environment. It enables robots to process and analyse sensory data using computer-vision methods and sensors developed for edge-based devices. These technologies enable robots to perceive their environment, recognize objects and navigate through complex surroundings. AI-algorithms – particularly those applied in machine learning – enable robots to learn from data, improve their performance over time and make predictions or decisions based on patterns and experience. However, some fundamental limitations remain:
Lack of human-like general intelligence: AI-systems excel at specific tasks but lack the broad-based intelligence and adaptability of human beings. They struggle with tasks that require common-sense reasoning, creativity and contextual understanding.
Data dependency: AI-systems heavily rely on data for training and decision-making. They require vast amounts of high-quality data. Their performance is limited when faced with insufficient or biased data. Moreover, the use of synthetic data can produce wrong results.
Limited social interaction and empathy: Although AI can mimic human-like interactions, it lacks genuine emotional understanding and empathy.
Legal and regulatory complexities: The rapid development of AI and robotics poses challenges with respect to legal frameworks, liability and accountability. Regulators must keep pace with ongoing technological advancements to ensure responsible and ethical behaviour.
Which of the research efforts reviewed here will advance AI-robotics the furthest is hard to predict. We can expect, however, that its capabilities will grow at an exponential rate. Applications ranging from destructive military killer-robots to health-focused robots that perform surgery will provide new insights into the future of automation. Meanwhile, regulators and government authorities are challenged to provide responsible and ethical guidelines to prevent misuse.