Deepfake (image credit: eu.usatoday.com)
AI systems can mimic human intelligence with impressive results, such as detecting objects, navigating environments, playing chess or generating text. Unlike fictitious characters, these AI clones are based on real people, mimicking their visual likeness, conversational mannerisms or behavioural patterns. But mimicking and cloning human behaviour has its limitations: without complementing actions with thought, AI systems can become brittle and make unpredictable mistakes when faced with novel situations.

Many deep learning systems are trained on data generated by humans. Training data can consist of the list of moves in a chess game or the sequence of actions in a strategy game. Given a large enough dataset, an AI agent can build a model of human behaviour. But while the model learns to mimic human behaviour, it does not learn the reasoning behind those actions. To tackle this problem, scientists at the University of British Columbia have developed an AI model dubbed ‘Thought Cloning’.
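The imitation described above can be sketched in a deliberately naive form: a policy that simply memorises the action humans chose most often in each recorded state. This is a toy illustration, not the method from the study; the game states and actions below are hypothetical placeholders.

```python
from collections import Counter, defaultdict

# Toy "behavioural cloning": imitate recorded human (state, action)
# pairs. No reasoning is captured -- only the actions themselves.
# States and actions are invented for illustration.
demonstrations = [
    ("enemy_near", "retreat"),
    ("enemy_near", "retreat"),
    ("enemy_near", "attack"),
    ("resource_seen", "gather"),
    ("resource_seen", "gather"),
]

def fit_policy(demos):
    """Map each state to the action humans chose most often."""
    by_state = defaultdict(Counter)
    for state, action in demos:
        by_state[state][action] += 1
    return {s: c.most_common(1)[0][0] for s, c in by_state.items()}

policy = fit_policy(demonstrations)
print(policy["enemy_near"])       # the most-imitated action
print(policy.get("novel_state"))  # unseen situation: the clone has no answer
```

Note how the cloned policy fails silently on a state it never saw, which is exactly the brittleness in novel situations described above.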
What is Behavioural Cloning?
Behavioural cloning uses deep-learning algorithms to manipulate existing audio, photo and video records. Dubbed ‘deepfakes’, this technology is also used to generate hyper-realistic videos and photos, making it difficult to distinguish what is real from what is fake. One application of behavioural cloning is the design and production of avatars as ‘human-like’ experts. They can be contacted, both visually and conversationally, anytime from anywhere in the world to discuss topics such as medical advice or educational support. Research shows that users place a high level of trust in avatars managed by reputable companies. The avatar’s behaviour and speech are assembled in real time in response to the questions asked. Using ChatGPT, for example, one can no longer tell whether the expert communicating via the internet is ‘real’ or a virtual copy.

With GPT-4, user interaction with the system through prompting rapidly increases the volume of personal data generated. As a result, the quality of behavioural cloning will improve, with both positive and negative application potential. Because anyone can access tools for behavioural cloning, ‘bad actors’ may use them maliciously to create revenge porn or manipulative videos of public officials making statements they never actually made. This not only invades the privacy of the individual but also raises ethical issues as well as concerns about identity theft and copyright. So far, government regulators have found it difficult to prevent the misuse of behavioural cloning, while Big-Tech companies, despite their efforts to remove malicious content, only partially succeed in correcting the problem.
What is Thought Cloning?
In a recently published research paper, scientists at the University of British Columbia set out to show the benefits of developing AI systems that think like humans. They propose a technique called ‘Thought Cloning’, which trains AI on thoughts and actions at the same time. The hypothesis is that if you train a model on actions together with their corresponding thoughts, the model will also learn to generate and communicate the reasoning behind its actions. The novel idea is not just to clone human actions but also the thoughts humans have as they solve a specific problem. Thought cloning agents are trained on datasets of humans narrating their thoughts as they act, gathered for example from YouTube videos.

Consider the prospect of a thought cloning agent that has learned both to think and to act like humans in a variety of settings. Such agents could become skilled at planning, reasoning and explaining their thinking to us, because they have the ability to observe and communicate the workings of their own minds. Of course, there are also risks, similar to those of language models trained for behavioural cloning: biased or socially unacceptable human thought in the training data might distort thought cloning and deliver incorrect results.
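To contrast with plain behavioural cloning, the core idea can be shown with an equally minimal sketch: each demonstration pairs a state with both a narrated thought and an action, so the cloned policy can output the two together. This is a toy lookup table for illustration only; the published system trains neural networks, and the states, thoughts and actions below are invented.

```python
from collections import Counter, defaultdict

# Toy "thought cloning": demonstrations carry the narrated thought
# behind each action, so the cloned policy returns both.
# States, thoughts and actions are hypothetical placeholders.
demonstrations = [
    ("enemy_near", "low health, better to retreat", "retreat"),
    ("enemy_near", "low health, better to retreat", "retreat"),
    ("resource_seen", "stock up before the next fight", "gather"),
]

def fit_thought_policy(demos):
    """Map each state to its most frequent (thought, action) pair."""
    by_state = defaultdict(Counter)
    for state, thought, action in demos:
        by_state[state][(thought, action)] += 1
    return {s: c.most_common(1)[0][0] for s, c in by_state.items()}

policy = fit_thought_policy(demonstrations)
thought, action = policy["enemy_near"]
print(thought)  # the agent can state the reasoning behind its choice
print(action)
```

Unlike the behaviour-only clone, this agent surfaces its reasoning in natural language alongside each action, which is the interpretability property the researchers exploit.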
Research Results with Thought Cloning
By imitating human thinking, a thought cloning agent can become more capable, interpretable and safe. The results of the study show that thought cloning significantly outperforms behavioural cloning. It also converges faster, needing fewer training examples, and it outperforms behavioural cloning on tasks that differ substantially from the training examples. Thought cloning also enabled the researchers to better understand the agent’s behaviour, because at each step the model produced its planning and reasoning in natural language. This interpretability feature let the researchers investigate some of the model’s early errors during training and quickly adjust the training regime to steer it in the right direction.

Thought cloning is an interesting and promising new direction of AI research because it extends other efforts to create embodied and multi-modal deep learning models, such as Google’s PaLM-E and DeepMind’s Gato. One reason human intelligence is so much more robust than current AI models is our ability to ingest and process different modalities of information at the same time.

However, thought cloning is not without challenges. The real world is messier, more unpredictable and far more complex. Probably the biggest challenge is creating the training data: people do not necessarily narrate their actions while solving problems, because existing knowledge obviates the need to spell out intentions explicitly. Human thought and behaviour are full of implicit reasons that cannot always be explained in plain text. It remains to be seen how thought cloning would perform in a commercial environment, assuming humans were willing to share their thoughts beyond the current practice of statistical profiling.
But as the paper’s authors state, it opens new avenues for scientific investigation in Artificial General Intelligence (AGI), AI safety and behavioural interpretability.
The Problems with Thought Cloning
How would you feel if a company developed a thought clone that collects your thoughts in order to predict and manipulate your decisions and choices in real time? Already, our willingness to provide personal data in return for free, convenient search and social media services influences our decision-making with increasing sophistication. Retired Harvard professor Shoshana Zuboff called this exchange of free services for individual and behavioural information ‘surveillance capitalism’. Going a step further, it seems unlikely that people would knowingly agree to provide data for thought cloning.

A thought clone can endanger a person’s privacy, but it might also be detrimental to their interests and ability to choose. Collating data to create a thought clone would allow companies to predict a consumer’s choices with high accuracy, far beyond today’s ability to classify consumers by their behavioural profiles. Moreover, a thought clone’s database would be continually updated in real time, tracking both the person’s changing views and behaviour and the types of interaction that successfully influence that behaviour.

To analyse the impact of thought clones, a discussion of their legal and ethical implications must address the principles underlying privacy and rights to personal data. Doing so, we enter the domain of human rights as adopted by the United Nations and most of its members. How to transfer these rights to the digital realm is one of the most widely discussed AI issues, both politically and socially.
The AI model developed by researchers at the University of British Columbia provides strong arguments for the application of thought cloning. The quality and cost of these models compare favourably with the effort of applying conventional behavioural cloning to specific problems. Yet it is questionable whether users would be willing to watch and narrate videos in order to support the monetization of their thoughts. Sometime in the near future, however, automatically capturing users’ thoughts with non-intrusive brain sensors while they watch videos might indeed open a new approach to AI computing. From an ethical point of view, given its potential for misuse, such a technology should never be permitted to enter the market without strict government controls. Similar action was taken in 1989: as DNA technology, with its power of replication and hereditary control of cellular activities, was ready to be marketed (‘designer babies’), medical authorities immediately stepped in to cap its potential misuse.