With Curiosity Towards a New AI: The Issue of Learning

Posted by Peter Rudin on 26 August 2022 in Essay

Curiosity for Learning   Credit: gedankenwelt.de

What is Curiosity?

According to Wikipedia, curiosity is a major factor in human development. It fosters the process of learning and the desire to acquire knowledge and skills. Curiosity is common to human beings at all ages, from infancy through adulthood. The motivation that drives curiosity is related to a passion for knowledge, information and understanding. Curiosity strongly relates to learning, with insights provided by ongoing research in Psychology, Neuroscience and Artificial Intelligence (AI).

Curiosity Theories

Research on curiosity has intensified over the last couple of decades. The following lists some of the more popular theories explaining the nature of curiosity:

Curiosity-drive theory

Curiosity-drive theory relates curiosity to the unpleasant experience of uncertainty; the elimination of these feelings is rewarding. On this view, curiosity develops strictly out of the desire to make sense of unfamiliar aspects of one’s environment through interaction.

Motivation and reward

The concepts of motivation and reward strongly relate to the notion of curiosity. Reward is defined as the positive reinforcement of an action: it encourages a particular behavior through the emotional sensations of relief, pleasure and satisfaction that correlate with happiness.

Memory and learning

As curiosity drives the desire to learn from new or unfamiliar experiences, memory is important in determining whether an experience is indeed unfamiliar. Memory supports the process by which the brain stores and accesses information. If the experience encountered is not novel, the stimulus to learn fades away.
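The fading of the learning stimulus with familiarity can be sketched as a count-based novelty bonus, a simple mechanism borrowed from reinforcement-learning research. This is an illustrative sketch; the 1/sqrt(count) decay is an assumption, not a claim about how biological memory works:

```python
import math
from collections import Counter

class NoveltyMemory:
    """Tracks how often each stimulus has been seen and assigns a
    novelty bonus that decays as the stimulus becomes familiar."""

    def __init__(self):
        self.counts = Counter()

    def novelty_bonus(self, stimulus):
        # Record the encounter, then reward in proportion to 1/sqrt(count):
        # the first exposure yields the largest bonus, repeats fade toward 0.
        self.counts[stimulus] += 1
        return 1.0 / math.sqrt(self.counts[stimulus])

memory = NoveltyMemory()
print(memory.novelty_bonus("red cube"))   # first encounter: 1.0
print(memory.novelty_bonus("red cube"))   # second: ~0.707
print(memory.novelty_bonus("red cube"))   # third: ~0.577
```

A genuinely new stimulus resets the bonus to its maximum, mirroring the idea that only novel experiences sustain the drive to learn.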

Curiosity and Learning

Most research on curiosity has focused on adults, using self-report measures supported by neuroscientific methods such as the analysis of behaviour with fMRI or EEG. In contrast, Jean Piaget, considered the most influential child researcher, conducted his research through observation. He argued that babies and children are constantly trying to make sense of their reality and that this contributes to their intellectual development. According to Piaget, children develop hypotheses, conduct experiments and then reassess their hypotheses depending on what they observe. Researchers have also looked at the relationship between a child’s reaction to surprise and curiosity, suggesting that children are further motivated to learn when dealing with uncertainty. Related findings show that children structure their play in a way that reduces uncertainty and allows them to discover causal structures in the world. This work is in line with Piaget’s earlier theories, which assert that the purpose of curiosity and play is to ‘construct knowledge’ through interactions with the world.

A Psychologist’s View

In the 1950s, Daniel Berlyne was one of the first psychologists to offer a comprehensive model of curiosity. Building on Berlyne’s insights, George Loewenstein, Professor at Carnegie Mellon University, proposed the ‘information gap’ theory: people become curious upon realizing that they lack desired knowledge, which creates an aversive feeling of uncertainty that compels them to uncover the missing information. Berlyne also distinguished between perceptual and epistemic curiosity. Perceptual curiosity is the driving force that motivates organisms to seek out novel stimuli, and it diminishes with continued exposure; according to Berlyne, it is the primary driver of exploratory behavior in non-human animals and human infants. Epistemic curiosity is a drive aimed not only at obtaining information for stimulation, capable of dispelling the uncertainties of the moment, but also at acquiring knowledge. He regarded epistemic curiosity as predominantly human, because the quality of human curiosity differs significantly from that of other species. Another contemporary view holds that curiosity is a special form of information-seeking, distinguished by the fact that it is internally motivated: curiosity is strictly an intrinsic drive, while information-seeking can be driven either intrinsically or extrinsically.

The Neuroscientist’s View

According to ScienceDaily, researchers at the Netherlands Institute for Neuroscience only recently discovered a brain circuit that is engaged in curiosity and novelty-seeking behavior. Using several innovative techniques, the scientists uncovered a path of multiple brain regions that converts curiosity into action. Curiosity is the motivational drive for exploring and investigating the unknown and making new discoveries; it is as essential and intrinsic for survival as hunger. Until recently, the brain mechanisms underlying curiosity and novelty-seeking behavior were unclear. Based on the results of this new research, curiosity, hunger and aggression drive three different goal-directed behaviors: novelty seeking, eating and hunting. The experiments, conducted with mice, revealed different action sequences depending on the motivational level of novelty-seeking. Moreover, the newly discovered circuit connects excitatory neurons with inhibitory neurons, and activating these neurons produced a dramatic increase in the arousal level and in the duration of interaction with novel, unfamiliar objects. The same correlation was found in the axons connecting the neurons. “It is the first time that this path of neural activity has been described. Now we can begin to understand how curiosity sometimes wins over the urge for security, and why some individuals are more curious than others”, says one of the researchers involved in the study.

The AI View

Animals and humans exhibit learning abilities and understandings of the world that are far beyond the capabilities of current AI and machine learning (ML) systems. How is it possible for an adolescent to learn to drive a car with about 20 hours of practice, and for children to learn language? Why do most humans know how to react in situations they have never encountered before? To produce reliable results, current ML systems need to be trained on enormous amounts of supervisory data labelled by human experts, or put through millions of reinforcement-learning trials. Even after engineers have hardwired hundreds of behaviors into them, they are still miles away from the quality of human judgement. The answer may lie in the ability of humans and many animals to learn so-called ‘world models’, which provide insights into how the world works. “However, some researchers think it might be enough to take what we have and just grow the size of the dataset, the model size and computer speed in order to simulate a bigger brain”, Turing Award winner Yoshua Bengio said in his opening remarks at the NeurIPS 2019 conference. This sentence succinctly captures one of the main problems of current AI research. Artificial Neural Networks (ANNs) have proven very efficient at detecting patterns in large sets of data, and they can do so in a scalable way. Increasing the size of ANNs and training them on larger sets of annotated data will, in most cases, improve their accuracy. This characteristic has created a sort of ‘bigger is better’ mentality, leading some AI researchers to seek improvements and breakthroughs by creating ever larger AI models and datasets. “While size is a factor, we still do not have any neural network that matches the human brain’s 100-billion-neuron structure”. This is why Bengio differentiates between what he calls a ‘system 1’ and a ‘system 2’ approach. “Imagine driving in a familiar neighbourhood. You can usually navigate the area subconsciously, using visual cues that you’ve seen hundreds of times. You don’t need to follow directions. You might even carry out a conversation with other passengers without focusing too much on your driving. But when you move to a new area, where you do not know the streets and the sights of the environment are new, you must focus more on the street signs, use maps and get help from other indicators to find your destination. The latter scenario is where your ‘system 2’ kicks into play. It helps humans generalize previously gained knowledge and experience in a conscious, explainable way”, Bengio said at the NeurIPS conference.
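In AI research, curiosity is often operationalized as an intrinsic reward proportional to the agent’s prediction error: transitions its internal model predicts poorly are ‘surprising’ and therefore worth exploring, and the reward fades as the model improves. The toy linear world model below is an illustrative assumption, not a description of any specific system mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny linear "world model": predicts the next state from the current one.
W = rng.normal(scale=0.1, size=(4, 4))

def curiosity_reward(state, next_state, lr=0.01):
    """Intrinsic reward = prediction error of the world model.
    The model is nudged toward the observed transition, so repeatedly
    visited transitions become predictable and lose their reward."""
    global W
    prediction = W @ state
    error = next_state - prediction
    W += lr * np.outer(error, state)    # one gradient step on squared error
    return float(np.dot(error, error))  # surprise = squared prediction error

# Revisiting the same transition makes it familiar: the reward decays.
state = rng.normal(size=4)
next_state = rng.normal(size=4)
rewards = [curiosity_reward(state, next_state) for _ in range(50)]
print(rewards[0] > rewards[-1])  # familiarity reduces intrinsic reward -> True
```

The same mechanism, scaled up with deep networks, is the basis of curiosity-driven exploration methods in reinforcement learning, where the surprise signal substitutes for sparse external rewards.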

In a new paper titled ‘A Path Towards Autonomous Machine Intelligence’, NYU Professor Yann LeCun makes the point that humans and non-human animals can learn enormous amounts of background knowledge about how the world works through observation and a small number of interactions in a task-independent, unsupervised way. According to LeCun it can be hypothesized that this accumulated knowledge may constitute the basis for what is often called common sense. Common sense can be seen as a collection of models of the world that can tell what is likely, what is plausible and what is impossible. Using such world-models, animals and humans can learn new skills with very few trials. They can predict the consequences of their actions and they can reason and imagine new solutions to problems. Importantly, they can also avoid making dangerous mistakes when facing an unknown situation. A self-driving system for cars may require thousands of trials of reinforcement learning to learn that driving too fast in a turn will result in a bad outcome, and to learn to slow down to avoid skidding. By contrast, humans can draw on their intimate knowledge of intuitive physics to predict such outcomes, and largely avoid fatal courses of behaviour when learning a new skill. Common sense knowledge does not just allow humans and animals to predict future outcomes, but also to fill in missing information, whether temporally or spatially. It allows them to produce interpretations of percepts that are consistent with common sense. When faced with an ambiguous percept, common sense allows humans to dismiss interpretations that are not consistent with their internal world-model and to pay special attention when a dangerous situation presents an opportunity for creating a refined world-model.
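LeCun’s point that a world model lets an agent reject dangerous actions without ever trying them can be sketched as imagined rollouts: candidate actions are evaluated inside the model rather than in the real world. The skid threshold and candidate speeds below are made-up stand-ins for the ‘intuitive physics’ the text describes:

```python
def predict_skid(speed_kmh, turn_sharpness):
    """Toy world model: predicts whether taking a turn of the given
    sharpness at the given speed ends in a skid. The threshold is an
    illustrative assumption, not real vehicle physics."""
    return speed_kmh * turn_sharpness > 50.0

def choose_speed(turn_sharpness, candidates=(30, 50, 70, 90)):
    """Mental simulation: evaluate each candidate speed in the model and
    keep the fastest one predicted to be safe - no real-world trial, so
    the dangerous outcomes are never actually experienced."""
    safe = [s for s in candidates if not predict_skid(s, turn_sharpness)]
    return max(safe) if safe else min(candidates)

print(choose_speed(1.0))  # sharp turn: model rules out 70 and 90 -> 50
print(choose_speed(0.2))  # gentle turn: every candidate is safe -> 90
```

The contrast with model-free reinforcement learning is exactly the one LeCun draws: a model-free driver would need many real skids to learn the same policy, while the model-based agent pays only the cost of imagined ones.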

Conclusion 

For many years now, Artificial General Intelligence (AGI) has been the holy grail of AI research, with little or no progress in overcoming problems related to, for example, causality. Based on a fundamentally new approach of self-learning systems, with curiosity and common sense as drivers, we might finally achieve a level of AI that serves humans, as opposed to humans being misused to serve AI.
