Has Machine-Learning Become Alchemy?

Posted by Peter Rudin on 23 March 2018 in Essay


The Alchemist in Search of the Philosopher’s Stone discovers Phosphorus.
Painting by Joseph Wright, 18th century

Introduction

With the provocative title ‘Has Machine-Learning Become Alchemy’, Google’s Ali Rahimi, winner of the Test-of-Time award at the 2017 Conference on Neural Information Processing Systems (NIPS), gave a critical assessment of the state-of-the-art in machine-learning.

According to Rahimi, machine-learning research and alchemy have a few things in common. Alchemists discovered metallurgy, glass-making, and various medications, while machine-learning researchers have managed to build machines that can beat human Go players, identify objects in pictures, and recognize human voices or translate text. The medieval art of alchemy was once believed capable of creating gold and even human immortality. However, the alchemists’ trial-and-error methods were gradually abandoned after pioneers like Isaac Newton established the sciences of physics and chemistry in the 1700s. Rahimi believes that the successes of contemporary machine-learning models, which are mostly based on empirical methods, are plagued with the same issues as alchemy. The inner mechanisms of machine-learning models are so complex and opaque that researchers often do not understand why a model produces a specific output from a given set of data inputs, an issue also referred to as the ‘black box problem’. Rahimi believes the lack of theoretical understanding or technical interpretability of machine-learning models is cause for concern, especially if AI takes responsibility for critical decision-making. “We are building systems that govern healthcare and mediate our civic dialogue. We can influence elections. However, I would like to live in a society whose systems are built on top of verifiable, rigorous, thorough knowledge, and not on alchemy,” said Rahimi. Rahimi is not alone in this view. There are increasing signs from the machine-learning community that AI needs another wave of innovation to reach the level of Artificial General Intelligence (AGI). This wave should include contextual intelligence beyond the current statistical, algorithmic interpretation of data.

From Common-Sense to Contextual Adaptation

Common-sense is sound practical judgment concerning everyday matters, or a basic ability to perceive, understand, and judge what is shared by (“common to”) nearly all people. The first type of common-sense, good sense, can be described as “the knack for seeing things as they are, and doing things as they ought to be done.” The second type is sometimes described as folk wisdom, “signifying unreflective knowledge not reliant on specialized training or deliberative thought.” The two types are intertwined, as the person who has common-sense is in touch with common-sense ideas, which emerge from the lived experiences of those common-sensical enough to perceive them.

The main problem we face is bridging the semantic gap between common-sense and logic. Statistical systems built on deep neural networks can translate text, but they do not understand the text’s context. The attempt to tackle this kind of problem is not new. Since the beginning of AI as an academic discipline in 1956, there have been serious attempts to include common-sense as part of AI solutions. For a machine to be considered truly intelligent, it must be able to reason using the broad scope of information that humans are expected to possess. For example, a photo of a man carrying a horse on his shoulders may seem unusual to us, but for an AI, which has never known or experienced the abstract notion of the force of gravity, there is nothing absurd about this photo. According to Gary Marcus, professor of psychology and neural science at NYU, “the human brain is clearly capable of doing better than artificial intelligence, which today is primarily nurtured through deep learning. Although the brain uses techniques that are like deep learning for certain tasks, it is also capable of developing, recording and manipulating rules governing the way the world functions, so that it can draw conclusions even with limited experience”. A contextual model must combine perceiving and learning as one set of tasks, while abstracting and reasoning define the other set. Implementing this model to enhance today’s ‘narrow’, machine-learning-based AI would eventually lead to AGI and, with it, the Singularity.

Past and Current Efforts to Solve an Old Problem

The trouble with common-sense thinking is that one cannot experiment with it until one has a big common-sense database. In 1985 Doug Lenat, at that time a young professor at Stanford, started to build up a database called Cyc to classify and store common-sense knowledge. Since then, Lenat and his colleagues have spent years feeding common-sense knowledge into Cyc. Thirty-three years and some 2,000 man-years of effort later, building a Cyc application still requires a significant engineering effort. The trouble is that Cyc is difficult to use, proprietary and, so far, not much used by researchers.

At Imperial College London, Murray Shanahan and colleagues are working on a way around this problem using an old, unfashionable technique called symbolic AI. Shanahan’s idea is to combine symbolic AI with modern machine-learning. Symbolic AI never took off because manually describing everything quickly proved overwhelming. “Modern AI has the potential to overcome that problem by using neural networks, which learn their own representations of the world around them,” says Shanahan.
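To make the hybrid idea concrete, here is a minimal Python sketch of the division of labor Shanahan describes: a learned perception component emits symbolic facts, and a small, hand-written rule base reasons over them. Everything here is hypothetical; the class `NeuralPerception`, the predicates and the single rule are placeholders for a trained vision model and a real knowledge base, not anyone’s actual system.

```python
# Illustrative neuro-symbolic sketch: a stubbed 'neural' perception module
# emits symbolic facts, and a tiny hand-written rule reasons over them.
# All names, predicates and rules here are hypothetical placeholders.

class NeuralPerception:
    """Stand-in for a trained vision model that maps an image to facts."""

    def extract_facts(self, image):
        # A real model would learn this mapping from data; here we hard-code
        # the essay's example of a man carrying a horse on his shoulders.
        return {("carries", "man", "horse"),
                ("heavier_than", "horse", "man")}

def check_plausibility(facts):
    """Common-sense rule: carrying something heavier than yourself is odd."""
    for rel, a, b in facts:
        if rel == "carries" and ("heavier_than", b, a) in facts:
            return f"implausible: {a} carries {b}, yet {b} is heavier than {a}"
    return "nothing unusual detected"

facts = NeuralPerception().extract_facts(image="man_carrying_horse.jpg")
print(check_plausibility(facts))  # -> implausible: man carries horse, ...
```

The appeal of this division of labor is that the neural half can be retrained on new data while the symbolic half remains inspectable, which speaks directly to the black-box concern Rahimi raises.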

According to a BBC News article which appeared in June 2016, Google is also working on a ‘common-sense AI engine’. “We are on the brink of a brand-new era of computing,” explained Emmanuel Mogenet, head of Google’s research lab in Zurich. “Computers need to understand some obvious things about the world, so we want to build a common-sense database. Google has always been in the business of natural language because that is how people search, but we have never really understood the questions. We just match keywords with content and rank that content smartly. The next stage is to truly understand what people are asking,” he said.

Paul G. Allen’s Initiative

Paul G. Allen, Microsoft co-founder and philanthropist, announced on February 28, 2018 that he is committing USD 125 million over three years to support the Allen Institute for Artificial Intelligence (AI2) in launching Project Alexandria, a new research initiative on common-sense AI. “When I founded AI2, I wanted to expand the capabilities of artificial intelligence through high-impact research. Early in AI research, there was a great deal of focus on common-sense, but that work stalled. AI still lacks what most 10-year-olds possess: ordinary common-sense. This is an extremely complicated challenge,” said Mr. Allen. “If we want AI to approach human abilities and have the broadest possible impact in research, medicine and business, we need to fundamentally advance AI’s common-sense abilities. Project Alexandria will integrate knowledge developed by machine reading and reasoning (AI2’s Project Aristo), natural language understanding (AI2’s Project Euclid), and computer vision (AI2’s Project Plato) to create a new, unified and extensive common-sense repository. This repository can then be used as a foundation for future AI systems to build upon.”

The path for adding common-sense to AI requires years of research and many checkpoints along the way. Major steps in the next few years include introducing standard measurements for the common-sense abilities of an AI system and developing novel crowdsourcing methods to acquire common-sense knowledge from people at an unprecedented scale.
To advance this vision, AI2 has come up with a new test called ARC, which stands for ‘AI2 Reasoning Challenge’. The purpose of the test is to support research into the design of enhanced AI systems that can analyze contextual relations. The test contains a dataset of about 8,000 multiple-choice science questions typically used in grade-school exams. Each question requires some understanding of common-sense, for example: ‘Which item is not made from a material grown in nature?’ The possible answers are a cotton shirt, a wooden chair, a plastic spoon and a grass basket. Answering this question requires broad common-sense knowledge of the world, something which current machine intelligence cannot provide.
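To see why such questions defeat keyword-based systems, consider a naive word-overlap baseline, sketched below in Python. The question is the one quoted above; the four-sentence ‘fact corpus’ and the scoring function are illustrative assumptions, not the official ARC data format or any published baseline.

```python
# Naive word-overlap baseline for an ARC-style multiple-choice question.
# The mini fact corpus and the scoring scheme are illustrative assumptions.

corpus = [
    "cotton is a plant grown in nature and used to make shirts",
    "wood comes from trees which grow in nature",
    "plastic is a synthetic material made from petroleum",
    "grass grows in nature and can be woven into baskets",
]

question = "Which item is not made from a material grown in nature?"
choices = ["a cotton shirt", "a wooden chair",
           "a plastic spoon", "a grass basket"]

def overlap_score(choice):
    """Best count of content words the choice shares with any corpus sentence."""
    words = set(choice.split()) - {"a", "the"}
    return max(len(words & set(sentence.split())) for sentence in corpus)

# Pick the choice whose wording best matches the retrieved 'facts'.
best = max(choices, key=overlap_score)
print(best)  # -> 'a cotton shirt' (wrong: the overlap ignores the word 'not')
```

The baseline ties the shirt, the spoon and the basket at the same score and blindly returns the first of them, because nothing in it represents the negation ‘not’ or knows which materials are grown in nature; that gap between keyword matching and understanding is precisely what ARC is designed to expose.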

Conclusion

“Common-sense is regarded as the holy grail of artificial intelligence. It represents one of the most fundamental and difficult problems for AI. It is the precondition for Artificial General Intelligence (AGI); until we get there, we will be stuck with narrow AI that is rarely robust and never as flexible as human reasoning. Consequently, I am hugely excited about Project Alexandria” says Gary Marcus from NYU. “There has been only one serious large-scale effort (Project Cyc by Doug Lenat) to endow machines with common-sense. It was launched over three decades ago and has, in the opinion of many experts, not reached maturity. The time is right for a fresh approach to the problem.”

Observing the progress of AI over the last two years, it is encouraging to see that ‘critical thinking’ about the future direction of AI is growing. Life is not just an issue of algorithms and data. Algorithms are statistical tools with a potential for complexity that can escape human comprehension. The interpretation of massive data pools with algorithms might be biased or manipulated for commercial, political or social reasons. Applying common-sense as a check against misuse or misinterpretation is one way to assure that the advancement of AI continues in a positive direction. Common-sense is a ‘by-product’ of our human evolution, which started thousands of years ago. This evolution thrives on one important asset which we should keep in mind as we proceed:

CREATIVITY


One Comment

  • Hello Peter,
    another great essay; with each essay you clearly develop an essential and educative element and theme along your ‘conquest’ for the Human-I & AI grail ;), thank you.
    Alchemy was in the past reserved for very few educated persons (in the times of Goethe and Faust). The desire to seek ultimate answers is probably one of the driving forces of humans. What has changed in recent years, with the Internet and globally indexed and shared knowledge, is the scale: the number of ‘alchemists’, who usually pursue rather disruptive business targets or simply some state of Nirvana (probably both, with the implicit contradiction).
    Common-sense, by contrast, refers to ‘past common successful experiences’ that are reapplied in a similar context (automatically, or combined with rational observation and reflection).
    The very many wheels are spinning fast. Yes, I hope and believe that the positive changes by and for humanity will outweigh the negative fall-out.
    Greetings, Hannes
