AI and the Challenges Ahead: A Critical Assessment

Posted by Peter Rudin on 27 December 2019 in Essay

Picture Credit: Asia Pacific Foundation of Canada

Introduction

Artificial Intelligence (AI) has received enormous media attention in the past year, covering issues such as:

  • Disruption of traditional business sectors, for example in finance or manufacturing
  • Establishment of a broad start-up culture with massive venture-capital funding
  • Focus on STEM education (Science, Technology, Engineering and Mathematics)
  • Global competition in AI research to achieve AI supremacy
  • Heavy focus on machine-learning and deep neural networks
  • Ethical concerns about the societal impact of AI

Analysing this development in more depth, one can conclude that in some cases the hype is raising unrealistic expectations, possibly resulting in future stagnation and severe financial losses:

  • Some start-ups focus on convenience, providing apps with a lot of IT but little actual AI. Their potential success hinges on marketing, following a risky ‘winner-takes-all’ strategy that requires heavy funding to reach market dominance.
  • Applying machine-learning requires significant engineering effort and human talent that is scarce and costly. Only well-funded corporations can support the research needed to incorporate and maintain AI in their ongoing business processes.
  • Following incidents of massive data theft and the spread of fake information, calls for more government regulation to address trust, ethics and human-rights issues have become widespread.
  • The monopolistic market dominance of a few high-tech companies stifles competition and innovation in the long run, possibly resulting in a global economic downturn.

Despite these difficulties, it is widely accepted that machine-learning is making significant contributions to advancing scientific research across many academic disciplines. Analysing massive volumes of experimental data with machine intelligence has accelerated the development of new products and services that add significant economic value to society. Most of this research is conducted by top universities relying on government funding and donations from the private sector. However, this scenario is changing as established companies ‘raid’ the universities for the best talent with far more attractive compensation packages, causing an AI brain-drain.

Consequences of the AI brain-drain

Designing neural networks is fundamentally different from other forms of programming; discovering and developing new applications for them is often closer to scientific research than to traditional software development. The sudden rise in popularity of deep-learning has created a surge in demand for AI researchers and scientists, and as in any field where supply does not meet demand, those with stronger financial resources have a distinct advantage in attracting the best talent. In recent years, wealthy tech companies and research labs such as Google, Facebook and OpenAI have used huge salaries, stock options and other bonuses to lure AI scientists away from academic institutions. A recent study by researchers at the University of Rochester found that, over the last 15 years, 153 artificial-intelligence professors at American and Canadian universities left their posts for opportunities in the commercial sector. The trend has accelerated in the past few years, with 41 professors making the move in 2018 alone.

While handsome salaries play a large role in drawing AI professors and researchers away from universities, they are not the only factor contributing to the brain-drain. Scientists also face a cost problem when working on AI research projects: some areas of AI research require access to huge amounts of data and compute resources. This is especially true of reinforcement-learning, an important area of AI research used, for example, in robotics, game bots, resource management and recommendation systems. The development and computation costs of training reinforcement-learning models can easily reach millions of dollars.

With more and more scientists and researchers moving to the commercial sector, universities will have a hard time hiring qualified professors to train the next generation of AI scientists. This will further widen the AI skills gap, to the point where publicly funded research could be replaced by new business models. With ‘research as a business’ operated by privately owned companies, the licensing of AI modules will reduce the cost of developing AI products and services, disrupting the traditional research role of universities.

The Problem with Machine Learning

While educational institutions are overwhelmed by an onslaught of new students seeking a degree in machine-learning, there is growing concern among members of the AI community that machine-learning and deep neural networks (DNNs) suffer from severe flaws, for example:

  • A self-driving car approaches a stop sign, but instead of slowing down, it accelerates into the busy intersection. An analysis reveals that four small rectangles, stuck to the face of the stop sign, can fool the car’s onboard AI system into misreading the word ‘stop’ as ‘speed limit 45’.
  • Researchers have demonstrated how to deceive facial-recognition systems by sticking a printed pattern on glasses or hats and that speech-recognition systems can be tricked into hearing phantom phrases by inserting patterns of noise in the audio.
  • Pixels maliciously added to medical scans can fool a DNN into wrongly detecting cancer, as reported by a study conducted this year.
  • A neural network correctly classifies a picture as a banana, yet it can still be fooled: adding a slight amount of noise, or placing an adversarial patch beside the banana, can make the DNN classify the picture as a toaster.

These are just a few examples of how easy it is to break this leading AI pattern-recognition technology. DNNs have proved very successful at correctly classifying all kinds of input, including images, speech and data on consumer preferences. They are part of daily life, running everything from automated telephone systems to user recommendations on the streaming service Netflix. Yet tiny alterations to inputs, typically imperceptible to humans, can confuse the best neural networks around. “There are no fixes for the fundamental brittleness of deep neural networks,” argues François Chollet, an AI engineer at Google in Mountain View, California. To move beyond the flaws, he and others say, researchers need to augment pattern-matching DNNs with extra abilities: for instance, making AIs that can explore the world for themselves, write their own code and retain memories. Such systems, some experts think, will form the story of the coming decade in AI research.
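To make this brittleness concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest ways to generate such imperceptible perturbations. PyTorch is assumed; the untrained toy model and the random input are placeholders standing in for a trained classifier and a real photograph.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder classifier; a real attack would target a trained DNN.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32)      # stand-in for a real photograph
label = model(image).argmax(dim=1)    # the class the model currently predicts

# Compute the gradient of the loss with respect to the input pixels.
image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

# Nudge every pixel slightly in the direction that increases the loss.
epsilon = 0.01                        # small enough to be invisible to humans
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

with torch.no_grad():
    print("prediction before:", model(image).argmax(dim=1).item())
    print("prediction after: ", model(adversarial).argmax(dim=1).item())
```

Against a trained network, a perturbation of this kind often flips the predicted class even though the two images look identical to a human; on the untrained placeholder above, the effect is illustrative only.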

New frontiers in AI

Thinking that artificial intelligence works in the same way as a human brain can be misleading and even dangerous, argues David Watson of the Oxford Internet Institute and the Alan Turing Institute in a recent paper published in the journal Minds and Machines. “No doubt, DNN algorithms are powerful, but to think that they ‘think and learn’ in the same way as humans do would be incorrect”, Watson says. Neural nets are ‘myopic’: they can see the trees, so to speak, but not the forest. For example, a human can say ‘that cloud looks like a dog’, whereas a DNN would say that the cloud is a dog. “It would be a mistake to say that these algorithms recreate human intelligence”, Watson says. “Instead, they introduce some new mode of inference that outperforms us in some ways and falls short in others.”

Deep-learning, the currently dominant AI technique, owes its success in large part to an abundance of data and computing resources. Most advances in the field come from creating bigger neural networks and training them with more and more data. But the excitement about pouring more data and compute power into deep-learning models has blinded most researchers to one of the fundamental problems the technology still suffers from: causality. ‘The Book of Why: The New Science of Cause and Effect’, written by award-winning computer scientist Judea Pearl and science writer Dana Mackenzie, delves into this topic. In his book, Pearl argues for moving past data-centric approaches and embedding AI algorithms with the capability to find causes. This could be the one thing that stands between current AI and human intelligence: the power to ask questions and to look for answers.

Data do not understand causes and effects; humans do. Without causal models, AI algorithms will never get closer to replicating human intelligence. Our causal intuition alone is usually enough for handling the kind of uncertainty we find in household routines or in our professional lives. “While awareness of the need for a causal model has grown, many researchers in artificial intelligence would like to skip the hard step of constructing or acquiring a causal model and rely solely on data for all cognitive tasks,” Pearl notes. Causal explanations, not dry facts, make up the bulk of our knowledge and should be the cornerstone of machine intelligence. With that in mind, Pearl introduces the ‘ladder of causation’, a three-level model for evaluating the intelligence of living or artificial systems:

  • Seeing is everything you can learn from observation alone: the kinds of correlations you can find in the data you collect from the world. This is the level we share with animals.
  • Doing covers what we learn by going beyond observation and intervening: performing experiments, controlling for specific variables and drawing conclusions from the results.
  • Imagining is the causal model of modern humans: the ability to reason about counterfactuals and imagine alternate worlds.

“If I could sum up the message of this book in one pithy phrase, it would be that you are smarter than your data. Data do not understand causes and effects; humans do,” Pearl writes.
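The gap between the first two rungs can be illustrated in a few lines of code. The toy simulation below uses a hypothetical scenario (not taken from the book): a hidden confounder Z drives both X and Y, so observation (‘seeing’) suggests that X makes Y more likely, while an intervention (‘doing’) reveals that X has no effect at all.

```python
import random

random.seed(0)
N = 100_000

def observe():
    """Rung one ('seeing'): the confounder Z drives both X and Y."""
    z = random.random() < 0.5
    x = random.random() < (0.9 if z else 0.1)   # Z influences X
    y = random.random() < (0.9 if z else 0.1)   # Z influences Y; X does not
    return x, y

def intervene():
    """Rung two ('doing'): we set X ourselves, cutting the Z -> X link."""
    z = random.random() < 0.5
    x = random.random() < 0.5                   # do(X): assigned at random
    y = random.random() < (0.9 if z else 0.1)
    return x, y

def p_y_given_x1(samples):
    with_x = [y for x, y in samples if x]
    return sum(with_x) / len(with_x)

print("P(Y=1 | X=1) from observation:    %.2f" % p_y_given_x1([observe() for _ in range(N)]))
print("P(Y=1 | do(X=1)) by intervention: %.2f" % p_y_given_x1([intervene() for _ in range(N)]))
```

Observation yields P(Y=1 | X=1) ≈ 0.82, but intervention yields P(Y=1 | do(X=1)) ≈ 0.50: the correlation visible on the first rung disappears on the second, which is exactly the distinction a purely data-driven learner cannot make.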

Conclusion

AI is research in progress. ‘Throw away your textbooks and start from scratch’ is the advice of AI heavyweights such as Geoffrey Hinton and Stuart Russell as they research new ways to remove the current limitations of DNNs. Progress in AI is likely to disrupt the disruptions we have only just digested. Business economics and value generation are likely to fuel this development further, as humans are increasingly challenged to adapt to a paradigm shift whose outcome is hard to predict.

One Comment

  • One criticism of Pearl‘s notes (I would have several more), but an interesting question surfaces: if seeing, doing and imagining are important concepts, how is it that nearly blind ants, which see very poorly or not at all and are equipped with little capacity for imagination, still do the right thing? Do they really have a concept of actio -> reactio? I doubt it, and the evidence speaks strongly against it. See e.g.: https://www.amazon.de/Ants-Bert-H%C3%B6lldobler/dp/3540520929 et al.
