Clark Street junction and station, 1913. Picture credit: Chicago Transit Authority
One asset most humans share is the ability to think and to self-reflect. A potential human weakness, in contrast, is to be influenced into serving the interests of others. Artificial Intelligence is used in both respects: on the one hand it is advancing and accelerating science with machine-generated knowledge; on the other it is profiling and monitoring individuals in order to influence their decision-making, be it in purchasing goods or in supporting a societal cause. Dominant high-tech companies like Facebook, Google and Amazon make huge profits by selling personalized information. They reinvest these profits in AI research to strengthen their market position further, attracting the best talent with compensation packages most traditional companies find difficult to match. The annual report of Alphabet’s subsidiary DeepMind provides an insight into the effort it makes to achieve its declared goal of becoming the global leader in Artificial General Intelligence (AGI).
DeepMind, the world’s largest AI-research organisation
DeepMind, likely the world’s largest research-focused artificial intelligence operation, is losing money fast: more than USD 1 billion over the past three years. The sum is significant, perhaps more than any previous AI research operation has spent, yet far from the budgets of some of science’s largest projects. The Large Hadron Collider at CERN costs on the order of USD 1 billion per year, and the total cost of discovering the Higgs boson has been estimated at more than USD 10 billion. The big difference, however, is that CERN is a global research institution owned by its member states, with research results shared by all members. The rising magnitude of DeepMind’s losses is nevertheless impressive: USD 154 million in 2016, USD 341 million in 2017 and USD 572 million in 2018. This steady increase raises the question of whether DeepMind is on the right track scientifically, and whether investments of this magnitude are sound from Alphabet’s perspective. DeepMind does not reveal how many people it employs, but its deepening costs stem from its ongoing hiring of top researchers worldwide. In late 2017, co-founder and CEO Demis Hassabis said DeepMind had doubled its headcount to 700 over the previous 12 months; according to LinkedIn, DeepMind presently employs 838 people. For the year ending in December 2018, DeepMind’s wage bill amounted to a staggering USD 488 million, or an average of roughly USD 581,000 per employee per year (including pensions and individual travel). So far, the lack of significant revenues and the high cost of hiring researchers are not dissuading DeepMind from expanding further in the UK: the company currently has around 55 vacancies in London.
DeepMind has concentrated its efforts on a technique known as deep reinforcement learning. It combines deep learning, primarily used for recognizing patterns, with reinforcement learning, which is geared around learning from reward signals, such as a score in a game or victory or defeat in a game like chess. DeepMind gave the technique its name in 2013, in a paper showing how a single neural network system could be trained to play different Atari games as well as, or better than, humans. The paper was presumably a key catalyst in DeepMind’s January 2014 sale to Google and subsequently to its holding company Alphabet. Further advances in deep reinforcement learning have fuelled DeepMind’s impressive victories in Go and the computer game StarCraft. In some ways, deep reinforcement learning applies massive memorization to achieve an optimal result. Systems that use it are capable of awesome feats, yet they have only a shallow understanding of what they are doing. As a consequence, current systems lack flexibility and are unable to compensate if the world changes, sometimes even in tiny ways. For now, deep reinforcement learning can be trusted only in well-controlled environments. Neither the board nor the rules of Go have changed in 2,000 years, but one might not want to rely on the algorithms that won at Go in other real-world situations. Impressive as DeepMind’s achievements are, the unit seems far off its goal of becoming the world leader in AGI. The extended leave of absence of its co-founder Mustafa Suleyman since March this year has fuelled much speculation about DeepMind’s future role in Alphabet’s AI strategy; it is an indication of the strain induced by its ambitious goal.
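The reward-driven loop at the heart of reinforcement learning can be sketched with tabular Q-learning, the classical (non-deep) ancestor of the methods the 2013 Atari paper built on. The toy corridor environment, the states and the hyperparameters below are illustrative assumptions for this sketch, not anything DeepMind published:

```python
import random

# Toy corridor: states 0..4 in a line; reaching state 4 yields reward +1.
# Tabular Q-learning: the agent learns action values from reward signals alone.
N_STATES = 5
ACTIONS = [-1, +1]               # move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.3  # learning rate, discount, exploration rate

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: returns (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit current value estimates, sometimes explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: nudge value toward reward + discounted best future value
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the learned policy moves right from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

Deep reinforcement learning replaces the explicit table `Q` with a neural network that estimates the same values, which is what lets it scale from a five-state corridor to Atari screens or Go boards.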
The case for universities and independent research organisations
Today AI is applied in many research areas: neuroinformatics with deep learning, neurobiology with brain research and neurophilosophy with human behavioural research. While DeepMind has hired researchers from different scientific disciplines, one can ask whether a research organisation such as DeepMind, owned by a company with strong commercial interests, can really serve the public in the long run without becoming entangled in questions of ethics, antitrust and market monopolization. There are hundreds of universities and associated research organisations engaged in AI research. The salaries paid to their researchers in no way match those paid by Alphabet and its subsidiaries. However, the quest for knowledge is not just a matter of money. Creativity and critical thinking have a long tradition in academia, and interdisciplinary thinking fostered by multidisciplinary research is part of a university’s mission to avoid the formation of non-transparent knowledge silos. The following section provides just one example of how new approaches to AI can advance knowledge for the benefit of society at large.
Machine Behaviour, a new field in AI research
Iyad Rahwan, the director of the Center for Humans and Machines at the Max Planck Institute for Human Development, argues that the algorithms that underlie AI applications have grown so complex that we cannot always predict what they will do. According to Iyad Rahwan the best way to understand intelligent systems is to observe their behaviour. Rahwan has gathered 22 colleagues — from disciplines as diverse as robotics, computer science, sociology, cognitive psychology, evolutionary biology, artificial intelligence, anthropology and economics — to publish a paper in Nature calling for the inauguration of a new field of science called “machine behaviour.”
According to the paper there are three primary motivations for the scientific discipline of machine behaviour. First, various kinds of algorithms operate in our society, and algorithms have an ever-increasing role in our daily activities. Second, because of the complex properties of these algorithms and the environments in which they operate, some of their attributes and behaviours can be difficult or impossible to formalize analytically. Third, because of their ubiquity and complexity, predicting the effects of intelligent algorithms on humanity—whether positive or negative—poses a substantial challenge. Here are some examples relating to the issues of machine behaviour:
News ranking algorithms:
- Does the algorithm create filter bubbles?
- Does the algorithm disproportionately censor content?

Autonomous vehicles:
- How aggressively does the car overtake other vehicles?
- How does the car distribute risk between passengers and pedestrians?

Algorithmic trading:
- Do algorithms manipulate markets?
- Does the behaviour of the algorithm increase the systemic risk of a market crash?

Matching (dating) algorithms:
- Does the matching algorithm use facial features?
- Does the matching algorithm amplify or reduce homophily?
A machine behaviourist might study an AI-powered children’s toy, a news-ranking algorithm on a social media site, or a fleet of autonomous vehicles. But unlike the engineers who design and build these systems to optimize their performance according to internal specifications, a machine behaviourist observes them from the outside, just as a field biologist studies flocking behaviour in birds, or a behavioural economist observes how people save money for retirement. In an interview with Quanta Magazine Rahwan makes the point that “The reason why I like the term ‘behavior’ is that it emphasizes that the most important thing is the observable, rather than the unobservable characteristics of intelligent systems”. He believes that studying machine behaviour is imperative for two reasons. For one thing, autonomous systems are invading more aspects of people’s lives all the time, affecting everything from individual credit scores to the rise of extremist politics. But at the same time, the “behavioural” outcomes of these systems, like flash crashes caused by financial trading algorithms or the rapid spread of disinformation on social media sites, are difficult for us to anticipate by examining a machine’s code or construction alone.
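This outside-in stance can be made concrete with a black-box probe: treat a news ranker as an opaque system, feed it simulated user behaviour, and measure its outputs for signs of a filter bubble. Everything below, including the toy `recommend` function itself, is a hypothetical illustration rather than any real platform’s algorithm:

```python
import random
from collections import Counter

# A toy "news ranking" system treated as a black box: it boosts topics the
# user previously clicked (an assumed personalization rule, for illustration).
TOPICS = ["politics", "sport", "science", "culture"]

def recommend(click_history):
    """The system under study: sample a topic, biased toward past clicks."""
    weights = [1 + 5 * click_history.count(t) for t in TOPICS]
    return random.choices(TOPICS, weights=weights, k=1)[0]

# Behavioural probe: simulate a user who always clicks what is shown,
# and observe whether the served topic mix narrows over time.
random.seed(1)
history = []
for _ in range(300):
    history.append(recommend(history))

early = Counter(history[:50])   # topic mix at the start
late = Counter(history[-50:])   # topic mix after prolonged interaction
print("first 50 recommendations:", dict(early))
print("last 50 recommendations: ", dict(late))
# A growing concentration on few topics in the late window would be
# behavioural evidence of a filter bubble, obtained without ever
# reading the system's code.
```

The point of the exercise is methodological: the probe needs no access to `recommend`’s internals, so the same approach applies to a deployed system whose code is proprietary.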
It is vital that all individuals and institutions involved in AI research share their findings to uncover new interrelationships across different scientific fields. ‘Networked AI’ of this kind will be a key driver in stimulating value generation based on new and open AI models, be it in personalized healthcare, environmental protection, personalized education, societal ethics or AI safety. The human capacity to think and to correlate ideas across scientific boundaries remains a key element of human-centered AI. There will be many application areas where AI outperforms humans, but the human ability to self-reflect remains unmatched. AI machines have, or will have, some kind of self-awareness; however, it will always be confined to the specific scenarios designed by human engineers. AI research will continue to provide new methods for applying knowledge to many of humanity’s problems; the ‘dream’ of attaining Artificial General Intelligence, however, seems very far off. DeepMind might be forced to rethink its research strategy.