The Stanford AI100 Report: Is AI at an Inflection Point?

Posted by Peter Rudin on 22 October 2021 in Essay

Inflection Point (Picture Credit: sidetracked.com)

Introduction

The new 2021 Stanford AI100 report, titled ‘Gathering Strength, Gathering Storms’, is the second in a series following the inaugural AI100 report published five years earlier, in September 2016. Stanford plans to publish an update every five years, for a hundred years or longer. In addition to influencing researchers and guiding decisions in industry and government, the report aims to provide the general public with a scientifically and technologically accurate portrayal of the current state of AI and its future potential. This essay is an attempt to explain why a highly reputable scientific community, authoring an eighty-two-page report, comes to the conclusion that AI is at an inflection point, implying that the future of AI has reached a critical state. “Whereas AI research has traditionally been the purview of computer scientists and researchers studying cognitive processes, it has become clear that all areas of human inquiry, especially the social sciences, need to be included in a broader conversation about the future of the field,” the researchers concluded. What follows is a summary of the report’s key messages, quoting selected passages, and a closing critical assessment of the report and the future of AI.

‘Gathering Strength’

AI technologies that augment human capabilities can be very valuable in situations where humans and AI have complementary strengths. An AI system might be better at synthesizing available data and making decisions in well-characterized parts of a problem, while a human may be better at understanding the implications of that data. AI software can also function autonomously, which is helpful when large amounts of data need to be examined and acted upon. In most cases, the main factors holding back these applications are not the algorithms themselves, but the collection and organization of appropriate data and the effective integration of the algorithms into broader sociotechnical systems. In the last five years, the field of AI has made major progress in almost all of its standard sub-areas, including natural language processing, image and video generation, decision-making, and the integration of vision and motor control for robotics. In addition, major shifts in the understanding of human intelligence have brought the following topics to the fore:

Collective intelligence – the view that intelligence is a property not only of individuals, but also of collectives.

Cognitive neuroscience – studying how the brain’s hardware is involved in implementing psychological and social processes.  

Computational modelling – applying machine learning to model cognitive activities such as planning and prediction.

However, despite progress in these domains, the nature of consciousness and how people integrate information from multiple modalities, multiple senses and multiple sources remains largely a mystery.

‘Gathering Storms’

As AI systems prove increasingly beneficial in real-world applications, they have broadened their reach, increasing the risks of misuse. One of the most significant dangers of AI is ‘techno-solutionism’, the view that AI is a panacea when it is merely a tool. There is an aura of neutrality and impartiality associated with AI-based decision-making, and systems may therefore be accepted as objective even though they are the result of biased historical decisions or even blatant discrimination. Without transparency concerning either the data or the algorithms that interpret it, the public may be left in the dark as to how decisions that impact their lives are being made. AI systems are also being used in the service of disinformation, making them a potential threat to democracy and a tool for fascism. Insufficient thought given to the human factors of AI integration has led to an oscillation between mistrust of such systems and over-reliance on them. AI algorithms are playing a role in healthcare decisions such as the administration of vaccines or the treatment of cancer and memory loss. However, overhyped presentations of AI’s capabilities tend to cloud the fact that a ‘storm’ is indeed approaching. Only recently have opinions emerged from the AI research community that Artificial General Intelligence (AGI) – considered the holy grail of AI – might be a myth nourished by science fiction and its distorted view of reality. Biased and distorted data used for AI-supported decision-making raises societal and ethical concerns, which have so far outpaced the measures taken by governments, especially with respect to privacy and data protection.

Is Artificial General Intelligence a Myth?

Recent research efforts aim to make AI systems more general by enabling them to learn from a small number of examples, to learn multiple tasks continually without inter-task interference, and to learn in a self-supervised or intrinsically motivated way. While these approaches have shown promise in several specialized domains, an important missing ingredient, long sought in the AI community, is common sense. The informal notion of common sense includes several components of general intelligence that humans mostly take for granted: a vast amount of mostly unconscious knowledge about the world, an understanding of causality, and an ability to perceive abstract similarities between situations. AI systems need to learn causal models and intuitive physics, which describe our everyday experience of how objects move and interact, and they need abilities for abstraction and analogy. Despite recent advances, their performance in these respects remains inadequate compared with human capabilities, and without being more tightly coupled to the physical world, AI systems may never achieve common sense.

Another important source of generality in natural intelligence is knowledge about cause and effect. Current machine-learning techniques can discover hidden patterns in data, allowing systems to solve ever-increasing varieties of problems. Neural-network transformer models like GPT-3, for example, built on the capacity to predict words in sequence, display a tremendous capacity to correct grammar, answer natural-language questions, write computer code, translate languages and summarize complex or extended specialized texts. Today’s machine-learning models, however, have only a limited capacity to discover causal knowledge of the world. They are limited in predicting how novel interventions might change the world they interact with, or how an environment might have evolved under different conditions. To create systems significantly more powerful than those in use today, we need to teach them to understand causal relationships.
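To make the phrase ‘predict words in sequence’ concrete, the following sketch queries a transformer language model for its most likely next words. It is a minimal illustration, not taken from the report: it assumes PyTorch and the Hugging Face transformers library, and it uses the openly available GPT-2 as a small stand-in for GPT-3, whose weights are not public; the prompt string is an arbitrary example.

```python
# Minimal sketch: next-word prediction in a transformer language model.
# Assumes `pip install torch transformers`; GPT-2 stands in for GPT-3.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The report concludes that artificial intelligence is at an"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits has shape (batch=1, sequence_length, vocabulary_size)
    logits = model(**inputs).logits

# The scores at the final position rank every vocabulary entry as a possible
# continuation; the highest-scoring entries are the model's next-word guesses.
top5 = torch.topk(logits[0, -1], k=5)
for score, token_id in zip(top5.values, top5.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}: {score.item():.2f}")
```

Everything such a model does, from translation to code generation, rests on repeating this single step. Nothing in the procedure represents interventions or cause and effect, which is precisely the limitation described above.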

Societal and Ethical Issues

From deepfake videos to online bots manipulating public discourse and spreading fake news, AI systems deployed in the service of disinformation risk undermining social trust. The technology can be co-opted by criminals, ideological extremists or simply special-interest groups to manipulate people for economic gain or political advantage. Disinformation poses a serious threat because it changes and manipulates evidence, creating social feedback loops that undermine any sense of objective truth. Debates about what is real quickly evolve into debates about who gets to decide what is real, resulting in renegotiations of power structures that often serve entrenched interests. New predictive technologies may therefore demand new public-governance practices. Alongside the production of new technical systems, we need to consider what organizational and policy measures should be put in place to govern their use in the public sector. New proposals in both the US and the European Union exemplify potential approaches to AI regulation. Appropriate measures may include establishing policies that govern data use: determining how data is shared or retained, whether it can be publicly accessed, and the uses to which it may be put. Some researchers have proposed algorithmic impact assessments, akin to environmental impact assessments. Business interests, however, have so far prevented any effective self-regulation. Matters are further complicated by questions of jurisdiction and by algorithmic objectives imposed at a state or regional level that are inconsistent with the goals of local decision-makers.

The Report’s Conclusion

Although the current state of AI technology is still far short of the field’s founding aspiration of recreating full human-like intelligence in machines, research and development teams are leveraging recent advances and incorporating them into society-focused applications. For example, the use of AI techniques in healthcare and the application of the brain sciences are both beneficiaries of and contributors to AI advances. The field’s successes, however, have brought it to an inflection point at which the downsides and risks revealed by the broad application of AI must be taken seriously. The increasing capacity to automate decisions is a double-edged sword: intentional deepfakes, or simply unaccountable algorithms making mission-critical recommendations, can result in people being misled, discriminated against and even physically harmed. Algorithms trained on historical data are disposed to reinforce and even exacerbate existing biases and inequalities. Whereas AI research has traditionally been the purview of computer scientists and researchers studying cognitive processes, it has become clear that all areas of human inquiry, especially the social sciences, need to be included in a broader conversation about the future of the field.

Governments play a critical role in shaping the development and application of AI, and they have been rapidly adjusting to acknowledge the importance of the technology to science, economics and the process of governing itself; government institutions, however, are still behind the curve. Our strength as a species comes from our ability to work together and accomplish more than any of us could alone. AI needs to be incorporated into that community-wide system, with clear lines of communication between human and automated decision-makers. At the end of the day, the success of the field will be measured by how it has empowered all people, not by how efficiently machines devalue the very people we are trying to help.

Adding a Personal Note 

The report’s conclusion that AI has reached an inflection point is extremely well documented, but its ‘polite’ message does not reflect the urgency of the actions required. The exponentially rising complexity of AI’s impact on our daily lives, and the need for corrective action, cannot be dealt with at five-year intervals. The report implies that health is likely to become a key benchmark of our ability to deal with AI technology. A new generation of real-time sensors and an enormous computational capacity for handling massive health-related data can address both physical and mental issues. The latter will likely mark the inflection point of our capacity to weigh AI’s potential benefits, accessible to all regardless of social standing, against a threatening scenario of monopolistic practices led by a few high-tech companies following their own agenda of power and profit.

One Comment

  • Hello Peter,
    A very relevant essay and an excellent elaboration of the report, whose information is rather chopped into chapters (it seems they wanted to flag all topics and themes in isolation) and which, as you mention, concludes politely.
    They probably also gave too little attention to China’s huge progress (a ‘role model’ of sorts) and to the kind of governmental deadlock this causes in the US when it comes to moderating and controlling the AI of the giant US companies (doing so could give China a free ride; with AI already ‘spiced’ into many products and services, that would lead to full market dominance). Some very creative, smart global governance may be required, though today it is very complex to define, enforce and retain.
    Your bi-weekly essays represent an intellectually and humanly engaged, high-quality lifeline and vision during complex years; many thanks. Best greetings,
    Hannes
