Picture Credit: Levtechconsulting.com
On May 22, 2019, the OECD's 36 member countries, including the US, formally adopted the first set of intergovernmental policy guidelines on Artificial Intelligence (AI), agreeing to uphold international standards to ensure that AI systems are designed to be robust, safe, fair and trustworthy. The OECD does not include China, and the principles outlined by the group stand in contrast to the way AI is being deployed there, especially with regard to facial recognition and the surveillance of ethnic groups associated with political dissent. The five OECD Principles on AI read as follows:
- AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
- AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards—for example, enabling human intervention where necessary—to ensure a fair and just society.
- There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
- AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
- Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
While these principles do not address the issues involved with the market domination of a few high-tech giants, they do signal governments' realization that the ever-growing societal impact of AI requires guidelines and standards. Point 2 of the OECD principles confirms the commitment to democratic values, while point 3 is vital for the ongoing advancement towards Human-Centric AI. Augmenting humans with AI requires transparency and accountability: humans must remain in control of machines, not the other way around. In that respect the OECD principles, agreed to by all member countries, represent a milestone in the short history of AI. Many initiatives launched by private institutions and universities have issued warnings about the potential dangers and threats of AI. The real danger lies not in a sudden apocalypse, but in the gradual degradation and disappearance of what makes human experience and existence meaningful. Implementing the OECD principles represents a major step towards stopping this degradation and advancing humanity in a beneficial and sustainable direction over the decades to come.
Impact of implementing the OECD principles
It is up to each member country to translate the OECD principles into its own legal framework. One can foresee, however, that countries determined to remain competitive in the research and application of AI will need to resolve societal, individual, and organizational challenges:
Societal challenges: Technology-induced change is accelerating. A culture of life-long learning, from kindergarten all the way to retirement, will be a key differentiator in reaping the benefits of AI. Point 2 of the OECD principles confirms the adherence to human rights. The right to education is reflected in Article 26 of the Universal Declaration of Human Rights, which states: “Everyone has the right to education. Education shall be free, at least in the elementary and fundamental stages. Elementary education shall be compulsory. Technical and professional education shall be made generally available and higher education shall be equally accessible to all based on merit. Moreover, education shall be directed to the development of human personality”. AI-supported personalized education provided by virtual teachers and avatars will improve educational efficiency and enhance student engagement. Educational institutions need to revamp their learning concepts and, above all, educate and motivate teachers to support this change.
Individual challenges: It is the individual's responsibility to assess and further develop one's talents as part of personality development. This implies knowing what one wants, in order to withstand the threat of being manipulated or brain-hacked to serve the commercial, social, or political interests of others. Being augmented by AI requires a basic understanding of AI. Like learning a language or understanding physics in secondary education, AI will provide tools that add value to one's existing internet connectivity. Unlike learning the fundamentals of a language, however, learning AI requires continuous updates for years to come, as science and technology slowly close the gap between machine intelligence and human intelligence. The motivation for life-long learning and the chance to work in new corporate environments expand one's professional horizon and support personality development towards a positive life experience.
Organizational challenges: Having accomplished digital transformation by automating production, communication, and development processes, corporate success and sustainability will largely depend on how humans add creativity and innovation in a potentially zero-margin market. Supporting collective intelligence is outside the realm of today's ‘narrow’ AI. Human-Centric AI, complementing machine learning with machine understanding, will add new scenarios to a corporation's management team as younger team members challenge the control of ‘outdated’ managers. Managing this generational issue fairly is one example of how human resource management – next to the management of technology and communication – will be key to corporate success.
The OECD principles as a foundation for moving to the next level of AI
As AI moves from machine knowledge to machine understanding, mimicking the human experience of learning, the current bottlenecks of ‘narrow’ AI will gradually be resolved by Human-Centric AI. This process can be visualized by the so-called DIKW (Data-Information-Knowledge-Wisdom) Pyramid, in which each layer adds context and meaning to the one below.
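The layering of the DIKW Pyramid can be made concrete with a small sketch. The scenario below (sensor readings, the unit, and the threshold rule are all invented for illustration) shows how raw data gains context to become information, how a rule turns information into actionable knowledge, and where the ‘wisdom’ layer begins:

```python
# Toy DIKW illustration -- all names and values are invented for this sketch.

raw_readings = [21.0, 21.5, 35.2, 21.3]  # Data: bare numbers, no context

# Information: data enriched with context (what was measured, in which unit)
information = [{"sensor": "room_temp_C", "value": v} for v in raw_readings]

# Knowledge: a stated (or learned) rule applied to the information
ALARM_THRESHOLD_C = 30.0
alerts = [reading for reading in information
          if reading["value"] > ALARM_THRESHOLD_C]

# Wisdom -- judging whether and how to act on an alert given wider goals
# and values -- is precisely the layer today's 'narrow' AI does not reach.
print(len(alerts))  # → 1
```

The point of the sketch is the boundary: the first three layers are routine for machines, while the top of the pyramid still belongs to humans.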
Today’s AI machines have knowledge; however, they lack understanding. Deep learning algorithms coupled with massive data analytics have reached near-perfect language translation, yet machines do not understand the meaning of the text they translate. The human brain does not work like a von Neumann computer, and there are still many mysteries as to how the brain performs learning tasks, memorizes and associates language and visual information, or communicates with and coordinates our body on 20 watts of energy. To reduce the gap between human and machine intelligence, the development of machine common sense represents the next major challenge in advancing AI. Wikipedia defines common sense as the basic ability to perceive, understand, and judge things that are shared by nearly all people and can reasonably be expected of nearly all people without need for debate. It is common sense that helps us quickly answer questions like “Can an elephant fit through a door?” or “Can a child have a doctorate in medicine?” The vast majority of common sense is never expressed by humans because there is no need to state the obvious. A wide range of strategies could be employed to make progress on the difficult challenge of developing machine common sense, for example:

- provide a service that learns from experience, like a child, in order to construct computational models that mimic the core domains of cognition for objects (intuitive physics), human behaviour (intuitive psychology) and places (spatial navigation); or
- provide a service that learns from reading the Web, like a research librarian, to construct a common-sense knowledge repository capable of answering natural-language and image-based questions about common-sense phenomena.

For more than 35 years, Doug Lenat, a researcher from Stanford, has been engaged in a project called ‘Cyc’ to codify, in machine-usable form, the millions of pieces of knowledge that compose human common sense.
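To make the knowledge-repository idea tangible, here is a deliberately minimal sketch (the facts, sizes, and function names are all invented for illustration, not drawn from Cyc or any real system) of how stored “obvious” facts could answer a size question like the elephant-and-door example:

```python
# A toy common-sense repository: typical heights in metres -- the kind of
# fact humans rarely state because it is obvious. All values are invented.
TYPICAL_HEIGHT_M = {"elephant": 3.0, "child": 1.2, "door": 2.0, "cat": 0.3}

def can_fit_through(thing: str, opening: str) -> bool:
    """Answer 'can X fit through Y?' from stored common-sense facts."""
    return TYPICAL_HEIGHT_M[thing] <= TYPICAL_HEIGHT_M[opening]

print(can_fit_through("elephant", "door"))  # → False
print(can_fit_through("cat", "door"))       # → True
```

The hard part, of course, is not the lookup but acquiring and curating millions of such unstated facts, which is exactly what projects like Cyc attempt at scale.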
Meanwhile, many renowned research institutions are working on tools to accelerate this process.
The degradation of humanity into manipulated, brain-hacked servants, which the historian and philosopher Yuval Noah Harari is concerned about, stands in stark contrast to the concept of Human-Centric AI as pursued by the initiatives of Stanford (HAI) and MIT (QUEST). So far, AI has contributed heavily to the expansion of knowledge from various data sources, expanding humanity's potential to advance science into new territories. However, our cultural heritage, accumulated over hundreds of thousands of years, must be part of the equation if AI is to augment humans. Reducing humanity to mathematical algorithms will end in a dead-end street, as depicted in many science-fiction movies and stories. The concept of Human-Centric AI enhances and empowers humans rather than replacing and controlling them. Core concerns are accountability, explainability, appropriate interaction concepts, and the inclusion of values, ethics, and privacy. Hence the five OECD principles are fully aligned with the goals of Human-Centric AI, supporting the following scenarios:
- Instead of replacing humans we need to focus on enhancing human capabilities allowing people to improve their own performance and successfully handle more complex tasks.
- Instead of prescriptive systems telling people what to do we need to focus on systems that empower humans to make more informed decisions and help them harness and channel their creativity.
- Instead of creating unpredictable “black box” systems we need to focus on explainable, transparent, validated, and thus trustworthy systems optimally supporting both individuals and society in dealing with the increasing complexity of a networked, globalized world.
‘Wisdom’, which in terms of the DIKW Pyramid represents the highest level of intelligence, correlates strongly with Maslow's hierarchy of needs, where ‘self-actualization’ is considered the highest level of human satisfaction. Reaching this level requires a process of growing and developing as an individual. Human-Centric AI is designed to support and augment humans on the way to this level. Adopting and implementing the OECD principles are the first steps to get there.