Picture Credit: Department of Computer Science – Swansea University
Introduction
Ethics, or moral philosophy, is a branch of philosophy that involves systematizing, defending, and recommending concepts of right and wrong conduct. As a branch of philosophy, ethics investigates the questions “What is the best way for people to live?” and “What actions are right or wrong in particular circumstances?” In practice, ethics seeks to resolve questions of human morality by defining concepts such as good and evil, right and wrong, virtue and vice, and justice and crime.
According to Wikipedia, the three major areas of study within ethics recognized today are:
- Meta-ethics, concerning the theoretical meaning and reference of moral propositions, and how their truth values (if any) can be determined;
- Normative ethics, concerning the practical means of determining a moral course of action;
- Applied ethics, concerning what a person is obligated (or permitted) to do in a specific situation or a particular domain of action.
One of ethics’ big questions is: are we born knowing the difference between good and evil, or are we taught our moral beliefs by parents and society? Philosophers and psychologists have long believed that babies are born as “blank slates,” and that it is the role of parents and society to teach babies the difference between right and wrong, good and bad, mean and nice.
A team of researchers at Yale University’s Infant Cognition Center, known as The Baby Lab, along with a growing number of other researchers, believes differently. Morality is not just something that people learn, argues Yale psychologist Paul Bloom: it is something we are all born with. The results of this research are not conclusive, and the scientific debate continues.
There is wider agreement, however, that ethical conduct can and should be taught by parents, by teachers, and possibly by coaches in corporations.
In corporate settings, ethical codes are adopted to assist members in understanding the difference between ‘right’ and ‘wrong’ and in applying that understanding to their decisions. An ethical code generally comprises documents at three levels: codes of business ethics, codes of conduct for employees, and codes of professional practice. Some of these documents are also communicated to the shareholders and the public in general. Business magazines such as Forbes periodically publish ratings of the most ethical companies, typically rating:
- Honesty, Integrity, Trustworthiness, Loyalty, Fairness, Concern and Respect for Others, Law Abidingness, Commitment to Excellence, Reputation and Accountability.
How faithfully a corporation lives up to its ethical standards is not easy to verify. What looks good on paper does not always match reality.
Traditionally, products and services are built on processes designed by human logic. The application of this logic can be fraudulent, as happened with the VW diesel exhaust system. A sensor recognizing that the car was being tested for exhaust emissions changed the combustion management to meet the emission specifications at the cost of engine performance. As soon as the car left the testbed environment, the performance specifications were met, but with exhaust pollution values far above what was legally accepted. A corporate code of ethics cannot prevent such fraud; however, once the fraud is detected, it can cause massive financial and reputational damage, as is happening at VW.
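To make the mechanism concrete, here is a purely illustrative Python sketch of the kind of conditional ‘defeat device’ logic reported in the case. The input signals, names, and mode labels are hypothetical assumptions, not VW’s actual implementation.

```python
# Illustrative sketch only: the signals and conditions are hypothetical,
# not the actual VW implementation.

def looks_like_emissions_test(steering_angle_deg, follows_test_speed_profile):
    # Emissions test rigs typically hold the steering wheel fixed while the
    # wheels run through a prescribed speed profile, a pattern rarely seen
    # in normal road driving.
    return steering_angle_deg == 0 and follows_test_speed_profile

def combustion_mode(steering_angle_deg, follows_test_speed_profile):
    if looks_like_emissions_test(steering_angle_deg, follows_test_speed_profile):
        return "low-emission mode"   # meets legal limits, reduced performance
    return "performance mode"        # full performance, higher pollution

print(combustion_mode(0, True))    # on the test bench -> "low-emission mode"
print(combustion_mode(15, False))  # on the road       -> "performance mode"
```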
Ethics and Artificially Intelligent Machines (AIMs)
The possibility of creating thinking machines such as AIMs raises a host of ethical issues, chief among them ensuring that such machines do not harm humans and other morally relevant beings. AIMs cover a wide range of applications: they can be part of a corporate IT infrastructure, part of a medical diagnostic system, or part of a service robot, for example.
AIMs generate knowledge by accessing massive data pools (big data) and executing various types of mathematical algorithms. In contrast to traditionally programmed systems, AIMs ‘learn’ through extensive computational trial-and-error iterations and provide a self-generated output as the answer to the problem defined by the input. These so-called ‘Machine Learning’ applications are used to guide human decisions or to make decisions autonomously, without human intervention.
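To illustrate what this trial-and-error learning looks like at its simplest, here is a minimal Python sketch of a perceptron that nudges its weights whenever its current guess is wrong; the toy data, labels, and learning rate are illustrative assumptions, not any specific AIM.

```python
# Toy training data: (feature1, feature2) -> label (0 or 1)
data = [((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.1, 0.4), 0), ((0.8, 0.6), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for epoch in range(100):                     # many iterations over the data
    for (x1, x2), label in data:
        # Current guess: a thresholded weighted sum of the inputs
        guess = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = label - guess                # the 'trial and error' signal
        # Self-correction: nudge the weights to reduce future error
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

print("learned weights:", weights, "bias:", bias)
```

The resulting weights are not written by a programmer; they emerge from the iterations, which is one reason the logic behind the output can be hard to trace in larger systems.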
One of the problems we face is that the logic behind the output produced by AIMs cannot always be traced or verified. For example, who or what is responsible in the case of an accident due to system error, due to design flaws, or due to proper operation outside of anticipated constraints? Finally, as AIMs become increasingly intelligent, there is legitimate concern over the potential for AIMs to manage human systems according to AI values rather than according to what was directly programmed by human designers.
As humans apply ethical standards to their decision-making processes, it seems reasonable to implement an equivalent set of ethical ground rules, or algorithms, within the software of AIMs.
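As a sketch of what such ground rules might look like in software, the following hypothetical Python example wraps an AIM’s proposed decision in a set of binding rule checks; the rules, field names, and threshold are placeholders, not an established standard.

```python
# Hypothetical ethical ground rules, encoded as binding checks on a decision.
ETHICAL_RULES = [
    lambda d: d.get("risk_to_humans", 1.0) < 0.01,            # do no harm
    lambda d: not d.get("uses_protected_attribute", False),   # no discrimination
]

def ethically_constrained(decision):
    """Pass the AIM's decision through only if every rule holds;
    otherwise defer to a human reviewer."""
    if all(rule(decision) for rule in ETHICAL_RULES):
        return decision
    return {"action": "defer_to_human", "reason": "ethical rule violated"}

proposed = {"action": "approve_loan",
            "risk_to_humans": 0.0,
            "uses_protected_attribute": False}
print(ethically_constrained(proposed))  # passes both rules unchanged
```

A hard-coded filter like this is of course only the crudest form of machine ethics, but it shows how a code of conduct could become executable rather than merely documented.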
It will become increasingly important to develop AI algorithms that are not just powerful and scalable but also, to name one of many socially important properties, transparent to inspection. When AI algorithms take on cognitive work with social dimensions, such as recognizing faces (cognitive tasks previously performed by humans), the AI algorithm has to inherit the social requirements that come with that work.
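To illustrate ‘transparent to inspection’, the sketch below, assuming scikit-learn and toy stand-in data, trains a shallow decision tree whose learned rules can be printed and audited, in contrast to an opaque model.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy stand-in data: [age, has_collateral] -> loan approved (1) or not (0)
X = [[25, 0], [40, 1], [35, 1], [22, 0]]
y = [0, 1, 1, 0]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The learned decision logic can be printed and inspected line by line.
print(export_text(tree, feature_names=["age", "has_collateral"]))
```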
Moreover, AI algorithms must be robust against manipulation. A machine vision system that scans airline luggage for bombs, for example, must be protected against human adversaries deliberately searching for exploitable flaws in the algorithm.
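One simple robustness heuristic, sketched below with a hypothetical stand-in classifier, is to re-check an answer under small random perturbations of the input and escalate to a human operator when the output is unstable; the model, noise level, and agreement threshold are illustrative assumptions.

```python
import random

def classify(x):
    # Stand-in for a real scanner model: flags readings above a threshold.
    return "suspicious" if x > 0.5 else "clear"

def robust_classify(x, trials=20, noise=0.02, agreement=0.9):
    # Re-run the classifier on slightly perturbed copies of the input.
    votes = [classify(x + random.uniform(-noise, noise)) for _ in range(trials)]
    top = max(set(votes), key=votes.count)
    if votes.count(top) / trials >= agreement:
        return top
    return "escalate to human operator"  # unstable output: possible manipulation

print(robust_classify(0.51))  # near the decision boundary -> likely escalated
print(robust_classify(0.95))  # clearly suspicious -> stable answer
```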
Responsibility, transparency, auditability, incorruptibility, predictability: all the criteria that apply to humans performing social functions must also be considered for an algorithm intended to replace human judgment in those functions.
With an ethical code properly implemented as a binding algorithm in AI software, one could argue that AIMs are less subject to fraud than current corporate practice, where ethical codes are merely documented on paper.
Nevertheless, fear is widespread that superintelligent AIMs might become uncontrollable, threatening the values of our very human existence. To address these concerns, the AI community has to enter into a dialogue so that the educated public can understand the conception and application of AIMs. On a positive note, a number of initiatives have been announced this year to investigate and enhance the ethics of AIMs:
- The recently launched IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, whose mission is to ensure that every technologist is educated, trained, and empowered to prioritize ethical considerations in the design and development of autonomous and intelligent systems.
- Carnegie Mellon University has announced that it will create a research center that focuses on the ethics of artificial intelligence. The ethics center, called the K&L Gates Endowment for Ethics and Computational Technologies, is being established at a time of growing international concern about the impact of AI technologies. The new center is being created with a $10 million gift from K&L Gates, an international law firm headquartered in Pittsburgh.
- Earlier this year, the U.S. White House held a series of workshops around the country to discuss the impact of AI, and in October the Obama administration released a report on its possible consequences.
- In September, five large technology firms – Amazon, Facebook, Google, IBM and Microsoft – created a partnership to help establish ethical guidelines for the design and deployment of AI systems.
The discussion about trust in AIMs versus trust in humans will strongly influence the proliferation of applications that can be realized with AI software. The decision on where to rely on humans and where to rely on machines will have far-reaching social and economic implications. It seems reasonable to assume that, eventually, some government-controlled ethical quality standards will become part of the AI ecosystem in order to advance the potential benefits of AIMs.