AI for Automated or Augmented Decision-Making

Posted by Peter Rudin on 29 December 2017 in Essay

Picture Credit: salesforce.com

Introduction 

In 1983, during a period of high Cold War tension, Soviet early-warning systems abruptly sounded an alert warning of five incoming nuclear missiles from the United States. A lieutenant colonel of the Soviet Air Defense Forces, Stanislav Petrov, faced a difficult decision: should he authorize a retaliatory attack? Fortunately, Petrov chose to question the system’s recommendation. Instead of approving retaliation, he judged that a real attack was unlikely based on several outside factors — one of which was the small number of “missiles” reported by the system — and, moreover, even if the attack were real, he did not want to be the one to complete the destruction of the planet. Petrov’s decision overrode the system’s recommendation, which was based on faulty sensor information.

Another scenario, quite similar in its potential for destruction, played out on October 27, 1962, during the Cuban Missile Crisis. Eleven US Navy destroyers and the aircraft carrier USS Randolph had cornered the Soviet submarine B-59 near Cuba, in international waters outside the US “quarantine” area. The crew had had no contact with Moscow for days and did not know whether World War III had already begun. Then the Americans started dropping small depth charges which, unknown to the crew, were merely meant to force the sub to surface and leave. What the Americans did not know was that the B-59 carried a nuclear torpedo that the crew was authorized to launch without clearing it with Moscow. As the depth charges intensified and temperatures onboard climbed above 45ºC, many crew members fainted from carbon dioxide poisoning. Amid this panic, Captain Savitsky decided to launch the nuclear weapon, whose warhead had roughly the power of the Hiroshima bomb. The launch had to be authorized by three officers on board. One of them, 34-year-old Vasili Arkhipov, was against launching the torpedo. He remained calm and convinced Captain Savitsky that the depth charges were signals for the submarine to surface. The sub surfaced safely and headed north, back to the Soviet Union. As in the 1983 incident, wrong information or misinterpretation of events had almost caused a nuclear war.

Both incidents illustrate the potentially devastating consequences of wrong decisions. What kind of decision-making should be delegated to intelligent machines is one of the hot topics in the AI community. The aim is to improve the quality of decision-making by reducing the associated risks, and to improve its economics.

Decision Theory

Decision theory (or the theory of choice) is the study of the reasoning underlying the choices we make. It can be broken into two branches: normative decision theory, which gives advice on how to make the best decisions given a set of uncertain beliefs and a set of values; and descriptive decision theory, which analyzes how existing, possibly irrational agents actually arrive at decisions.

Normative decision theory is concerned with identifying the best decision to make, modelling an ideal decision-maker who computes with perfect accuracy and is fully rational. The practical application of this prescriptive approach (how people ought to make decisions) is called decision analysis, and it aims to provide tools, methodologies and software (decision support systems) that help people make better decisions.
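
To make this concrete, here is a minimal sketch of expected-utility maximization, the calculation such an ideal, fully rational decision-maker is modelled as performing. The states, probabilities and payoffs below are invented purely for illustration.

```python
# A minimal sketch of normative decision-making as expected-utility
# maximization; all states, probabilities and payoffs are hypothetical.

def expected_utility(action, states, probs, utility):
    """Expected utility of an action over uncertain states of the world."""
    return sum(p * utility(action, s) for s, p in zip(states, probs))

def best_action(actions, states, probs, utility):
    """The normatively 'best' decision maximizes expected utility."""
    return max(actions, key=lambda a: expected_utility(a, states, probs, utility))

# Hypothetical example: carry an umbrella given a 30% chance of rain?
states = ["rain", "dry"]
probs = [0.3, 0.7]
payoff = {("umbrella", "rain"): 5, ("umbrella", "dry"): -1,
          ("no umbrella", "rain"): -10, ("no umbrella", "dry"): 2}

print(best_action(["umbrella", "no umbrella"], states, probs,
                  lambda a, s: payoff[(a, s)]))  # -> umbrella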

Descriptive decision theory is concerned with describing observed behaviour under the assumption that decision-making individuals behave according to some consistent rules. In recent decades there has been increasing interest in what is sometimes called “behavioural decision theory”, and this has contributed to a re-evaluation of what rational decision-making requires. The work of Maurice Allais and Daniel Ellsberg showed that human behaviour departs systematically, and sometimes substantially, from expected-utility maximization. The prospect theory of Daniel Kahneman (author of the bestseller Thinking, Fast and Slow) and Amos Tversky renewed the empirical study of economic behaviour with less emphasis on rationality. Kahneman and Tversky found three regularities in actual human decision-making: (a) “losses loom larger than gains”; (b) persons evaluate outcomes as changes relative to their present situation rather than as absolute states; and (c) the estimation of subjective probabilities is severely biased by anchoring on fixed, subjective ideas.
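
The “losses loom larger than gains” regularity can be made quantitative with the prospect-theory value function. The sketch below uses the median parameter estimates Tversky and Kahneman reported in 1992 (α = β = 0.88, λ = 2.25).

```python
# A sketch of the prospect-theory value function. Parameters are the
# median estimates from Tversky and Kahneman (1992).
ALPHA = 0.88    # diminishing sensitivity for gains
BETA = 0.88     # diminishing sensitivity for losses
LAMBDA = 2.25   # loss-aversion coefficient: losses loom larger than gains

def prospect_value(x):
    """Subjective value of an outcome x, measured relative to a reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

# A loss of 100 feels more than twice as bad as a gain of 100 feels good:
print(prospect_value(100))    # ~57.5
print(prospect_value(-100))   # ~-129.5
```

The asymmetry between the two printed values is regularity (a) in numbers; regularity (b) is reflected in x being a change relative to a reference point rather than an absolute state.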

Some decisions are difficult because of the need to consider how other people in the situation will respond to the decision that is taken. The analysis of such social decisions is more often treated under the label of game theory rather than decision theory, though it involves the same mathematical methods. From the standpoint of game theory, most of the problems treated in decision theory are one-player games (or games in which the one player is viewed as playing against an impersonal background situation).

AI and decision-making

Artificial intelligence is more than the simple automation of existing processes: it involves, to a greater or lesser degree, setting an outcome and letting a computer program find its own way to get there. It is this creative capacity that gives artificial intelligence its power. But it also challenges some of our assumptions about the role of computers and our relationship to them. Current AI’s potential contribution to decision-making rests on machine learning, utilizing (a) the ability to process huge amounts of data with neural networks and (b) the ability to extract knowledge from this data that matches or surpasses existing human knowledge.

Machine learning approaches, however, are not restricted to producing a single prediction from given inputs. Many algorithms produce probabilistic outputs, offering a range of likely predictions with associated estimates of uncertainty. In more complex machine learning systems (such as deep learning) there are many layers of statistical operations between the input and output data. These operations have been shaped by algorithms rather than designed by a person. Because of this, not only is the output probabilistic, as with simpler algorithms, but the process that led to it might not be explainable in human-understandable terms.
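
As an illustration of what a probabilistic output looks like, the sketch below turns a network’s raw scores into a probability distribution over classes and reports the predictive entropy as a simple uncertainty estimate; the scores themselves are invented.

```python
import numpy as np

def softmax(logits):
    """Convert raw network scores into a probability distribution."""
    z = np.exp(logits - np.max(logits))  # shift for numerical stability
    return z / z.sum()

def predictive_entropy(probs):
    """Higher entropy means the model is less certain of its prediction."""
    return -np.sum(probs * np.log(probs + 1e-12))

logits = np.array([2.1, 0.4, 0.3])  # hypothetical raw outputs for 3 classes
probs = softmax(logits)

print(probs)                      # a range of likely predictions, not one answer
print(predictive_entropy(probs))  # the associated estimate of uncertainty
```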

There are two fundamentally different approaches to AI-supported decision-making: (a) automated decisions, made without human intervention, and (b) augmented decisions, where humans remain the deciding authority for whether a recommendation is implemented.

Automated decisions

While the decision-making ability of AI has the potential to be of huge benefit to humans, relieving us of the burden of making certain decisions ourselves, it can also be economically beneficial, reducing the manpower involved in decision-making. For example, pattern recognition – that is, learning from real-time data as a basis for decision-making – has significant potential in manufacturing applications such as process control, quality tracking or robot monitoring.

The key to automated decision-making is the quality and trustworthiness of data, with little chance of human manipulation. Consequently, any real-time data-monitoring and/or sensing application is a candidate for automated decision-making. Automated (not augmented) driving and precision farming are typical examples. We are all aware that systems can produce erroneous results due to hardware or software failures, as happened in the 1983 missile incident that almost caused a nuclear war. A ‘kill-switch’ that allows human intervention to override an automated decision is therefore required. As experts point out, intelligent machines might eventually outsmart humans and disable the ‘kill-switch’. The current ethics debate focuses on the question of how to avoid this ‘science-fiction’ scenario. The issues are complex. For example, proponents of automated driving make the case that the potential accident rate due to hardware/software failure is significantly lower than the rate of accidents caused by human driving. Full deployment of automated driving would be required to substantiate this claim.
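
As a sketch of what such an override might look like in software, the loop below executes automated decisions until a human operator sets a kill-switch flag; all names are hypothetical placeholders, not a real control API.

```python
import threading

# A sketch of an automated decision loop with a human 'kill-switch'.
# read_sensor, decide and act are hypothetical placeholders.
kill_switch = threading.Event()  # set by a human operator to halt automation

def automated_loop(read_sensor, decide, act):
    while not kill_switch.is_set():
        data = read_sensor()      # trustworthy real-time data is assumed
        decision = decide(data)   # model output, no human in the loop
        act(decision)             # executed immediately

# A human override from another thread is a single call:
#   kill_switch.set()
```

The ethics debate noted above concerns precisely this flag: a machine able to prevent or undo the set() call would render human oversight meaningless.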

Augmented decisions

In augmented decision-making, humans take on the role of the ‘kill-switch’: they decide whether a machine-based recommendation should be executed, modified or abandoned. Consequently, augmented decision-making requires human expertise and knowledge to successfully manage intelligent machines. As a result, combining AI with human intelligence should improve the quality of decision-making. According to descriptive decision theory, however, the human part of the decision-making process can include irrational behaviour, which might or might not improve the quality of the decision.
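
A minimal sketch of this execute/modify/abandon pattern, with hypothetical names, might look as follows.

```python
# A sketch of augmented decision-making: the machine recommends,
# the human remains the deciding authority. All names are hypothetical.

def augmented_decide(recommendation, human_review):
    """human_review returns a verdict plus an optional replacement decision."""
    verdict, replacement = human_review(recommendation)
    if verdict == "execute":
        return recommendation    # human accepts the machine's proposal
    if verdict == "modify":
        return replacement       # human adjusts the proposal
    return None                  # 'abandon': no action is taken

# Hypothetical reviewer who always modifies the recommendation:
print(augmented_decide("plan A", lambda r: ("modify", r + " (adjusted)")))
```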

For example, when a system provides analytical support to doctors, its response for augmented decision-making is usually coupled with a probability rating of the possible causes of a patient’s health problem. As this recommendation is based on historic medical records, the quality of those records is as critical as the transparency of the algorithms delivering the recommendation. Despite the enormous effort IBM has made with its Watson cognitive computing service to enter the health market, there is still an ongoing debate within the medical community about the value of augmented decision-making.

Cleaning up available data to remove biases or ‘fake’ information, coupled with transparency about how machine-learning algorithms arrive at a specific conclusion, is also part of the ongoing ethics debate regarding the use of AI in decision-making.

Collective intelligence represents another way to improve the augmented decision-making process. In the 1962 Cuban Missile Crisis incident, a policy of collective, unanimous decision-making prevented a nuclear disaster: one of the three submarine officers interpreted the available information (the detonation of depth charges) differently and disagreed with launching the atomic warhead.
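
The unanimity rule that saved the B-59 can be stated in a few lines: a high-risk action proceeds only if every authorized decider independently approves. The deciders in the sketch below are, of course, hypothetical stand-ins for the three officers.

```python
# A sketch of collective, unanimous decision-making: a high-risk action
# proceeds only if every authorized decider approves the evidence.

def unanimous_approval(deciders, evidence):
    return all(decider(evidence) for decider in deciders)

officers = [
    lambda e: True,   # hypothetical: interprets the charges as an attack
    lambda e: True,   # hypothetical: approves the launch
    lambda e: False,  # hypothetical: reads the charges as a signal to surface
]
print(unanimous_approval(officers, evidence="depth charges"))  # -> False
```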

Conclusion

As the Singularity arrives, AI as a common utility will provide as much IQ as we want, but no more than we need. AI will enliven inert objects, much as electricity did more than a century ago. Everything that we formerly electrified will now be enhanced with cognition and connected through IoT (Internet of Things) networks. This new utilitarian AI will augment us individually as people (deepening our memory, speeding our cognition) and collectively as a species. In a corporate and organizational context, assessing the risks will guide us in deciding whether to apply augmented or automated decision-making to a specific problem. Generally, we can expect AI-supported decision-making to lead to better decisions, provided that the data used by machine learning is trustworthy and ethical standards are met. Defining these standards and implementing controls to ensure adherence is fundamental to reaping the potential benefits.
