THINK sign that IBM distributed to every employee from 1948 on, to be placed on the desk as a symbol of the corporate culture.
Over the last 40 years several thought-models have been created to describe how humans make decisions. Common to most of them is the view that humans do not just follow rationality to make decisions. In psychology, decision-making is regarded as the cognitive thought process resulting in the selection of a belief or a course of action among several alternative possibilities. Some decisions are difficult because of the need to take into account how other people in the situation will respond to the decision that is taken. Other areas of decision theory are concerned with decisions that are difficult simply because of their complexity, or the complexity of the organization that has to make them.
Behavioral economics, sometimes also referred to as Neuro Economics (NE), uses psychological experimentation to develop theories about human decision-making and has identified a range of biases arising from the way people think and feel. NE is trying to change the way economists think about people’s perceptions of value and expressed preferences. According to NE, people are not always self-interested, benefit-maximizing, and cost-minimizing individuals with stable preferences—our thinking is subject to insufficient knowledge, feedback, and processing capability, often involves uncertainty, and is affected by the context in which we make decisions. Most of our choices are not the result of careful deliberation. We are influenced by readily available information in memory, automatically generated affect, and salient information in the environment. We also live in the moment: we tend to resist change, are poor predictors of future behavior, are subject to distorted memory, and are affected by physiological and emotional states. Finally, we are social animals with social preferences, such as those expressed in trust, reciprocity, and fairness; we are susceptible to social norms and a need for self-consistency.
‘Thinking, Fast and Slow’, the best-selling 2011 book by psychologist and Nobel laureate in Economics Daniel Kahneman, summarizes research that he conducted over decades, often in collaboration with Amos Tversky.
Kahneman uses a dual-system theoretical framework (which established a foothold in cognitive and social psychology of the 1990s) to explain why our judgments and decisions often do not conform to formal notions of rationality:
- System 1 consists of thinking processes that are intuitive, automatic, experience-based, and relatively unconscious.
- System 2 is more reflective, controlled, deliberative, and analytical.
- Judgments influenced by System 1 are rooted in impressions arising from mental content that is easily accessible.
- System 2, on the other hand, monitors or provides a check on mental operations and overt behavior—often unsuccessfully.
Kahneman’s research covers a number of experiments which purport to highlight the differences between these two thought systems and how they arrive at different results even given the same inputs. Terms and concepts include coherence, attention, laziness, association, jumping to conclusions, and how one forms judgments. The System 1 vs. System 2 debate probes the reasoning, or lack thereof, behind human decision-making, with big implications for market research, for example.
With the progress of Machine Learning, decision-making is entering a new era.
One of the most important conversations in the field of Machine Learning is the debate surrounding the use of predictive methods to influence human decisions. Broadly speaking, Machine Learning is the practice of programming computers to be more self-sufficient and to create systems that can operate on their own with minimal human guidance. This extends to anything from data collection, to data analytics, to building decision trees.
Machine Learning processes and predictive methods can, hypothetically, make decisions for humans, but should they? And if we allow machine learning techniques to begin informing our decisions, or making decisions for us, where should we draw the line?
How can we leverage the mathematical science of algorithmic decision-making to enhance the abilities of the intuitive human cognitive decision system? Can the mathematical framework of engineering decision theory capture the salient characteristics of human decision behavior that do not fit unrealistic models of human rationality? Can our understanding of social and behavioral influences on human decision-making improve existing and future algorithmic decision systems that interact with people?
Answers to these questions will have deep implications in a variety of fields, such as data-driven and machine-aided decision-making systems; economic and financial forecasting that incorporates into algorithmic data analytics an understanding of the cognitive biases of investors and consumers; marketing and politics in the era of social networks; high-performance computer support for first responders and emergency operations combining statistical analysis based on machine learning with “soft” inputs provided by experts; and data representation and visualization for enhancing human decision-making.
There is a fundamental difference between the science of decision-making in engineering and the manner in which cognitive and social psychologists model and attempt to understand human decision-making. In engineering, decision systems are implemented in hardware and are designed through an optimization or game-theoretic framework in which an optimal policy attempts to maximize well-defined gains or minimize well-defined losses. These systems are not subject to the numerous cognitive biases that can plague human decision-makers as studied by behavioral economists.
Behavioral economics recognizes that people cannot instinctively understand the nonlinear aspect of probability. Machine Learning, however, is proving to be a very good methodology for automating data analysis, especially using probability theory. The probabilistic approach to Machine Learning is closely related to the field of statistics, but differs ever so slightly in terms of its emphasis and terminology. Probability theory can be applied to any problem involving uncertainty.
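The nonlinearity people misjudge shows up clearly in Bayesian updating, the kind of probabilistic calculation that underpins much of Machine Learning. The sketch below (with made-up numbers for prevalence, sensitivity, and false-positive rate, chosen purely for illustration) applies Bayes’ rule to a diagnostic test—a textbook case where human intuition typically overestimates the answer by an order of magnitude:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' rule."""
    # Total probability of a positive test: true positives + false positives.
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Hypothetical numbers: 1% prevalence, 90% sensitivity, 9% false-positive rate.
p = posterior(prior=0.01, sensitivity=0.90, false_positive_rate=0.09)
print(round(p, 3))  # ~0.092 -- far lower than most people intuit
```

A machine computes this exactly every time; System 1 typically anchors on the 90% sensitivity and ignores the base rate.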
One way a machine can understand you and your preferences is to observe how you act. In an approach called inverse reinforcement learning, a Machine Learning system that sees the decisions you make every day can begin to understand something about you. It observes how you spend your time, what you spend money on, how you communicate, whom you like to talk to and whom you avoid, and so on. By observing your behavior, the Machine Learning system can begin to build a model of your preferences.
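A minimal sketch of this idea, under simplifying assumptions: suppose we observe a person repeatedly choosing between two options, each described by a time cost and a price (the observations below are entirely made up). Assuming a softmax (logistic) choice model—a common simplification; real inverse reinforcement learning recovers full reward functions from sequences of actions—we can infer the hidden preference weights that best explain the observed choices:

```python
import math

# Each observation: (option_a, option_b, index_of_chosen_option),
# where an option is (time_cost_minutes, price). Hypothetical data.
observations = [
    ((30, 5.0), (10, 9.0), 1),   # chose the faster, pricier option
    ((45, 4.0), (20, 8.5), 1),
    ((15, 6.0), (12, 7.0), 0),   # chose the cheaper option
]

def choice_loglik(w_time, w_price):
    """Log-likelihood of the choices under utility u = -w_time*t - w_price*p
    and a softmax (logistic) choice rule."""
    ll = 0.0
    for a, b, chosen in observations:
        ua = -w_time * a[0] - w_price * a[1]
        ub = -w_time * b[0] - w_price * b[1]
        p_b = 1.0 / (1.0 + math.exp(ua - ub))  # P(person chooses option b)
        ll += math.log(p_b if chosen == 1 else 1.0 - p_b)
    return ll

# Grid-search the preference weights that best explain the behavior.
best = max(
    ((wt, wp) for wt in [0.01, 0.05, 0.1, 0.2] for wp in [0.1, 0.5, 1.0]),
    key=lambda w: choice_loglik(*w),
)
print("inferred weights (time, price):", best)
```

The inferred weights are a crude model of the person’s preferences: with more observations and richer features, the same principle scales to the kind of preference models the text describes.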
Daniel Kahneman’s book ‘Thinking, Fast and Slow’ became so popular because the distinction between two modes of thinking (System 1 and System 2) feels intuitively correct. We all know the advice ‘think twice before you make a decision’ or ‘sleep on it before you act’. Extending Kahneman’s dual mode of decision-making with Machine Learning, the following hypothesis is up for discussion:
- System 1 consists of human thinking processes that are intuitive, automatic, experience-based, and relatively unconscious (Kahneman)
- System 2 consists of human thinking processes that are more reflective, controlled, deliberative, and analytical (Kahneman)
- System 3 consists of Machine Learning (Thinking) processes that enhance and expand our own awareness of personal, economic, social, and environmental issues
One could argue that System 3 might eventually eliminate System 2. This discussion is important because it relates to questions of trust and ethics, both of which are vital parts of our social values.
Do we really want to leave our value judgements to machines or do we improve our decision making by integrating all three systems? To differentiate we might want to consider two types of decisions to be made: those where we leave the decision up to the machines and those where we want to maintain an integrated ‘man-machine’ view controlled by humans.
As the Singularity approaches, these issues need constant review, as advances in and applications of Machine Learning will increasingly impact our personal and social existence.