Picture Credit: upfrontanalytics.com
Until about 20 years ago, the ‘Homo Oeconomicus’ prevailed as the standard model in economic theory. This model assumes that decision-making follows strictly rational economic rules. As behavioral psychologists began to study the decision-making process, a new academic discipline called Neuroeconomics emerged. It explores how economic behavior can shape our understanding of the brain, and how neuroscientific discoveries can guide economic models.
Daniel Kahneman, one of the first researchers engaged in Neuroeconomics, was awarded the 2002 Nobel Prize in Economic Sciences for his empirical findings challenging the assumption of human rationality prevailing in modern economic theory. His book Thinking, Fast and Slow, which summarizes much of his research, was published in 2011 and became a best seller. Emotion, cognition, intuition, consciousness, creativity and more are all engaged in the decision-making process.
Since then, efforts to understand the functioning of the human brain, including its role in decision-making, have increased enormously. Partially financed by government-sponsored initiatives such as the European Commission’s Human Brain Project or the US BRAIN Initiative, over 100,000 researchers worldwide are working in brain research. Many of these efforts focus on brain health issues such as Alzheimer’s disease or epilepsy.
The field of Artificial Intelligence (AI) benefits from this research, for example in the development of computational models that simulate brain functions such as learning, an approach also referred to as ‘deep learning’. It all started when machine learning, recognized as a subfield of AI, began to flourish in the 1990s, applying methods and models borrowed from statistics and probability theory.
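To make the idea tangible, here is a minimal sketch of the kind of statistical learning described above: a tiny two-layer neural network trained by gradient descent to learn the XOR function. It is a textbook illustration, not any particular research system, and all parameter choices in it are made purely for the example.

```python
# Minimal sketch: a two-layer neural network learning XOR by gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR truth table (inputs X, target outputs y).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases for two layers of "neurons".
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: compute the network's prediction.
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    p = sigmoid(h @ W2 + b2)      # output prediction

    # Backward pass: propagate the prediction error to the parameters.
    d_out = (p - y) * p * (1 - p)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hid
    b1 -= 0.5 * d_hid.sum(axis=0)

print(np.round(p, 2))  # typically converges toward [[0], [1], [1], [0]]
```

The network is never told the rule behind XOR; it infers it from examples, which is the essence of the statistical learning described above.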
Combined with the continuing exponential growth of computing performance and internet access to huge, mostly unstructured data sets, also referred to as ‘Big Data’, we have reached the point where systems are capable of interpreting textual content and proposing answers to the questions raised.
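As a toy illustration of this capability (and emphatically not how a production system works), the hypothetical sketch below ‘answers’ a question by ranking text passages on simple word overlap; real systems build far richer statistical language models on the same basic idea of matching a query against unstructured text.

```python
# Toy sketch: answer a question by ranking text passages on word overlap.
# Real question-answering systems are vastly more sophisticated; this only
# illustrates the basic idea of matching a query against unstructured text.
def tokenize(text: str) -> set[str]:
    return {w.strip(".,?!").lower() for w in text.split()}

def best_passage(question: str, passages: list[str]) -> str:
    q = tokenize(question)
    # Score each passage by how many of the question's words it shares.
    return max(passages, key=lambda p: len(q & tokenize(p)))

passages = [
    "Daniel Kahneman received the Nobel Prize in Economic Sciences in 2002.",
    "IBM's Watson defeated two Jeopardy champions in 2011.",
    "Machine learning began to flourish in the 1990s.",
]
print(best_passage("When did Watson win Jeopardy?", passages))
```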
The term ‘Cognitive Computing’ defines a new era of computing that combines AI with big data analytics. It became popular in 2011, when IBM’s Watson computer system defeated the two leading human Jeopardy champions in a widely broadcast TV show. Jeopardy is a televised quiz contest in which contestants answer general-knowledge questions across a wide range of categories.
From a neuroscience point of view, cognition is far more complex than the functionality currently offered by IBM’s Watson. From a computer science point of view, however, cognitive computing represents a fundamental change in how business problems are solved.
IBM offers Watson as a service to the health-care industry to provide doctors with a rapid assessment of possible illnesses and potential treatments based on the symptoms described by the patient. Huge collections of patient histories and medical research papers are processed to identify the most probable cause of the illness. The result is presented within minutes, far outpacing our human capacity to produce a similar analysis. Other industries such as finance, retail, and telecom services are also moving to implement cognitive computing applications.
Cognitive Computing can be a very powerful tool to improve our decision-making process and to support leaders in charting a successful course through a complex economic environment. In addition, Cognitive Computing will reduce the effort required to perform the analytic and communication tasks of daily business operations.
As the Singularity comes closer, we can assume that AI algorithms and our capacity to model the human brain will improve dramatically. The fundamental question we have to ask ourselves is to what extent we are willing to leave decision-making and its execution to Artificial Intelligence Machines (AIMs). Do we want to trust the result presented by the system? What happens if our intuition signals a different decision?
Assisted driving as developed by Tesla is a real-time application of cognitive computing. With assisted driving, the driver remains responsible for the final decision, for example to prevent an accident. The question is how much the driver trusts the algorithms of the car’s computer system: trusting them enough to relax and pay little attention to the traffic situation risks overlooking the potential for a serious accident.
In contrast, fully autonomous driving as developed by Google works without a driver. One could ask what the system decides if, in theory, two accident conditions occur simultaneously and there is no third option that prevents an accident altogether. How can the system assess which accident causes less damage or grief to the individuals involved? The sketch below makes this dilemma concrete.
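One way to see why this question is so hard is to imagine the control software reduced to a bare cost-minimizing rule. The sketch is purely hypothetical: the harm scores and probabilities are invented placeholders, and the absence of any defensible way to assign such numbers in reality is precisely the dilemma raised above.

```python
# Purely hypothetical sketch of an expected-harm decision rule.
# The harm scores and probabilities below are invented placeholders;
# assigning them in reality is exactly the unresolved ethical problem.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    p_collision: float   # estimated probability of a collision
    harm_score: float    # hypothetical severity score (who defines this?)

def choose(options: list[Option]) -> Option:
    # Pick the maneuver with the lowest expected harm.
    return min(options, key=lambda o: o.p_collision * o.harm_score)

# Two simultaneous accident conditions, no harmless third option:
options = [
    Option("swerve left", p_collision=0.9, harm_score=7.0),
    Option("brake straight", p_collision=1.0, harm_score=5.0),
]
print(choose(options).name)  # -> "brake straight", under these made-up numbers
```

The arithmetic is trivial; the ethics of filling in the numbers is not, and that is where the real debate lies.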
Despite these open issues, government authorities in the U.S. are encouraging both development efforts, assisted and autonomous driving, as their analysis shows that together they will eventually reduce traffic accidents.