From ‘Noise’ to better Decision-Making, an IT-Paradigm Shift?

Posted by Peter Rudin on 14. January 2022 in Essay

Brain and Decisions


‘Noise: A Flaw in Human Judgment’, co-authored by Nobel Laureate Daniel Kahneman, explores why people make flawed judgments and how to make better ones. Imagine that two doctors in the same city give different diagnoses to identical patients, or that two judges in the same courthouse give markedly different sentences to people who have committed the same crime. Suppose that when a company is handling customer complaints, the resolution depends on who happens to answer the phone. Now imagine that the same doctor, the same judge or the same customer service agent makes different decisions depending on whether it is morning or afternoon, or Monday rather than Wednesday. These are examples of noise: variability in judgments that should be identical. Daniel Kahneman, Olivier Sibony and Cass R. Sunstein show the detrimental effects of noise in many corporate scenarios. Wherever there is judgment, there is noise, and usually more of it than one thinks. Yet, most of the time, individuals and organizations alike are unaware of it; they neglect noise. Their best-selling book explains how and why humans are so susceptible to noise and what we can do about it. The authors believe that neither professionals nor their managers can make a good guess about the reliability of their judgments. The only way to get an accurate assessment is to conduct a noise audit. And at least in some cases, the problem will be severe enough to require action.
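The core idea of a noise audit is simple: give several professionals the same cases and measure how far their judgments spread. The following sketch uses invented numbers and a deliberately simplified metric (the book's own methodology decomposes noise further, e.g. into level noise and pattern noise):

```python
import statistics

# Hypothetical audit: five underwriters each quote a premium for the
# same three cases. Identical cases should yield identical judgments;
# any spread within a row is noise.
judgments = {
    "case_A": [9500, 16000, 12000, 8000, 13000],
    "case_B": [4200, 6100, 5000, 3900, 7200],
    "case_C": [21000, 25000, 18500, 30000, 22000],
}

for case, quotes in judgments.items():
    mean = statistics.mean(quotes)
    # Relative spread: standard deviation as a fraction of the mean quote.
    noise_index = statistics.stdev(quotes) / mean
    print(f"{case}: mean={mean:.0f}, noise index={noise_index:.2f}")
```

Note that no "true" answer is needed: the audit exposes disagreement among experts on identical inputs, which is exactly the variability the authors argue organizations routinely underestimate.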

Noise and Brain Functionality

Neural variability provides the basis for how we perceive the world and react to it. The brain’s neural activity is irregular, changing from one moment to the next. So far this apparent ‘noise’ has been thought to be due to random natural variations or measurement errors. Now researchers at the Max Planck Institute for Human Development have shown that this neural variability may provide a unique window into brain functionality (see ‘How the “noise” in our brain influences our behavior’, Max-Planck-Gesellschaft). The study, published in the scientific journal Neuron, highlights what is now substantial evidence supporting the idea that neural variability represents a key, yet under-valued dimension for understanding brain-behavior relationships. “Animals and humans can indeed adapt successfully to environmental demands, but how can such behavioral success emerge in the face of neural variability? We argue that neuroscientists must grapple with the possibility that behavior may emerge because of neural variability, not in spite of it,” says Leonhard Waschke, first author of the article. Strikingly, such irregularities in neural activity appear regardless of whether single neurons or entire brain regions are observed. Brains simply always appear ‘noisy’, prompting the question of what such moment-to-moment neural variability may reveal about brain function. A recent study published in the journal eLife exemplifies the direct link between neural variability and behavior. Participants’ brain activity was measured via electroencephalography (EEG) while they responded to faint visual targets. When people were told to detect as many visual targets as possible, neural variability generally increased, whereas it was downregulated when participants were asked to avoid mistakes. Crucially, those who were able to adapt their neural variability to these kinds of tasks performed better.
The better a brain can regulate its ‘noise,’ the better it can process unknown information and react to it. In the next phases of their research, the group plans to examine whether neural variability and behavior can be optimized through brain stimulation, behavioral training or medication.

Improve Decision Making: Differentiate between Noise and Bias

It has long been known that predictions and decisions generated by simple statistical formulas are often more accurate than those made by experts, even when the experts have access to more information than the formulas and their algorithms use. The key advantage of algorithms is that they are noise-free: unlike humans, an algorithm will always return the same output for any given input. In contrast, errors in judgment and decision making are typically caused by social biases, like the stereotyping of minorities, or cognitive biases, such as overconfidence and unfounded optimism. The variability and complexity of noise, however, induces a type of error which is much harder to pinpoint. One reason the problem of noise is invisible is that people do not go through life imagining plausible alternatives to every judgment they make. It is obviously useful to an organization to differentiate between bias and noise in the decisions made by its employees, but collecting that information is not easy. A major problem is that the consequences of decisions might not be known until far in the future, if at all. Loan officers, for example, frequently must wait several years to see how the loans they approved worked out, and they almost never know what happens to an applicant they reject. Moreover, companies are confronted with the phenomenon that experienced professionals tend to have high confidence in the accuracy of their own judgments while, at the same time, they also have high regard for their colleagues’ intelligence. According to research published by the Harvard Business Review in 2016, experienced professionals expect others’ judgments to be much closer to their own than they actually are. This inevitably leads to an overestimation of organizational agreement, particularly where judgments are so skilled that they are intuitive. Hence, the common assumption that collaboration among experts improves decision-making is likely to be false.
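The bias/noise distinction maps onto the standard decomposition of mean squared error: MSE equals squared bias (the shared, directional error) plus variance (the scatter across judges). A minimal sketch with invented loan-loss forecasts illustrates the split; the scenario and numbers are hypothetical:

```python
import statistics

def error_components(judgments, true_value):
    """Split judgment error into bias (systematic shift) and
    noise (variability around the judges' own average)."""
    errors = [j - true_value for j in judgments]
    bias = statistics.mean(errors)        # shared, directional error
    noise = statistics.pstdev(errors)     # scatter across judges
    mse = statistics.mean(e * e for e in errors)
    # Decomposition identity: mse == bias**2 + noise**2
    return bias, noise, mse

# Hypothetical loan-loss forecasts (in %) for the same applicant;
# assume the realized loss later turned out to be 5.0.
forecasts = [6.0, 7.5, 4.0, 8.0, 6.5]
bias, noise, mse = error_components(forecasts, true_value=5.0)
print(f"bias={bias:.2f}, noise={noise:.2f}, mse={mse:.2f}")
```

The decomposition also shows why a noise-free algorithm helps even when it is biased: eliminating the variance term removes its entire contribution to the error, and a constant bias is at least measurable and correctable.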

Solving the Problem of Data Bias at the Edge

The problem of noise is a behavioral as well as a brain-functionality issue. To solve the behavioral problem in an organisational context, educated leadership needs to address these challenges, as is customary with any new technology coming to market due to its cost-saving potential. Reducing bias in data to improve decision-making, however, favours the application of intelligent systems, or as Daniel Kahneman puts it: ‘Clearly AI Is Going To Win’.

New insights about brain functionality improve the computational models that support decision-making. As today’s deep-learning algorithms and neural-network applications are largely dependent on the computational capacity of a few high-capacity data centers, there is a growing trend to decentralise. The hardware and software resources and associated implementation tools to support edge computing are coming to market. ‘Edge’ refers to the computational capacity of devices such as sensors or microchips, capable of processing simple, localized tasks. Following an era of centralization built upon a few high-performance data centers, there is now a countervailing force in favour of increased decentralization. Edge computing is still in its very early stages, but it has moved beyond the theoretical and into the real. The cloud as we know it is only one or two decades old. Given the dynamics of scientific and technological progress, it will not be long before the edge leaves a big mark on the computing landscape.

Edge complements Cloud-Computing

Edge computing complements deep learning, with its huge repository of knowledge and data stored externally in the ‘cloud’, by accelerating the decision-making process in time-critical situations. In the case of vehicle automation, for example, sensors provide a stream of real-time data for time-critical decisions such as applying the brakes, adjusting the speed or alerting in case of driver fatigue. The same sensor data is streamed to the cloud to execute longer-term pattern analysis that can alert the owner to urgently needed repairs or warn of unexpected road construction ahead, based on information from cars that have previously recorded its location via the cloud. Edge computing is a paradigm that brings computation and data storage closer to where it is needed. Hence, edge computing reduces network traffic going to and from the cloud (what some are calling ‘cloud-offload’) as a way to extract value from data where it is generated. This requires access to a new kind of infrastructure, built upon a hybrid cloud-edge concept and much more geographically distributed than the few hyperscale data centers that comprise the cloud today. This infrastructure is becoming available, and it is likely to evolve in phases, with each phase extending the edge’s reach through a wider and wider geographic footprint. For example, AWS (Amazon Web Services) has decentralised its data centers across 22 geographic regions. An AWS customer serving users in both North America and Europe might run its application in both the Northern California region and the Frankfurt region, for instance. Going from one region to multiple regions can deliver a big reduction in latency, and for a large set of applications this will be all that is needed to deliver a good user experience.
Beyond solving this latency problem, however, an entirely new generation of real-time sensors provides opportunities for capturing data at the source, potentially free of bias, calling for a massive increase in internet bandwidth and the implementation of bullet-proof security procedures. The growth of devices equipped with intelligent sensors, collectively dubbed the ‘Internet of Things (IoT)’, such as personal health monitors or supervisory equipment installed at factory sites, is outpacing the growth of global individual users. In terms of improving decision-making, revamping the existing IT infrastructure sets the stage for a highly creative design process, throwing overboard many ‘conventional’ techniques introduced only a decade or two ago. The design of a hybrid cloud-edge system has to be tailored to a corporation’s specific product and service portfolio with an interdisciplinary mindset, because human as well as machine resources need to be considered.
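The hybrid split in the vehicle example above, react locally, analyse centrally, can be sketched as a simple event handler. Thresholds, field names and functions here are hypothetical; a real automotive stack would use dedicated real-time middleware rather than plain Python:

```python
from collections import deque

BRAKE_DISTANCE_M = 8.0        # hypothetical threshold for emergency braking

cloud_upload_queue = deque()  # batched, non-time-critical path to the cloud

def apply_brakes():
    # Stand-in for the actuator command issued on the device itself.
    print("brakes applied")

def on_sensor_reading(reading):
    """Edge path: decide immediately, without a network round trip."""
    if reading["obstacle_distance_m"] < BRAKE_DISTANCE_M:
        apply_brakes()        # time-critical, handled locally
    # Cloud path: the same reading is queued for long-term pattern
    # analysis (wear prediction, road-condition mapping).
    cloud_upload_queue.append(reading)

on_sensor_reading({"obstacle_distance_m": 5.2, "speed_kmh": 40})
```

The design point is that the braking decision never waits on the network: the cloud sees every reading eventually, but only the edge path sits on the critical timing loop.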

Conclusion: From Daniel Kahneman’s Noise Audit to the Corporate Decision Inventory

Following Kahneman’s own view that AI is going to win, a corporate decision inventory and the corresponding information required for reaching optimal decisions seems to be the ultimate path towards better decision-making. The winners will be those that are able to implement new hybrid cloud-edge IT-infrastructures efficiently, making use of rapid design principles such as testing different options before going live. This new wave of disruptive technology will be as significant as the drivers of the current industrial revolution, reducing the high cost and negative long-term consequences associated with wrong decisions. 

