Bayesian Analysis
Picture credit: nytimes.com
Certainty is something that is known to be true without any doubt and can be proven or verified. The statement ‘2+2=4’, for example, is certain. In contrast, Bayes’ theorem of probability can help us determine the degree of certainty we assign to a claim based on the available evidence. While the theorem can be useful for making statistical predictions, it may not be applicable in situations where moral considerations are important. As neuroscience continues to advance, however, one can raise the question of whether the functionality of the human brain follows Bayesian principles.
Defining Certainty in AI-Systems
Certainty describes how accurately an AI system can predict patterns or trends based on the input provided to it. The result of an inquiry will depend on multiple factors, such as the programming language used, the type of data being fed to the system and the outcome expected. The certainty factor (CF) is an AI concept first proposed by Edward Shortliffe and Bruce Buchanan in the 1970s for the MYCIN expert system; the related idea of graded, non-binary truth goes back to Lotfi Zadeh’s fuzzy set theory of 1965. The factor describes the range of certainty of an AI system’s predictions: the higher the CF, the greater the confidence that the system produces accurate results. A CF can be calibrated by comparing the performance of systems that use similar algorithms and expert-supplied domain knowledge against the reliability scores obtained from actual feedback and data analysis of similar algorithmic models. It is an important metric for businesses before enacting decisions, given the environment the organisation is confronted with. Defining certainty is critical for assessing the potential risks and benefits of AI systems, as data analytics may show a high level of uncertainty due to their novelty and the short operating time available for validating their accuracy. Robots, for example, that work in close proximity to humans or other robots must be able to assess their environment precisely and reliably so they can move around safely without collision, identifying potential dangers and thus enhancing the collaboration between humans and machines. Hence, a special type of certainty factor might be established based on the robot’s position relative to nearby objects, allowing it to plan 3D motion paths more accurately.
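To make the idea concrete: certainty factors in the MYCIN tradition range from −1 (complete disbelief) to +1 (complete belief), and independent pieces of supporting evidence reinforce each other when combined. The sketch below (illustrative Python with made-up values, not the code of any particular production system) shows the standard combination rule:

```python
def combine_cf(cf1, cf2):
    """Combine two certainty factors, MYCIN-style.

    Both CFs are assumed to lie in [-1.0, 1.0], where +1.0 means
    complete belief and -1.0 complete disbelief.
    """
    if cf1 >= 0 and cf2 >= 0:          # both supporting: reinforce
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:            # both opposing: reinforce negatively
        return cf1 + cf2 * (1 + cf1)
    # Conflicting evidence: partial cancellation
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two independent pieces of evidence, each giving 0.6 confidence,
# reinforce each other to a combined confidence of 0.84.
print(round(combine_cf(0.6, 0.6), 2))  # 0.84
```

Note that, unlike a probability, a combined CF never exceeds 1 no matter how many supporting observations are added, which is what makes the rule attractive for accumulating expert-rule evidence.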
Certainty in Philosophical Terms
For many centuries certainty has been a much-discussed topic among philosophers. Some consider mathematical knowledge to be the ideal at which philosophy should aim. Descartes, for example, tried to establish absolute certainty in his famous Cogito, ergo sum: ‘I think, therefore I am.’ He planted the seed for a new philosophy, in contrast to the Scholastics’ method with its reliance on sensation as the source of all knowledge. Based on a rational investigation of every relevant problem, examined from opposing points of view to reach an intelligent scientific solution, he proposed a concept that would still be consistent with the accepted rules of public authorities. In addition, he wanted to replace the Scholastics’ causal model of scientific explanation with a more modern, mechanistic one. This concept assumes the separation of the mind from the body (dualism) and a mechanistic model of physics based on geometry. As with knowledge, it is difficult to provide an uncontentious analysis of certainty, for several reasons. One is that there are different kinds of certainty which are not easy to reconcile. Another is that the full value of certainty is surprisingly hard to capture. A third is that certainty has two dimensions: a belief can be certain at a particular moment, or it can remain certain over some greater length of time within a system of multiple accumulated beliefs. A belief is psychologically certain when an individual is totally convinced of its accuracy and is not willing to abandon it.
According to the essay Probability Theory Definition | DeepAI, probability theory is the branch of mathematics that deals with the analysis of random phenomena. Probabilities are typically expressed as fractions or percentages: a probability of zero means an event is impossible, while a probability of one means an event is certain. Conditional probability is the probability of an event occurring given that another event has already occurred; for instance, the probability of drawing a second ace from a deck of cards depends on whether the first card drawn was an ace. Independent events, by contrast, do not influence each other: if we flip two coins, the probability of getting heads on the second coin is not affected by whether we got heads or tails on the first. Bayes’ well-known theorem tells us how to update a belief about the likelihood of an event based on new evidence. This approach is widely used in various fields, including statistics, machine learning and decision-making processes. As a result, Bayes’ theorem is the basis of many applications, for example in finance to model market risks or to provide pricing recommendations for different trading options. In computer science, algorithms often rely on probabilistic methods for tasks such as data compression and error correction. In the realm of science, probability theory is essential for statistical inference, allowing researchers to draw conclusions from data subject to random variation. It is also integral to the study of quantum mechanics, where the behavior of particles is inherently probabilistic. Probability theory is a versatile and powerful concept that helps us navigate uncertainty and make informed decisions. It provides a framework for understanding and quantifying randomness, whether in natural phenomena or in systems defined and built by humans. Whether it is predicting the outcome of a game, assessing risk or analysing complex data, probability theory is a key component of logical reasoning and analytical thought.
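A short worked example makes the updating mechanism concrete. The sketch below (illustrative Python with hypothetical numbers) applies Bayes’ theorem to a diagnostic test, showing how a low prior probability keeps the posterior surprisingly modest even after a positive result:

```python
def bayes_update(prior, sensitivity, false_positive_rate):
    """Posterior P(condition | positive test) via Bayes' theorem."""
    # Total probability of a positive test (true and false positives).
    evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / evidence

# A condition with 1% prevalence, tested with 99% sensitivity and a
# 5% false-positive rate: a positive result still leaves only about
# a 17% probability of actually having the condition.
posterior = bayes_update(prior=0.01, sensitivity=0.99, false_positive_rate=0.05)
print(round(posterior, 3))  # 0.167
```

The counterintuitive result (the famous base-rate effect) is exactly the kind of structured belief revision that a bare intuition about ‘likelihood’ tends to get wrong.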
Does Brain Functionality follow Bayesian Principles?
The question of how the mind works is at the heart of cognitive science, which seeks to understand and explain the complex processes underlying perception, decision-making and learning. According to Are our brains Bayesian? – Bain – 2016 – Significance – Wiley Online Library, some neuroscientists propose a Bayesian brain theory, which represents a mechanistic and mathematical formulation of these cognitive processes. The theory assumes that the brain encodes beliefs as probabilistic states, generates predictions based on sensory input and continuously uses prediction errors to update its beliefs. Recently, AI researchers have made impressive use of Bayesian inference as a means of modelling some human cognitive capabilities. This raises the question of whether human judgements and decisions adhere to similar rules. The human brain is made up of some 90 billion neurons connected by more than 100 trillion synapses. We know how different areas of the brain control different behaviours of our bodies and our emotions, and how these distinct regions interact with each other. The questions that are more difficult to answer relate to the complex decision-making processes we are confronted with: How do we form beliefs, assess evidence, make judgements and decide on a course of action? Based on Bayes’ fundamental theorem, Bayesian inference provides a method of updating beliefs in the light of new evidence, with the strength of those beliefs expressed as probabilities. This is considered an optimal way of assessing evidence and judging probabilities in real-world situations, on the assumption that it offers a rational way to integrate different sources of information into a single conclusion.
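The prior → evidence → posterior loop that the theory attributes to the brain can be sketched in a few lines. The following is a toy illustration (hypothetical coin-flip scenario, not a neural model): a belief over two hypotheses is repeatedly revised as observations arrive.

```python
def update(belief, heads):
    """One round of Bayesian belief updating after observing a coin flip."""
    # Likelihood of the observation under each hypothesis:
    # a fair coin gives heads with probability 0.5, a biased one with 0.8.
    likelihoods = {"fair": 0.5,
                   "biased": 0.8 if heads else 0.2}
    unnormalised = {h: belief[h] * likelihoods[h] for h in belief}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# Start from an uninformative prior over the two hypotheses.
belief = {"fair": 0.5, "biased": 0.5}
for flip in [True, True, True, False, True]:  # True = heads
    belief = update(belief, flip)
print({h: round(p, 3) for h, p in belief.items()})
```

After four heads in five flips, the belief has shifted substantially towards the biased-coin hypothesis; on the Bayesian brain view, perception and judgement would run countless loops of exactly this shape, with sensory data in place of coin flips.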
According to Professor Daniel Wolpert of the University of Cambridge’s neuroscience research centre, to obtain the clearest evidence of Bayesian reasoning in the brain we must look beyond the high-level cognitive processes that govern how we think and examine the unconscious processes that control perception and movement. He believes that, thanks to the functionality of our Bayesian brains and the real-time predictions they continuously make, our bodies move in a controlled and efficient manner through the environment we are confronted with. As we experience life, our brains gather statistics about different movements and combine these, in a Bayesian fashion, with previous data, thereby adapting to constantly changing real-world situations.
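A classic result from this line of sensorimotor research is that when prior experience and a noisy sensory observation can both be described by Gaussian distributions, the statistically optimal estimate is a precision-weighted average of the two. The sketch below uses purely illustrative numbers to show the principle:

```python
def integrate(prior_mean, prior_var, obs_mean, obs_var):
    """Bayes-optimal fusion of a Gaussian prior with a Gaussian
    sensory observation: a precision-weighted average."""
    # Precision (inverse variance) determines each source's weight.
    w_prior = (1 / prior_var) / (1 / prior_var + 1 / obs_var)
    mean = w_prior * prior_mean + (1 - w_prior) * obs_mean
    var = 1 / (1 / prior_var + 1 / obs_var)   # fused estimate is sharper
    return mean, var

# Prior belief about a target's position (from past experience) combined
# with a noisier visual observation: the fused estimate lies between the
# two and is more precise than either source alone.
mean, var = integrate(prior_mean=0.0, prior_var=4.0, obs_mean=10.0, obs_var=1.0)
print(round(mean, 2), round(var, 2))  # 8.0 0.8
```

The estimate leans towards the more reliable source (here the observation), which matches the behavioural finding that people weight sensory cues by their reliability when reaching or pointing.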
Bayesian models of cognition remain a hotly debated issue among AI researchers. Critics complain of too much rationalisation, with researchers tweaking their models until almost any result fits a probabilistic interpretation. They warn that the Bayesian brain theory is used as a one-size-fits-all explanation for human cognition. In a review of past studies on Bayesian cognition, Gary Marcus of New York University concludes that, rather than making a lasting contribution to researchers’ understanding of the mind, the probabilistic approach has so far only documented the obvious facts that people are sensitive to probabilities and that they adjust their beliefs based on evidence. Nevertheless, as neuroscience continues to advance, we can expect more answers to the question of whether the functionality of our brains follows Bayesian principles.