[Figure: The IoT Network. Picture credit: betanews.com]
Introduction
Ongoing research in neuroscience indicates that brain functionality is defined by a combination of cognitive (thinking) and perceptive (sensing) capabilities. The human brain is regarded as a ‘prediction machine’ that assembles its own image of reality through inference, partly because of its limited energy budget of 20 to 30 watts and the physical limits of its neural memory. Mapping this capability onto new sensor technology and growing internet bandwidth requires a major shift in the architecture of the IT systems that form the backbone of today’s corporate infrastructures. Deep learning (Thinking Slow*) combined with real-time data provided by sensing such as vision, touch or sound (Thinking Fast*) necessitates new concepts for AI-supported decision-making (*see also Daniel Kahneman’s bestselling book ‘Thinking, Fast and Slow’). On the upside, this change has the potential for substantial economic gains; on the downside, existing privacy and security problems are likely to be magnified.
Human Cognition and Perception
Cognition can be defined as the mental processes that enable us to remember, think, know, judge, solve problems, and so on. It helps an individual understand the environment and gain knowledge, and it includes both conscious and unconscious processes. Perception is the sensory experience of the world: the process by which we interpret our surroundings through sensory stimuli. It involves both recognizing environmental stimuli and reacting to them. Through the perceptual process we gain information on the properties and elements of the environment that are critical to our survival. Perception not only creates our experience of the world around us, it also allows us to act within it, through sight, sound, taste, smell, and touch. When we receive sensory information, we not only identify it but also respond to the environment accordingly. In daily life we rely heavily on this sensory information for even the smallest of tasks. For example, we usually look both ways before crossing the road; it is the sensory information gained through sight and sound that signals when it is safe to cross.
IoT has arrived
While there is no standard definition, IoT describes “the extension of network connectivity and computing capability to objects, devices, sensors and items not ordinarily considered to be computers” (Internet Society 2015). IoT devices can sense, generate, store, and send data, and sometimes respond to commands via actuators that modify the physical world (robots). A diverse range of IoT devices is found in homes, retail businesses, public spaces, hospitals, vehicles, and utility infrastructure, or is worn by consumers for healthcare and fitness applications. Virtually every consumer electronics device is now sold with an IoT connection. According to a study conducted by Cisco, IoT comprised about 30 billion devices in 2020, with an estimate that by 2022 half of total global internet traffic would be generated by IoT. By 2030, according to a new study by McKinsey, IoT could contribute between USD 5.5 trillion and USD 12.6 trillion in value globally. The study also makes the point that capturing this value depends largely on establishing interoperability and easing cybersecurity concerns.

Termed ‘Edge-AI’, a new generation of IoT devices can execute intelligent tasks locally at the chip level. Edge-AI thus complements cloud-based Deep Learning, with its huge repository of externally stored knowledge and data, by accelerating decision-making in time-critical situations. For example, in vehicle automation, sensors provide a stream of real-time data for time-critical decisions such as applying the brakes, adjusting the speed, or alerting the driver in case of fatigue. The same sensor data is streamed to external storage (the cloud) for longer-term pattern analysis that can alert the owner to urgently needed repairs or warn of unexpected road construction ahead. Given the analogy to the human prediction mechanism and its potential for inference errors, qualified human judgement remains mandatory in the decision-making process.
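To make the edge/cloud split concrete, the minimal Python sketch below separates a fast local decision path from an asynchronous upload path. The sensor values, the toy decision rule and the upload_batch placeholder are illustrative assumptions, not part of any specific vehicle platform or vendor API.

```python
# Illustrative Edge-AI split: time-critical decisions are made locally from
# live sensor readings, while the same readings are buffered and periodically
# forwarded for longer-term, cloud-side pattern analysis.

import queue
import random
import time
from dataclasses import dataclass


@dataclass
class SensorReading:
    timestamp: float
    speed_kmh: float
    obstacle_distance_m: float


def edge_decision(reading: SensorReading) -> str:
    """Fast path: an immediate action computed locally within a tight latency
    budget. Stands in for a compact on-device model (e.g. a quantized network)."""
    if reading.obstacle_distance_m < 10 and reading.speed_kmh > 30:
        return "BRAKE"
    if reading.obstacle_distance_m < 30:
        return "SLOW_DOWN"
    return "CRUISE"


def upload_batch(batch: list) -> None:
    """Slow path placeholder: stream buffered readings to cloud storage for
    longer-term analysis (here it only prints a summary)."""
    print(f"uploading {len(batch)} readings for long-term analysis")


def main() -> None:
    cloud_buffer = queue.Queue()

    for _ in range(20):  # simulated sensor loop
        reading = SensorReading(
            timestamp=time.time(),
            speed_kmh=random.uniform(20, 120),
            obstacle_distance_m=random.uniform(5, 200),
        )
        # Act locally: no network round trip for the time-critical decision.
        action = edge_decision(reading)
        print(f"{action:9s} speed={reading.speed_kmh:5.1f} km/h "
              f"obstacle={reading.obstacle_distance_m:6.1f} m")

        # Enqueue the same reading for asynchronous upload to the cloud.
        cloud_buffer.put(reading)
        if cloud_buffer.qsize() >= 10:
            upload_batch([cloud_buffer.get() for _ in range(10)])


if __name__ == "__main__":
    main()
```

In a real deployment the local rule would be a compact on-device model and the upload would feed a cloud analytics pipeline rather than a print statement; the structural point is simply that the latency-critical loop never waits on the network.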
Neural Theories: From Perception to Prediction
How our brain, a three-pound mass of tissue, creates perceptions from our senses is a long-standing mystery. Abundant evidence and decades of sustained research suggest that the brain cannot simply assemble sensory information, as though it were putting together a jigsaw puzzle, to perceive its surroundings. This view gathered force in the 1860s, when the German physicist and physician Hermann von Helmholtz argued that the brain infers the external causes of its incoming sensory inputs rather than constructing its perceptions “bottom up” from those inputs. Helmholtz extended this concept of “unconscious inference” to explain bi-stable or multi-stable perception, in which an image can be perceived in more than one way. This occurs, for example, with the well-known ambiguous image that can be seen either as a duck or as a rabbit: our perception keeps flipping between the two animals. In such cases, Helmholtz asserted, the perception must be the outcome of an unconscious process of top-down inference about the sensory data, since the image that forms on the retina does not change.
Consequently, many neuroscientists are adopting the view that the brain is a ‘prediction machine.’ Through predictive processing, the brain uses its prior knowledge of the world to infer, or to generate hypotheses about, the causes of incoming sensory information. Those hypotheses, and not the sensory inputs themselves, give rise to perceptions. The more ambiguous the input, the greater the reliance on prior knowledge. Computational neuroscientists have built artificial neural networks, inspired by the behavior of biological neurons, that learn to make predictions about incoming information, mimicking the behavior of real brains. While the exact details of how the brain accomplishes this task remain hazy, it appears that a biological neural network which minimizes its use of energy will end up implementing some form of predictive processing to support perception.
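As a rough illustration of the precision-weighting idea behind predictive processing (the more ambiguous the input, the greater the reliance on prior knowledge), the Python sketch below updates an internal estimate by a fraction of the prediction error, where that fraction depends on how reliable the observation is. This is a deliberately simplified, Kalman-style toy model; the function and parameter names are illustrative assumptions, not a claim about how neurons actually implement prediction.

```python
# Toy predictive-processing loop: predict, compare with the noisy input,
# and correct the internal estimate by a precision-weighted prediction error.

import random


def update_estimate(prior, prior_var, observation, obs_var):
    """Trust the observation only as far as its reliability (1 / obs_var)
    warrants relative to the prior belief (1 / prior_var)."""
    gain = prior_var / (prior_var + obs_var)   # 0 = ignore input, 1 = trust input fully
    prediction_error = observation - prior
    posterior = prior + gain * prediction_error
    posterior_var = (1 - gain) * prior_var
    return posterior, posterior_var


def simulate(true_value=10.0, obs_noise=4.0, steps=10):
    """Track a hidden quantity from noisy observations, starting from a vague prior."""
    estimate, estimate_var = 0.0, 100.0        # vague initial belief
    for step in range(steps):
        observation = random.gauss(true_value, obs_noise)
        estimate, estimate_var = update_estimate(
            estimate, estimate_var, observation, obs_noise ** 2)
        print(f"step {step}: observation={observation:6.2f}  estimate={estimate:6.2f}")


if __name__ == "__main__":
    simulate()
```

With a large obs_noise the gain shrinks and the estimate leans on prior belief, whereas a small obs_noise lets the incoming data dominate, echoing the balance between top-down expectation and bottom-up evidence described above.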
The Downside: An Increase In Surveillance Capitalism?
As the deployment of IoT massively increases the number of nodes accessible via the internet, the volume of behavioral data available for Deep-Learning analysis will ‘explode’, with all the positive and negative implications already in place. In her 2019 bestseller ‘The Age of Surveillance Capitalism’, Shoshana Zuboff states that surveillance capitalism “unilaterally claims human experience as free raw material for translation into behavioural data [which] are declared as proprietary behavioural surplus, fed into advanced manufacturing processes known as ‘machine intelligence’, and fabricated into prediction products that anticipate what you will do now, soon, and later. The danger of surveillance capitalism is that platforms and tech companies claim ownership of private information because it is free for them to access, claiming private experience as ‘raw material’ for data factories”. As invasive as these platforms have been in accumulating information, they have also given rise to what is now called a ‘sharing economy’. Organizations can obtain digital information by carrying out their own surveillance capitalism, supported by their own platforms. Experience shows that organizations can benefit greatly from this transformational model because it empowers them to set up new business concepts. To profit from the potential of IoT and a ‘sharing economy’, new IT architectures must be implemented that combine the processing of cloud-based and sensor-based data, thereby improving the speed and effectiveness of decision-making. Surveillance capitalism can be an exceptionally useful tool for businesses, but it also represents an invasion of privacy for users who do not want their private experience to be owned by a company.

Current government efforts to ensure the integrity and privacy of internet communications focus largely on Big Tech’s market control and on their responsibility for keeping content free of bias and fake information. A main driver in this direction is the EU with the Digital Markets Act (DMA) and the Digital Services Act (DSA), which pursue two main goals: a) to create a safer digital space in which the fundamental rights of all users of digital services are protected, and b) to establish a level playing field to foster innovation, growth, and competitiveness, both within the EU and globally.
Conclusion
While these initiatives are also guided by serious ethical concerns, they do not address the issue of cybercrime. To make headway in that respect, the entanglement with government-controlled espionage and secret-service operations would have to be resolved, and a global consensus on punishment would be required. IoT-based IT infrastructures can be a ‘nightmare’ in at least two ways: securing a highly fragmented processor architecture against data theft, and protecting the predictive data generated by real-time transactions coupled with new AI algorithms. To illustrate the latter, consider an organization or community whose predictions are detected by its competitors well before a decision is made. We are moving from a reaction economy to a prediction economy, with far-reaching implications that are not yet fully understood. Nevertheless, the quality of human judgement and the quality of the data processed remain key success factors for business operations, while the potential for misuse, in terms of privacy and integrity, continues to grow with no end in sight.