As Machines Get Smarter, Evidence Grows They Learn Like Us

Posted by Peter Rudin on 15 February 2019 in Essay

Machine Learning Algorithm. Picture Credit: Wikipedia

Introduction

Back in 2017 Demis Hassabis of Alphabet’s DeepMind suggested intensifying the cooperation with neuroscience to advance AI towards Artificial General Intelligence (AGI). Now, two years later, Artificial Neural Networks (ANNs) have made significant progress with respect to both diversity and quality. New brain-inspired network architectures and algorithms have evolved, and to overcome the ‘black-box syndrome’, tools are being developed to analyse whether a given network design is suited to solving a specific problem. Ultimately, the goal of developing AGI will require a deeper investigation into our own remarkable abilities and new insights into the cognitive mechanisms we ourselves use to reliably and robustly understand the world. The ANN approach to understanding the brain contrasts sharply with that of the European Human Brain Project (HBP) and its much-hyped plan to create a precise simulation of a human brain on a supercomputer. Swiss neuroscientist Henry Markram wants to include as much detail as possible on every neuron and synapse and hopes that full functionality and consciousness will emerge. Geoffrey Hinton, the highly regarded ‘father’ of ANNs, thinks that this mega-simulation will fail, mired in too many moving parts that no one yet understands. His approach is to start with a highly simplified brain model and gradually make it more complex. More generally, Hinton does not think the workings of the brain can be deduced solely from the details of brain-imaging studies; instead, this data should be used to build and refine algorithms.

From Intelligence to Intuition

Intuition has more to do with gut feeling than with calculated decision-making. Being intuitive is not the same as being intellectual; these are two different cognitive processes. Intelligence is based on what is known while intuition deals with the unknown. Intelligence is based on rational thinking whereas intuition is made possible by our “sixth sense.” In practice, most if not all people use a combination of both intelligence and intuition when making decisions. Intuition comes from the primitive brain; it is an artefact of the early days of man, when the brain’s ability to detect hidden dangers ensured our survival. Gerd Gigerenzer, director at the Max Planck Institute for Human Development, claims that intuition is instinctively recognizing and understanding the information needed to solve a problem, rather than knowing the solution from previous experience. He adds that intuition works better if you stay hungry for learning, researching new ideas and solutions; in his view, this is how you train your mind to be intuitive.

According to Wikipedia, intuition is the ability to acquire knowledge without proof, evidence or conscious reasoning, or without understanding how the knowledge was acquired. Different writers give the word “intuition” a great variety of meanings, ranging from direct access to unconscious knowledge, unconscious cognition, inner sensing and inner insight, to unconscious pattern-recognition and the ability to understand something instinctively, without the need for conscious reasoning. In his theory of the ego, first documented in 1916, psychologist Carl Jung defined intuition as “perception via the unconscious”. Jung said that a person in whom intuition is dominant does not act on the basis of rational judgment but on sheer intensity of perception. He thought that extraverted intuitive types were likely to be entrepreneurs, speculators or cultural revolutionaries, often undone by a desire to escape every situation before it becomes settled and constraining. In contrast, introverted intuitive types were likely to be mystics, prophets or cranks, struggling with the tension between protecting their visions from outside influence and making their ideas comprehensible and reasonably persuasive to others.

Artificial Intuition

Creating artificial intuition presupposes the possibility of re-creating the human mind, capable of dealing with semantics, common-sense reasoning and learning. One reason we want machines to think intuitively concerns our safety, for example applying artificial intuition to improve the safety of autonomous vehicles. Despite onboard sensors and deep-learning software trained on large data sets, self-driving cars are still prone to accidents in unforeseen situations. With artificial intuition, they could anticipate and react to the unpredictable things that happen while driving. For example, during rainy weather a self-driving car is programmed to slow down and turn on its wipers when its sensors detect rain. With artificial intuition it could be trained to anticipate the dangers other drivers might create, for example by pulling to the side of the road if the rain gets worse.
Today’s ANNs are built on a feed-forward architecture, where information flows from input to output through a series of layers. Each layer is trained to recognize certain features, such as an eye or a whisker. That analysis is then fed forward, with each successive layer performing increasingly complex computations on the data. In this way, the program eventually recognizes a series of coloured pixels as a cat. But this feed-forward structure leaves out a vital component of the biological system: feedback. In the real brain, neurons in one layer are connected to their neighbours as well as to neurons in the layers above and below them, creating an intricate network of loops that provide feedback. Neuroscientists do not yet precisely understand what these feedback loops are doing, though they know they are important for our ability to direct our attention, helping the brain to compare its predictions with reality. What is still lacking in AI machines are intuitive capabilities such as imagination, introspection and self-reflection, something feedback circuitry might accomplish.
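To make the contrast concrete, here is a minimal sketch in Python (the layer sizes, weights and feedback wiring are illustrative assumptions, not any particular published architecture). The first function is a plain feed-forward pass; the second routes the output back into the hidden layer over several timesteps, a crude analogue of the brain comparing its predictions with new input:

    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(0.0, x)

    # Feed-forward: information flows strictly from input to output.
    W1 = rng.normal(size=(16, 8)) * 0.5   # input -> hidden weights
    W2 = rng.normal(size=(8, 4)) * 0.5    # hidden -> output weights

    def feed_forward(x):
        h = relu(x @ W1)                  # each layer extracts features ...
        return relu(h @ W2)               # ... and passes them forward once

    # Feedback: the previous output is routed back into the hidden layer,
    # so later passes can adjust the computation in light of earlier ones.
    W_fb = rng.normal(size=(4, 8)) * 0.5  # output -> hidden feedback weights

    def with_feedback(x, steps=3):
        y = np.zeros(4)
        for _ in range(steps):            # iterate instead of a single sweep
            h = relu(x @ W1 + y @ W_fb)   # hidden state sees input and feedback
            y = relu(h @ W2)
        return y

    x = rng.normal(size=16)
    print(feed_forward(x))
    print(with_feedback(x))

Because the feedback variant iterates, each pass can revise the previous output rather than committing to a single forward sweep, which is roughly the kind of loop circuitry the paragraph above refers to.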

Gaming as a potential testbed for artificial intuition

From the early days of computing, games have been important testbeds for studying how well machines can achieve sophisticated decision making. In recent years, machine learning has made dramatic advances, reaching superhuman performance and challenging humans in domains like Go, Atari and some variants of poker. As with their predecessors chess, checkers and backgammon, these game domains have driven research by providing sophisticated yet well-defined challenges for artificial intelligence practitioners. Computers do not have emotions like humans, yet winning a complex game like Go sometimes means defying logic and basing decisions on the possibility of an unexpected outcome. Understanding what the opponent is trying to do, based on things beyond the rules of the game, requires intuitive thinking. In late 2017, Alphabet’s DeepMind released AlphaZero, the successor to AlphaGo Zero. After 34 hours of learning, using only four TPUs (Tensor Processing Units) on a single machine, it defeated its predecessor AlphaGo Zero at the game of Go. While games like Go require strategy, it is intuition that makes Go unique. Somehow AlphaZero has developed its own machine intuition that allowed it to play the game much better than its predecessors. According to David Silver, who led the AlphaZero research: “By not using human data — by not using human expertise in any fashion — we’ve actually removed the constraints of human knowledge. It was therefore able to create knowledge and intuition from a blank slate.” In London last month, DeepMind quietly set a new marker in the contest between humans and computers: it beat one of the best StarCraft players by adapting algorithms developed to process text to the task of figuring out which battlefield actions lead to victory. Video games like StarCraft are mathematically more complex than chess or Go, and they are highly intuitive because they are played in real time, with little time for conscious deliberation.
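As a toy illustration of learning from a blank slate (this is not DeepMind’s method; AlphaZero combines deep neural networks with Monte Carlo tree search, whereas this sketch uses a simple tabular value function on tic-tac-toe), a program can start with no knowledge of a game and improve purely through self-play:

    import random
    from collections import defaultdict

    # Tabular self-play on tic-tac-toe: the value table starts empty (a
    # "blank slate") and is refined only from games the program plays
    # against itself. Values are from player X's point of view.
    V = defaultdict(float)
    ALPHA = 0.1                                  # learning rate
    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def play_one_game(eps=0.2):
        board, player, visited = ["."] * 9, "X", []
        while True:
            w = winner(board)
            moves = [i for i, s in enumerate(board) if s == "."]
            if w or not moves:                   # game over: score it
                result = 1.0 if w == "X" else -1.0 if w == "O" else 0.0
                for s in visited:                # pull visited states toward outcome
                    V[s] += ALPHA * (result - V[s])
                return
            def value_of(m):                     # value of the position after move m
                nxt = board[:]
                nxt[m] = player
                v = V["".join(nxt)]
                return v if player == "X" else -v
            # epsilon-greedy: mostly pick the best-looking move, sometimes explore
            m = random.choice(moves) if random.random() < eps else max(moves, key=value_of)
            board[m] = player
            visited.append("".join(board))
            player = "O" if player == "X" else "X"

    for _ in range(20000):
        play_one_game()
    print("positions evaluated:", len(V))

Even this crude Monte Carlo value update can reach competent tic-tac-toe play after a few thousand self-play games; AlphaZero’s achievement was making the same self-improvement loop work at the scale of Go, chess and shogi.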

Computational Models of Intuitive Physics

Humans have a powerful “physical intelligence” – an ability to infer physical properties of objects and predict future states in complex, dynamic scenes – which they use to interpret their surroundings and to plan safe and effective actions. For instance, you can choose where to place your coffee to prevent it from spilling, arrange books in a stable stack, judge the relative weights of objects after watching them collide, and construct systems of levers and pulleys to manipulate heavy objects. These behaviours suggest that the mind relies on a sophisticated physical reasoning system, and for decades cognitive scientists have been interested in the content of this knowledge, how it is used and how it is acquired. In the last few years there has been significant progress in answering these questions in formal computational terms, with the maturation of several different traditions of cognitive modelling that have independently come to take intuitive physics as a central object of study. As a principal investigator at MIT’s Center for Brains, Minds and Machines (CBMM), Josh Tenenbaum is widely respected for his interdisciplinary research in cognitive science and AI. Human common sense involves the understanding of physical objects, which Tenenbaum believes can be explained through intuitive theories. This “abstract system of knowledge” is based on physics (for example forces or masses) and psychology (for example desires or beliefs). Such intuitions are present already in young infants, bridging perception, language and action-planning capabilities. Tenenbaum suggests tackling the problem with a new class of programming language that combines symbolic representation, probabilistic inference, hierarchical inference (learning to learn) and neural networks. Research in intuitive physics and psychology is especially promising in the field of robotics. A robot that knows intuitive physics can navigate its environment and perform nuanced actions such as carrying a cup of coffee or grasping a party balloon. Tenenbaum is aiming for an AI that understands the physical and psychological landscapes it will exist in, providing systems that allow us to deepen our own understanding of intelligence and intuition.
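A common computational account of such judgments is probabilistic simulation: perceive a scene with some noise, imagine how it unfolds under approximate physics, and read a probability off the imagined outcomes. The sketch below is a minimal illustration in that spirit (the block geometry, noise level and stability rule are assumptions made for this example, not a model from Tenenbaum’s group), estimating how likely a stack of blocks is to topple:

    import random

    BLOCK_WIDTH = 1.0

    def stack_falls(offsets):
        """offsets[i] = horizontal centre of block i (block 0 rests on the table).
        Simplified rule: the stack falls if, at any level, the centre of mass
        of the blocks above overhangs the edge of the supporting block."""
        n = len(offsets)
        for i in range(n - 1):
            above = offsets[i + 1:]
            com = sum(above) / len(above)
            if abs(com - offsets[i]) > BLOCK_WIDTH / 2:
                return True
        return False

    def p_fall(observed_offsets, noise=0.15, samples=5000):
        """Monte Carlo estimate: jitter each observed position with Gaussian
        perceptual noise and count how often the imagined stack topples."""
        falls = 0
        for _ in range(samples):
            noisy = [x + random.gauss(0.0, noise) for x in observed_offsets]
            falls += stack_falls(noisy)
        return falls / samples

    print(p_fall([0.0, 0.1, 0.2]))   # nearly aligned stack: low fall probability
    print(p_fall([0.0, 0.4, 0.8]))   # strongly staggered stack: likely to topple

A nearly aligned stack survives most noisy rollouts, while a strongly staggered one topples in almost all of them, mirroring the graded confidence people report when judging the same scenes.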

Conclusion

To fully mimic the human brain, intelligent machines still have a long way to go. Consciousness, intuition and common-sense reasoning represent major barriers to overcome. To reach AGI, data-mining and pattern-seeking deep neural networks may not be enough; Bayesian networks, causal models and predictive coding might enhance deep learning. As research continues to deliver new computational methods, intelligent machines are likely to match humans in learning, possibly within the next ten years. The human brain and mind are very complex, and so are the machines that emulate them. Hence humans will have to live up to the task of guiding these machines in a positive direction to support our social, economic and emotional well-being in the decades to come.
