Picture Credit: Wikimedia Commons
Adapting to increasingly digital market environments and taking advantage of digital technologies to improve operations are important goals for nearly every contemporary business. This digital transformation process is characterized by:
- flattening of corporate hierarchies towards horizontal networked teams
- collaboration-oriented IT infrastructures tearing down knowledge silos
- machine-supported knowledge generation based on ‘real-time’ data of corporate activities
- experimentation with new business models to fight off the threat of business disruption
In an ongoing effort to move from ‘narrow’, application-specific AI to a new level of Artificial General Intelligence (AGI), machine learning experts are teaming up with neuroscience researchers, setting the stage for a new scenario of corporate disruption. While current machine learning efforts focus mainly on extracting knowledge from ‘big data’, applying sophisticated algorithms to specific applications such as language translation, image recognition or customer profiling, the next stage of AGI will provide systems with learning and decision-making capabilities equivalent to those of a human.
While corporations and society at large presently tackle the potentially negative consequences of digital transformation, such as the threat of rising unemployment or the loss of privacy, many issues of government control over the transformation process remain unresolved. One such issue concerns the application of antitrust laws to prevent market control by a dominant company. According to an article published by Harvard Business Review on July 6, 2017, the coming battle in antitrust will not be about controlling markets in the traditional sense; it will be about control over consumers’ information. Tech giants like Google or Amazon are competing to see which of them can build a better digital replica of their consumers. Tomorrow’s monopolies will be based on how much tech giants know about us and how much better they can predict our behavior than their competitors. Antitrust enforcers are a long way from being equipped to guard against the potential anticompetitive effects of this data-driven competition. Yet this is where the digital world is taking us.
AGI Transformation: with human-like learning towards artificial wisdom
Wikipedia defines wisdom as the ability to think and act using knowledge, experience, understanding, common sense and insight. Wisdom has been regarded as a habit or disposition to perform an action with the highest degree of adequacy under any given circumstance. This involves an understanding of people, objects, events, situations and the willingness as well as the ability to apply perception, judgement and action in keeping with the understanding of what is the optimal course of action. In short, wisdom is a disposition to find the truth coupled with an optimum judgement as to what actions should be taken.
Picture Credit: designedforlearning.com
Currently, most AI systems are based on layers of mathematics that are only loosely inspired by the way the human brain works. Different types of machine learning, such as speech recognition or identifying objects in an image, require different mathematical structures, and the resulting algorithms are only able to perform very specific tasks.
Building AI that can perform general tasks, rather than niche ones, is a long-held desire in the world of machine learning. But the truth is that expanding those specialized algorithms to something more versatile remains an incredibly difficult problem, in part because human traits like inquisitiveness, imagination, and memory are absent or only in their infancy in the world of AI.
In a paper published in the journal Neuron, Demis Hassabis, CEO of Google’s DeepMind subsidiary, and three coauthors argue that only by better understanding human intelligence can we hope to push the boundaries of what artificial intellects can achieve. But they also point out that more recent advances have not leaned on biology as effectively, and that a general intelligence will need more human-like characteristics, such as an intuitive understanding of the real world and more efficient ways of learning. The relevance of neuroscience, both as a roadmap for the AI research agenda and as a source of computational tools, is particularly important in the following key areas:
Relational Reasoning: The knowledge of core concepts relating to the physical world, such as space, number, and object definition is vital to our intuitive understanding. AI research has begun to explore methods for addressing this challenge. For example, novel neural network architectures have been developed that interpret and reason about scenes in a humanlike way, by decomposing them into individual objects and their relations. The ability to reason about the relations between entities and their properties is central to generally intelligent behavior.
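The idea of decomposing a scene into objects and reasoning over their relations can be illustrated with a toy sketch, loosely inspired by relation-network-style architectures: a scene is a list of objects with properties, and a relational query is answered by aggregating over object pairs. The scene, the `left_of` relation and all names here are invented for illustration.

```python
# Toy relational reasoning: decompose a scene into objects, then answer
# a query by scanning pairwise relations between those objects.
from itertools import permutations

# A toy "scene": each object has a name, a colour, and an (x, y) position.
scene = [
    {"name": "ball",   "colour": "red",  "pos": (1, 1)},
    {"name": "cube",   "colour": "blue", "pos": (4, 1)},
    {"name": "sphere", "colour": "red",  "pos": (2, 3)},
]

def left_of(a, b):
    """Pairwise relation: is object a strictly left of object b?"""
    return a["pos"][0] < b["pos"][0]

def objects_left_of(scene, target_name):
    """Answer a relational query by checking all ordered object pairs."""
    return sorted(
        a["name"]
        for a, b in permutations(scene, 2)
        if b["name"] == target_name and left_of(a, b)
    )

print(objects_left_of(scene, "cube"))  # both other objects lie left of the cube
```

In a real relation network the pairwise function would be a learned neural module rather than a hand-written predicate, but the structural bias is the same: compute over all object pairs, then aggregate.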
Efficient Learning: Human cognition is distinguished by its ability to learn rapidly about new concepts from only a handful of examples, leveraging prior knowledge. Recent AI research has developed networks that ‘‘learn to learn,’’ acquiring knowledge on new tasks by leveraging prior experience with related problems. Once again, this builds on concepts from neuroscience: learning to learn was first explored in studies of animal learning and has subsequently been studied in developmental psychology.
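A minimal sketch of learning a new concept from only a couple of examples, in the spirit of prototype-based few-shot methods: prior knowledge is assumed to live in a fixed feature embedding, and a new class is acquired by averaging its few examples into a prototype. All data, names and the trivial identity embedding are invented for illustration.

```python
# Toy few-shot classification: average a handful of embedded examples
# per class into a prototype, then classify by nearest prototype.

def embed(x):
    # Stand-in for a previously learned feature embedding (identity here).
    return x

def mean(vectors):
    return [sum(c) / len(vectors) for c in zip(*vectors)]

def fit_prototypes(support):
    """support maps each class label to a *small* list of feature vectors."""
    return {label: mean([embed(x) for x in xs]) for label, xs in support.items()}

def classify(prototypes, x):
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(prototypes, key=lambda label: sq_dist(prototypes[label], embed(x)))

# Two examples per class suffice to define the prototypes.
support = {"cat": [[0.9, 0.1], [1.1, 0.0]], "dog": [[0.0, 1.0], [0.2, 0.8]]}
prototypes = fit_prototypes(support)
print(classify(prototypes, [1.0, 0.2]))  # falls nearest to the "cat" prototype
```

The heavy lifting in practice happens inside the embedding, which is itself trained across many related tasks; that outer training loop is what the "learning to learn" literature is about.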
Transfer Learning: Humans also excel at generalizing, transferring knowledge gained in one context to a novel, previously unseen domain. For example, a human who can drive a car, use a laptop computer, or chair a committee meeting is usually able to act effectively when confronted with an unfamiliar vehicle, a new operating system, or a difficult social situation. How humans or other animals achieve this sort of high-level transfer learning is, however, unknown and remains a relatively unexplored topic in neuroscience. New advances on this front could provide critical insights to spur AI research toward the goal of lifelong learning.
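The mechanical core of transfer learning can be shown with a deliberately tiny example: a parameter learned on a source task is reused as the starting point for a related target task, so a small fine-tuning budget gets much closer to the target than the same budget spent from scratch. The one-parameter linear model and both tasks are invented for illustration.

```python
# Toy transfer learning: learn y = w * x on a source task, then reuse the
# learned w as initialization when fine-tuning on a related target task.

def train(w, data, steps, lr=0.05):
    """Plain gradient descent on mean squared error for y = w * x."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def error(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

source = [(x, 2.0 * x) for x in range(1, 6)]   # source task: y = 2.0 x
target = [(x, 2.2 * x) for x in range(1, 6)]   # related target: y = 2.2 x

w_source = train(0.0, source, steps=100)       # learn the source task fully

w_transfer = train(w_source, target, steps=3)  # fine-tune on the target
w_scratch = train(0.0, target, steps=3)        # same small budget, no transfer

print(error(w_transfer, target) < error(w_scratch, target))  # True
```

Real systems transfer millions of parameters (for instance, reusing pretrained image features for a new vision task), but the principle is the same: the source task moves the starting point close to the target solution.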
Imagination and Planning: AI research on planning has yet to capture some of the key characteristics that give human planning abilities their power. In particular, a general solution to this problem will require understanding how rich internal models can be learned through experience. Ultimately these flexible, combinatorial aspects of planning will form a critical foundation of what is perhaps the hardest challenge for AI research: to build a system that can plan hierarchically, is truly creative, and can generate solutions to challenges that currently elude even the human mind.
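The role of an internal model in planning can be sketched with a toy "imagination" loop: the agent rolls a model of its world forward over candidate action sequences, without acting, and then executes the sequence whose imagined outcome is closest to the goal. The gridworld, the hand-coded model and the exhaustive search are all illustrative simplifications; in the research the paper describes, the model itself would be learned from experience.

```python
# Toy model-based planning: simulate ("imagine") action sequences with an
# internal model, then pick the sequence whose outcome best reaches the goal.
from itertools import product

MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def imagine(state, actions):
    """Roll the internal model forward without touching the real world."""
    x, y = state
    for a in actions:
        dx, dy = MOVES[a]
        x, y = x + dx, y + dy
    return (x, y)

def plan(start, goal, horizon=3):
    """Exhaustively search imagined rollouts for the best action sequence."""
    best, best_dist = None, float("inf")
    for seq in product(MOVES, repeat=horizon):
        end = imagine(start, seq)
        dist = abs(end[0] - goal[0]) + abs(end[1] - goal[1])
        if dist < best_dist:
            best, best_dist = list(seq), dist
    return best

print(plan((0, 0), (2, 1)))  # some ordering of two "right" moves and one "up"
```

Exhaustive search over a hand-written model is the crudest possible stand-in; the open problems the authors highlight are learning the model itself and searching it hierarchically rather than move by move.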
The quest to develop AGI will ultimately lead to a better understanding of our own minds and thought processes. Distilling intelligence into an algorithmic construct and comparing it to the human brain might yield insights into some of the deepest and the most enduring mysteries of the mind, such as the nature of creativity, dreams, and perhaps one day, even consciousness.
To avoid misuse of artificially generated, human-like wisdom, action is required at both the regulatory and the educational level.
Speaking at the National Governors Association meeting in Rhode Island a few weeks ago, Tesla CEO and AI visionary Elon Musk called AGI “the biggest risk that we face as a civilization”. Whether that is an accurate assessment is very much up for debate. Even so, Musk would like lawmakers to do something about what he sees as a huge existential threat before it becomes too big a problem. “AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it will be too late.” It is inarguably worth thinking about the impact that artificial intelligence might have on the world, but to many supporters of AGI it still seems too early to regulate such systems. While machine-learning software is beginning to rival human intellect at some specific tasks, such as speech recognition, translation, and identifying objects in images, it is the prospect of advancing those components into more general intelligence without government oversight that troubles Musk.
Some of the governors were shocked by Musk’s urgency. Arizona governor Doug Ducey, for instance, said that he was “surprised” to hear a call for regulations on AI “before we know what we are dealing with.” Indeed, lawmakers traditionally need some sense of what needs to be regulated before they will consider legislation.
As universities, research institutions and government organizations spend enormous amounts of money and resources to advance AI, one can make the case that these institutions also share the responsibility to advance human knowledge to deal with the consequences of AGI. Traditionally, schools share the responsibility to build character and wisdom along with parents and the community. Nicholas Maxwell, a contemporary philosopher in the United Kingdom, advocates that academia ought to alter its focus from the acquisition of knowledge to seeking and promoting wisdom, which he defines as the capacity to realize what is of value in life, for oneself and others.
Within the corporate world, human resource management and personal development that can handle social, economic and technical complexities with the application of artificial wisdom will become a key market differentiator. Preparing humans for these tasks requires forward-looking research as well as education at the university level. We have to define the role of institutions and humans in dealing with the consequences of AGI, and we need to understand how to go about it. At this point in time it appears that we are not ready to positively embrace AGI. How does Elon Musk judge the situation? “Right now the government doesn’t even have insight,” he said, according to a recent article in the Wall Street Journal. “Once there is awareness people will be extremely afraid, as they should be.”