Picture credit: book cover, Penguin Books Ltd.
Introduction
Stuart Russell, Professor of Computer Science at Berkeley, is co-author of “Artificial Intelligence: A Modern Approach”, one of the most widely read textbooks in the field. First published in 1995, it was key to moving AI away from a formal-logic approach towards one focused on machine agents that maximise rewards. In current and imminent applications of AI, machines are intelligent to the extent that their actions can be expected to achieve the machine’s own objectives.
In his most recent book, “Human Compatible”, Russell takes on the challenges AI poses to human purpose, authority and basic wellbeing. The major challenge ahead is to move AI from a data-centric to a human-centred approach, also referred to as the creation of a General-Purpose AI (GAI). Currently, AI produces highly specialised agents, also referred to as Narrow AI (NAI). The breakthrough that would allow AI to escape the constraint of narrow specialism has yet to be made, yet it remains the ultimate goal of AI research. A GAI would learn from all available sources, observe and question humans when needed, and formulate and execute plans that work. Although a GAI does not yet exist, Russell argues that much progress will come from continuing research in NAI, combined with a breakthrough in how AI technology is understood and organised.
The GAI will be distributed and decentralised; it will exist across a network of computers. It may carry out a specific task, such as cleaning the house, by using lower-level specialised machines, but the GAI itself will be a planner, coordinator, creator and executive. The danger, according to Russell, comes from the aim that has guided AI research so far: giving AI objectives that become the machine’s own objectives, divorced from human ones. The solution Russell proposes is that a GAI be engineered to enact the preferences of individual humans:
- The machine’s only objective is to maximise the realisation of human preferences.
- The machine is initially uncertain about what those preferences are.
- The ultimate source of information about human preferences is human behaviour.
According to Russell, these principles should apply to the design, formulation and regulation of future research programmes, and to the societal efforts that will be required to deal with unintended consequences as well as with the actions of some less-than-benevolent humans. It is a human-centred approach in which humanity can reap the benefits of AI research without being subjugated to a superior intelligence, with machines dominating humans. Getting there, however, requires overcoming barriers such as understanding the context embedded in language, the comprehension and application of common sense, and the exploration of human consciousness. Meanwhile, designing and practising collective intelligence as a means to empower individuals and organisations with human-centred AI will help to overcome the current limits of NAI.
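Russell’s three principles amount to a learning loop: the machine acts to maximise an objective it holds only a probability distribution over, and it refines that distribution from observed human behaviour. The toy Python sketch below (hypothetical names and numbers, not Russell’s formal model) illustrates that loop with two candidate preference profiles:

```python
# Toy sketch of Russell's three principles (hypothetical names and
# numbers, not Russell's formal model): the machine maximises an
# objective it is uncertain about, and reduces that uncertainty by
# observing human behaviour.

# Two candidate human preference profiles over three outcomes.
CANDIDATES = {
    "prefers_tea":    {"tea": 1.0, "coffee": 0.2, "nothing": 0.0},
    "prefers_coffee": {"tea": 0.2, "coffee": 1.0, "nothing": 0.0},
}

def update(belief, observed_choice):
    """Principle 3: human choices are evidence about human preferences."""
    posterior = {}
    for name, prefs in CANDIDATES.items():
        likelihood = prefs[observed_choice] / sum(prefs.values())
        posterior[name] = belief[name] * likelihood
    z = sum(posterior.values())
    return {name: p / z for name, p in posterior.items()}

def best_action(belief):
    """Principle 1: maximise the expected realisation of human preferences."""
    actions = ["tea", "coffee", "nothing"]
    return max(actions, key=lambda a: sum(
        belief[name] * CANDIDATES[name][a] for name in CANDIDATES))

# Principle 2: the machine starts uncertain about which preferences hold.
belief = {"prefers_tea": 0.5, "prefers_coffee": 0.5}

# The human repeatedly chooses tea; the machine updates its belief.
for _ in range(3):
    belief = update(belief, "tea")

print(best_action(belief))               # -> tea
print(round(belief["prefers_tea"], 2))   # -> 0.99
```

After three observations of the human choosing tea, the machine’s belief concentrates on the tea-preferring profile and its best action follows; with a flatter belief, deferring to the human would remain the safer policy.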
Implementing Collective Intelligence to advance AI
Early in the history of computing, pioneers such as Alan Turing and John McCarthy subscribed to the vision that the human mind was computational and could be simulated with computer hardware and software. Attempts to renew AI by abandoning the notion of representation and focusing on embodied systems have produced more biologically plausible machines and a more accurate view of human intelligence. Despite these efforts, we are still far from simulating human-level intelligence computationally. However, there exists a separate and largely forgotten stream of work that started at the same time as McCarthy’s and Minsky’s AI projects in the 1950s: Douglas Engelbart’s Human Augmentation Project, aimed at achieving collective intelligence. Instead of attempting to replicate human intelligence, Engelbart sought to augment existing human intelligence with computational functions that complemented it.
A complementary combination of both intelligences (human and artificial) could help each to overcome the other’s shortcomings and limitations. Analogous to the past, when the automation of human tasks completely changed manufacturing and logistics – igniting an evolution in the offering of products and services – the combination of human and artificial intelligence will create a new type of collective intelligence capable of building more efficient organisations.
Leveraging data and people’s expertise in new ways offers a path towards smarter decisions, more innovative policymaking, and greater accountability in governance. The first of these innovations is Artificial Intelligence (AI). AI offers an unprecedented ability to process vast quantities of data quickly, providing data-driven insights to address problems and offer predictions. The second is Collective Intelligence (CI). Tapping into the “wisdom of the crowd” and its underlying common sense, CI motivates groups to create better solutions than the smartest experts working in isolation could ever achieve. These two areas of innovation have, until recently, been researched separately. However, significant benefits can be achieved through integration and mutual reinforcement, based on the following assumptions:
- While CI is built around the idea that groups of citizens or experts can be smarter and more effective than individuals, scaling CI initiatives can be difficult. This is largely because of the transaction costs involved in CI. Unlike the more automated processes of AI, CI typically involves a substantial degree of human effort in curating, inviting and enabling the participation of individuals or institutions. Consequently, CI can be fairly labour-intensive and hard to automate. If implemented effectively, however, automation through AI could save time and effort, leading to what we call “Augmented Collective Intelligence”.
- Much of the concern surrounding the expansion and evolution of AI revolves around its perceived “inhumanity”. AI is a black box: despite calls for greater algorithmic transparency, the fact remains that the creators of AI algorithms do not always understand the actions or results produced by their creations. CI has a potentially valuable role to play here, too. For example, introducing a human element into AI through coordinated CI efforts could help to uncover biases embedded in datasets and demystify the analytics performed on those datasets.
- Emerging research suggests that CI may have a role to play not only in increasing the ethical legitimacy of AI, but also in increasing AI’s effectiveness. Researchers at MIT, for instance, have experimented with using crowdsourced expertise to identify the main “features” of big data sets. Similarly, other researchers have been applying the social learning techniques used by humans to create more intelligent artificial neurons in AI algorithms and deep-learning networks; the goal is to make individual neurons learn from each other in much the same way that humans use social and cultural contexts to make decisions.
To advance integration, a concept of open governance and guidance can be embedded into both AI and CI. Efforts to introduce greater openness into AI are likely to lead to more integration with CI. Likewise, as CI seeks to move beyond limited communities of participation, AI can play an essential role in selecting actors and stakeholders who may widen the collective conversation. In these ways, open governance and its underlying principles play a valuable role in efforts to strengthen both AI and CI, as well as in bringing these two pillars of innovation closer together.
The Path towards Creativity and Wisdom
For a long time, research on group creativity seemed to have stalled. Some studies reported that priming individualism enhanced creative ideation in brainstorming sessions, and it was broadly argued in the literature that collectivism has a negative effect on group creativity due to its emphasis on conformity and cohesion. However, other studies are emerging which show that when groups work together towards a common goal and adopt a pro-social rather than a pro-self attitude, they tend to perform better and are more creative overall.
Businesses tend to concentrate on educating employees in a specific skill in traditional, “teach-and-test” classroom settings. These sessions aim to build expertise one subject at a time, with the goal of cultivating specialists in defined categories. Today, such technical skills have a short life, with new tools and practices quickly coming and going. It is better to cultivate an understanding of multiple things, developing “T-shaped” talent as opposed to “I-shaped”. While an I-shaped person has deep expertise in one area, a T-shaped individual can bridge many different fields even without mastering them in depth. These individuals do not have to be experts; they just need to understand enough to collaborate with each other, as well as with those who are experts. For a decade, the MIT Center for Collective Intelligence (CCI) has studied how groups of people can act more intelligently and learn better by collaborating with each other, as well as with intelligent machines. The result is a network of people’s knowledge and machines, akin to the human neural network that connects the different, specialised parts of the brain. This collective intelligence approach enables workforces to keep up with the rapid pace of technological change and to apply creativity in solving problems.
Wisdom has been defined as the highest form of human intelligence. It represents the ability to think and act using knowledge, experience, understanding, common sense and insight. Wisdom is associated with attributes such as unbiased judgment, compassion and experiential self-knowledge, and with virtues such as ethics and benevolence. Collective Intelligence, augmented by AI, is likely to enhance the development of human wisdom, enriching our individual lives vis-à-vis the ever-growing complexity of scientific discovery and its impact on our society.