A depiction of the world’s oldest university, the University of Bologna, founded in 1088. (Credit: Wikipedia)
Look at the individuals: not everybody is attentive; some are sleeping, chatting, or observing their neighbors.
The idea of conversational learning is not new to educational theory. Learning through conversation is a simple form of pedagogy that has received a great deal of attention, particularly with regard to children of school age. Conversational learning is necessarily curtailed when children start school; nevertheless, a number of indicators suggest that it loses little, if any, of its potency as children grow older. Classroom teaching, with the large amount of knowledge educators have to transfer under time and economic constraints, leaves little room for conversation. Yet it is widely recognized that formal lecturing and reading achieve a combined learning-retention level of less than 20%, as depicted by the following graph:
The cost of formal education has risen steadily, while access to top universities has become more restricted and selective. The introduction of online courses (MOOCs) has eased this situation and, thanks to competing offerings, has generally raised the quality of lecturing. However, the basic problem of low retention in formal education remains unsolved. Applying advances in AI and neuroscience opens new opportunities for conversational learning, improving the effectiveness of learning and supporting us in our life-long learning process. While today’s learning mostly relies on ‘passive’ teaching and reading, the added engagement of AI and neuroscience will lead to participatory, conversational teaching with a much higher retention rate.
This can be accomplished by applying the following technologies:
- Conventional chatbots (text communication)
- Voice activated chatbots
- Next generation chatbots enhanced with emotional sensing (voice and face recognition)
All these methods rely on machine-learning algorithms that mimic the functioning of the human brain with neural-network software. To perform these tasks, so-called ‘Narrow AI’ systems have to be trained for a specific chatbot service, a process that involves huge data sets and significant human and machine resources.
As neuroscience and AI begin to complement each other, new learning approaches such as ‘reasoning’ and ‘relationship learning’, along with new brain-inspired neural learning models, are being developed to let machine-based personal assistants act as a coach without the restraints of ‘Narrow AI’. Neuroscience-inspired AI is just at the beginning; within 2-3 years, however, the results of this cooperation will add new momentum to the application of conversational learning.
A chatbot, short for chat robot, is an AI-based computer program that simulates human conversation or chat. Typically, a chatbot communicates with a real person, but applications are being developed in which two chatbots can communicate with each other. Chatbots are used in applications such as customer service, help desks, call centers, and internet gaming. Chatbots used for these purposes are typically limited to conversations about a specialized topic rather than the entire range of human communication. Most chatbots are trained with a large structured set of texts, or they use predefined responses to certain inputs in order to reply to the human user in written text.
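As a minimal sketch of the predefined-response approach described above, the following Python snippet maps keywords to canned replies; the keywords and responses are invented for illustration and stand in for the much larger rule sets real services use:

```python
# Minimal rule-based chatbot: predefined responses keyed by keywords.
# All keywords and reply texts here are illustrative, not from any product.

RESPONSES = {
    "refund": "I can help with refunds. Could you give me your order number?",
    "hours": "Our help desk is staffed from 8:00 to 18:00, Monday to Friday.",
    "password": "You can reset your password from the account settings page.",
}

FALLBACK = "Sorry, I did not understand that. Could you rephrase your question?"

def reply(user_message: str) -> str:
    """Return the first predefined response whose keyword appears in the input."""
    text = user_message.lower()
    for keyword, response in RESPONSES.items():
        if keyword in text:
            return response
    return FALLBACK

print(reply("How do I get a refund?"))   # matches the "refund" rule
print(reply("Tell me a joke."))          # no keyword matches: fallback reply
```

The fallback line illustrates the limitation mentioned above: outside its specialized purpose, such a chatbot can only deflect.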
Natural-language technology that generates ‘story-like’ content from spreadsheet data, for business reports for example, can also be used to produce content for chatbots. Companies in this field, such as Narrative Science, Cambridge Semantics, and Yseop, are currently enhancing their product and service offerings with Natural Language Generation (NLG). Gartner’s recent report on the ‘Hype Cycle for Business Intelligence and Analytics’ sums up the difference between NLG and Natural Language Processing (NLP) as follows: “Whereas NLP is focused on deriving analytic insights from textual and numeric data, NLG is used to synthesize textual content by combining analytic output with contextualized narratives.” In other words, NLP reads while NLG writes. NLP systems look at language and figure out what ideas are being communicated. NLG systems start with a set of ideas locked in data and turn them into language that, in turn, communicates them to the user.
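The “NLG writes” half of that distinction can be sketched with a toy template-based generator that turns a row of spreadsheet-style figures into a narrative sentence. Real NLG products are far more sophisticated; the field names and figures below are assumptions made up for the example:

```python
# Toy template-based NLG: turn one row of spreadsheet-style data into a
# business-report sentence. Field names ("quarter", "revenue", ...) are
# invented for this sketch.

def narrate(row: dict) -> str:
    """Synthesize a narrative sentence from structured figures."""
    change = row["revenue"] - row["prev_revenue"]
    direction = "rose" if change > 0 else "fell"
    pct = abs(change) / row["prev_revenue"] * 100
    return (f"In {row['quarter']}, revenue {direction} "
            f"{pct:.1f}% to {row['revenue']:,} USD.")

print(narrate({"quarter": "Q2 2017",
               "revenue": 1_240_000,
               "prev_revenue": 1_100_000}))
# prints: In Q2 2017, revenue rose 12.7% to 1,240,000 USD.
```

The ideas are “locked in data” (the dictionary); the function turns them into language — exactly the NLG direction, with the NLP direction being the inverse problem.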
Voice-activated chatbots
Apple’s difficult start with Siri in 2011, which introduced voice communication as a chatbot on the iPhone, was largely due to poor functionality and an unsympathetic, synthetic voice. Entering the market in 2014, Amazon’s combination of the Echo bidirectional speaker system and the Alexa voice-controlled digital assistant was far more successful, providing easy-to-use technology that can control gadgets in the home with a few spoken words and respond to verbal information requests, bypassing the need for a smartphone. With a market share of over 70% against bidirectional-loudspeaker products from competitors such as Apple and Google, Amazon has finally cracked the market for voice-based conversational systems. Millions of units have been sold over the past two years, providing conversational voice services from a growing list of over 15,000 information and service offerings delivered by third parties.
Next-generation chatbots
It is no secret that Amazon will announce a new version of the Echo loudspeaker with a small video camera mounted on top of the speaker. Combining voice with video input opens the door to a new set of conversational applications. Face recognition linked to emotion sensing will empower applications that respond to the individual’s current emotional state. As the voice modulation of artificial agents continues to improve, the voice response to a request might be aligned with the emotional state of the user. For example, a frustrated help request from a user having problems assembling new bathroom furniture could be met with a calming voice aimed at reducing the frustration.
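How an emotion-sensing front end might steer the response can be sketched as a simple policy that adapts the assistant’s speaking style to a detected emotional state. In practice the detection step would be a trained face/voice model; here the emotion label is simply passed in, and all category and style names are assumptions for illustration:

```python
# Hedged sketch: map a detected emotional state to a voice style for the
# reply. The emotion labels and style parameters are invented; a real
# system would obtain the label from face/voice emotion recognition.

VOICE_STYLE = {
    "frustrated": {"pace": "slow", "pitch": "low",
                   "preface": "No worries, let's take this step by step."},
    "happy":      {"pace": "normal", "pitch": "neutral", "preface": "Great!"},
    "neutral":    {"pace": "normal", "pitch": "neutral", "preface": ""},
}

def styled_reply(detected_emotion: str, answer: str) -> dict:
    """Wrap the factual answer in a voice style matched to the user's state."""
    style = VOICE_STYLE.get(detected_emotion, VOICE_STYLE["neutral"])
    text = (style["preface"] + " " + answer).strip()
    return {"text": text, "pace": style["pace"], "pitch": style["pitch"]}

print(styled_reply("frustrated",
                   "The side panel attaches before the top board."))
```

The same factual answer is delivered slowly and with a calming preface to a frustrated user, while other states leave it unchanged — the furniture-assembly scenario above in miniature.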
In a corporate context, team meetings can be enhanced by a 360° camera and a virtual coach that responds to individual questions and comments based on face recognition.
Neuroscience-inspired AI
Neuroscience provides a rich source of inspiration for new types of algorithms and virtual brain models, independent of and complementary to the mathematical and logic-based methods that have largely dominated traditional approaches to AI. Yet a gap remains between biological and machine intelligence. It is due in part to our incomplete knowledge of biological brains, the underlying mechanisms of cognition, and the nature of consciousness itself. It is also due to the fact that the complex computations that drive AI are often delivered as a “black box”: the system works, but we don’t really know why, and as a result we might not trust the answers it provides. For AI to progress beyond a highly specialized level, toward a general intelligence approaching human-level complexity, AI researchers are beginning to collaborate with neuroscientists, and individuals and companies are emerging with products and services that bridge the gap between biological and machine intelligence.
One such company is Numenta, co-founded in 2005 by Jeff Hawkins, who had founded Palm in 1992, before it was sold to HP in 2010. Numenta is tackling one of the most important scientific challenges: reverse engineering the neocortex. According to Wikipedia, the neocortex is the part of the human brain involved in higher-order brain functions such as sensory perception, cognition, generation of motor commands, spatial reasoning, and language. Numenta’s machine intelligence technology, called ‘Hierarchical Temporal Memory’ (HTM), is a computational, neuroscientific theory of the neocortex. Numenta believes that understanding how the neocortex works is the fastest path to machine intelligence and far more efficient than intelligence provided by conventional machine-learning algorithms coupled with massive data training. The company licenses its technology to third parties as a building block for specific applications. Cortical.io, for example, offers Natural Language Understanding (NLU) solutions based on Numenta technology, with a fundamentally new approach to handling ‘Big Text Data’ by extracting meaning and context in search and content-filtering applications.
Also based on a hierarchical model of the neocortex is Google’s new Smart Reply service. Developed by a team headed by Ray Kurzweil, a Google director of engineering, the new version of Smart Reply increases the percentage of usable suggestions and is much more algorithmically efficient than the original system, which was driven solely by machine learning. Smart Reply suggests up to three replies to an email message, saving you typing time, or giving you time to think through a better reply. Kurzweil’s software is based on his hierarchical theory of intelligence, articulated in his latest book ‘How to Create a Mind’ and described in more detail in an arXiv paper published in May 2017. In this paper, Kurzweil and his team provide evidence that the neocortex is a self-organizing hierarchy of modules, each of which can learn, remember, recognize and/or generate a sequence, in which each sequence consists of a sequential pattern from lower-level modules. According to Kurzweil, Smart Reply is just the first step in a project code-named ‘Kona’, which aims at nothing less than creating software as linguistically fluent as natural human language.
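The core idea in that paper, a hierarchy of modules where each level recognizes a sequence of patterns from the level below, can be illustrated with a deliberately simplified sketch. This is not Kurzweil’s actual system or Numenta’s HTM; the example words and the exact-match recognition rule are assumptions chosen only to make the layering visible:

```python
# Toy hierarchy of sequence modules: level 1 recognizes character
# sequences (words); level 2 recognizes a sequence of level-1 module
# names (a phrase). Purely didactic, not a real cortical model.

class SequenceModule:
    """Stores one sequence and reports whether an input stream matches it."""
    def __init__(self, name, sequence):
        self.name = name
        self.sequence = list(sequence)

    def recognize(self, inputs):
        return list(inputs) == self.sequence

# Level 1: modules over characters (letters form words).
low = [SequenceModule("cat", "cat"), SequenceModule("sat", "sat")]

# Level 2: a module over the names of level-1 modules (words form a phrase).
high = SequenceModule("cat-sat", ["cat", "sat"])

def recognize_phrase(words):
    """Feed each word to level 1, then the recognized names to level 2."""
    names = [m.name for w in words for m in low if m.recognize(w)]
    return high.recognize(names)

print(recognize_phrase(["cat", "sat"]))  # prints True: the higher module fires
```

Reversing the word order makes the level-2 module reject the input, which is the point of the construction: each level cares only about the order of patterns delivered by the level below.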
The combination of the building blocks discussed here will provide the foundation for a new generation of conversational learning systems. Conversational learning enhanced by neuroscience-inspired AI and next-generation chatbots will disrupt an education system that has worked for centuries. Imagine having 24/7 access to an artificial personalized coach, familiar with your personality, learning strengths, and limitations, that answers questions about a topic you just learned in school or via a MOOC. Learning will be much more fun and far more efficient, both in time spent and in cost. Retention levels will rise substantially, and our motivation for life-long learning will remain high as our contribution to society and its economic and political institutions is recognized and appreciated.