How will we communicate with our future personal assistants?

Posted by Peter Rudin on 30. August 2016 in Essay

Picture Credit: Randy Glasbergen

The idea that a computer takes on the role of a personal assistant has been around for many years. The smartphone, with its internet access to ‘the cloud’, has become our mobile companion, with apps providing instant personal advice on everything from travel planning to health issues, to name just a few. With Singularity coming closer, the smartphone as personal assistant is likely to provide far more value, potentially enhancing our life experience. These assistants will be based on artificial intelligence machines (AIMs), physically modeled on today’s smartphone but with a computational capacity equivalent to today’s most powerful computer systems. For example, these AIMs could expand our memory by storing data beyond our brain’s capacity on external devices, and support our decision-making by applying deep-learning algorithms to the stored data.

A powerful demonstration of the progress in deep-learning algorithms came in March 2016, when AlphaGo, a computer program developed by Google DeepMind, beat the world’s best Go player, Lee Sedol, 4:1. Whereas chess players are able to look a few moves ahead, in Go this isn’t possible without the game unfolding into intractable complexity. There is also no straightforward way to measure advantage, and it can be hard for even an expert player to explain precisely why he or she made a particular move. AlphaGo wasn’t told how to play Go at all. Instead, the program analyzed hundreds of thousands of games and played millions of matches against itself, building up an intuitive sense of strategy.
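The self-play idea can be illustrated on a far smaller game. The sketch below is only an illustration, not AlphaGo’s actual algorithm (which combines deep neural networks with Monte Carlo tree search): a tabular learner plays the game of Nim against itself and, like AlphaGo, is never told the winning strategy — it sees only outcomes.

```python
import random

# Tabular self-play on the game of Nim: one pile of sticks, players
# alternate taking 1-3 sticks, and whoever takes the last stick wins.
# The learner improves purely from the results of its own games.

def self_play_train(pile_size=10, episodes=50000, alpha=0.1, eps=0.2):
    Q = {}  # Q[(sticks_left, move)] -> estimated value for the player to move
    for _ in range(episodes):
        state, history = pile_size, []
        while state > 0:
            moves = [m for m in (1, 2, 3) if m <= state]
            if random.random() < eps:            # explore occasionally
                move = random.choice(moves)
            else:                                # otherwise play greedily
                move = max(moves, key=lambda m: Q.get((state, m), 0.0))
            history.append((state, move))
            state -= move
        reward = 1.0                             # the last mover won
        for s, m in reversed(history):           # walk back, flipping sides
            old = Q.get((s, m), 0.0)
            Q[(s, m)] = old + alpha * (reward - old)
            reward = -reward
    return Q

def best_move(Q, state):
    moves = [m for m in (1, 2, 3) if m <= state]
    return max(moves, key=lambda m: Q.get((state, m), 0.0))
```

After training, the learned policy tends to leave the opponent a multiple of four sticks, the known winning strategy, without ever having been told it.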

In order to communicate with personal assistants we currently have a number of options:

  • Textual and numerical data, formatted or unformatted in multiple languages
  • Voice and video recording or interactive communication with assistants like Apple’s Siri
  • AIM-powered chatbots for interactive help services
  • Virtual images and videos as presented by Virtual or Augmented Reality Systems
  • Real-time sensory input from our body (heart rate, temperature, blood pressure, blood sugar etc.)
  • Real-time sensory input monitoring our brain, either through stationary fMRI scanners that track blood flow or through portable EEG/NIRS sensors that measure the brain’s electrical activity and blood oxygenation. The EEG/NIRS sensors are mounted around the skull in a noninvasive manner. Interfaced to a computer, the data generated while a task is performed is analyzed and used for feedback, which is presented on a display in graphical form to teach self-regulation of brain functions.
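The feedback loop in the last bullet can be sketched in a few lines. The snippet below is a simplified illustration; the sampling rate, band limits and scoring rule are assumptions, not taken from any particular device. It computes alpha-band (8–12 Hz) power from a window of EEG samples and maps it to a 0-to-1 score that a display could render as feedback.

```python
import numpy as np

FS = 256  # assumed sampling rate in Hz, typical for consumer EEG headsets

def band_power(samples, fs=FS, lo=8.0, hi=12.0):
    """Signal power within [lo, hi] Hz, via a discrete Fourier transform."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].sum() / len(samples)

def feedback(samples, baseline):
    """A 0..1 'relaxation' score relative to a calibration baseline."""
    return min(1.0, band_power(samples) / (2.0 * baseline))
```

A real neurofeedback system would repeat this on a sliding window, redrawing the score many times per second so the user can learn to move it.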

About 80% of the brain’s activity is engaged in visual tasks, storing or analyzing images. Google and Facebook have made significant progress in their efforts to recognize and classify images, for example determining whether an image shows a cat or a dog, or matching the image of a face to a specific person.
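At its core, classification of the kind described above is a mapping learned from labeled examples and then applied to new inputs. The toy sketch below shows that contract in miniature: a nearest-centroid classifier on hand-made 4-pixel ‘images’. The data and labels are invented for illustration; the real systems the essay mentions use deep neural networks trained on millions of photos.

```python
import math

def centroid(images):
    """Average the training images of one class, pixel by pixel."""
    n = len(images)
    return [sum(pixel_values) / n for pixel_values in zip(*images)]

def fit(labeled):
    """labeled: {"cat": [image, ...], "dog": [image, ...]} -> class centroids."""
    return {label: centroid(imgs) for label, imgs in labeled.items()}

def classify(model, image):
    """Assign the label whose centroid is nearest in Euclidean distance."""
    return min(model, key=lambda label: math.dist(model[label], image))
```

With invented data in which ‘cat’ images are bright and ‘dog’ images are dark, a new bright image is labeled “cat” simply because it lies nearest that class centroid.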

To enhance its own research effort, Google announced in July 2016 the purchase of the French company Moodstocks, gaining talented engineers to work on image recognition. Moodstocks’ CEO commented: ‘Ever since we started Moodstocks, our dream has been to give eyes to machines by turning cameras into smart sensors able to make sense of their surroundings.’

While fMRI scanners and EEG/NIRS sensors are noninvasive, new brain-invasive devices are being developed to support communication between man and machine in a far more direct way. Currently, research is focused on dealing with brain injuries, paralysis and brain illnesses such as Alzheimer’s disease, epilepsy or Parkinson’s disease. The following are some examples of progress in this area:

  • Brain-computer interfaces (BCIs) have been developed to support the movement of artificial limbs, such as leg prostheses, bypassing neural channels broken in an accident that paralyzed the legs.
  • Research efforts to build invasive brain prostheses that enhance memory performance for people with Alzheimer’s are under way, partially funded by DARPA (Defense Advanced Research Projects Agency, USA) or by startups such as Kernel.
  • If Kernel succeeds, surgeons will one day implant its tiny device in patients’ brains, specifically in the brain region called the hippocampus. There, the device’s electrodes will electrically stimulate certain neurons to help them do their job: turning incoming information about the world into long-term memories.

Image Credit: Ted Berger, Kernel

The hippocampus is a key brain region involved in memory formation and storage.


It does not take much to imagine that these new devices will be equipped with WLAN-like communication features, providing an external communication channel to the monitored brain regions. University of Washington researchers have introduced a variant on Wi-Fi called “interscatter communication” that may allow devices such as brain implants, contact lenses, credit cards and small wearable electronics to talk to everyday devices such as smartphones and watches (to report a medical condition, for example). Using only reflections, an interscatter device such as a smart contact lens converts Bluetooth signals from, say, a smartwatch into Wi-Fi transmissions that can be picked up by a smartphone. The team of University of Washington electrical engineers and computer scientists has demonstrated for the first time that such power-limited devices can “talk” to others using standard Wi-Fi communication. Their system requires no specialized equipment, relying solely on mobile devices that users commonly carry, and it generates Wi-Fi signals using 10,000 times less energy than conventional methods.


As ongoing brain research locates the origins of human behavior more precisely in specific brain regions, we can assume that ‘braintech’ will provide specific brain implants to enhance human health and performance. With that in place, non-textual information, about our emotions for example, might be fed directly to our personal assistant for further analysis. This, however, is also the subject of increasing concern in view of potential misuse. The book ‘The Utopia Experiment’, written by Kyle Mills under the brand of Robert Ludlum and published in 2013, describes the devastating impact of brain implants in a scenario where a scientist attempts to control our society. Only a few years after the book was published, its scenario no longer reads as pure science fiction. Rather, it provides another strong argument that ethical standards are required to set barriers against the misuse of our Singularity-Ecosystem.

Despite these impressive advances, one fundamental capability remains elusive: the comprehension of language. Systems like Apple’s Siri can follow simple spoken or typed commands and answer basic questions, but they can’t hold a conversation and have no real understanding of the words they use.

There is a fundamental problem with applying deep learning to language: words are arbitrary symbols, and as such they are fundamentally different from imagery. The same word can mean different things in different contexts.
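A tiny, hand-made example makes this concrete. In a static word embedding of the kind used in early deep-learning language models, the string ‘bank’ gets exactly one vector (the numbers below are invented for illustration), so it cannot sit close to ‘river’ in one sentence and close to ‘money’ in another; its single vector just splits the difference between the two senses.

```python
import math

# Invented 2-d vectors for illustration; real embeddings have hundreds of
# dimensions and are learned from text, but share the limitation shown here.
embedding = {
    "bank":  [0.7, 0.7],   # one vector for both the river and finance senses
    "river": [1.0, 0.0],
    "money": [0.0, 1.0],
}

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# "bank" is equally, and imperfectly, similar to both of its senses:
river_sim = cosine(embedding["bank"], embedding["river"])
money_sim = cosine(embedding["bank"], embedding["money"])
```

The two similarities come out identical, which is exactly the problem: the context that would disambiguate the word never reaches the representation.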

A truly personal assistant, however, has to be able to communicate like a human being, in a natural and agreeable way. Today we lack the algorithms that would enable AIMs to handle this task. However, as neuroscience and brain research progress at their current rapid pace, we will eventually understand how human language comprehension is built up and maintained.

Despite the difficulty and complexity of the problem, the success researchers have had using deep-learning techniques to recognize images and excel at games like Go provides hope that we might be on the verge of breakthroughs in language, too. As personal assistants become tools that people use to augment their own intelligence, as the Singularity stipulates, their language capability will be essential.

