Researching Brain Activity
(Image credit: sciencenews.org)
The brain is our most important organ, and evolution has gone to great lengths to protect it from damage. The most obvious protection is our 7 mm thick skull. The brain is also surrounded by a protective fluid, the cerebrospinal fluid, and a protective membrane called the meninges. Another protective element is the blood–brain barrier: as the name suggests, a barrier between the brain’s blood vessels (capillaries) and the cells and other components that make up brain tissue. Whereas the skull, meninges and cerebrospinal fluid protect against physical damage, the blood–brain barrier provides a defence against disease-causing pathogens and toxins that may be present in our blood.
To observe, analyse and understand the functionality of the human brain, with its 80 billion neurons and trillions of synaptic connections, one must overcome these protective barriers. Over the last two decades, scientists and engineers have developed and improved devices to read or influence brain activity, mostly for health reasons such as preventing epileptic seizures or restoring the functionality of paralysed legs or arms. So-called Brain–Computer Interfaces (BCIs) read and interpret brain activity and communicate the results to an external computer for further processing. To handle the massive influx of data, AI is used to detect neural patterns and provide feedback. Broadly speaking, BCIs are classified as invasive or non-invasive, depending on how connectivity to the brain is established.
In 2017 Elon Musk started his biotech company Neuralink with the goal of positioning an exceptionally fine mesh of conductive material under the human skull to measure the activity of neurons with an external AI system. The technique of placing sensors into specific brain regions to help patients with epilepsy is already established and, as is the case with Neuralink, requires invasive surgery. The plan is to position a small emitter/receiver, connected to the mesh, behind the ear, transferring neural-activity data to the AI processor via WLAN. So far tests have been conducted with pigs, with an emitter/receiver mounted on top of the skull. Another, less invasive form of BCI technology has recently been announced by researchers from Caltech: a device that emits and steers ultrasound, capturing signals from the brain region under study. To perform this task, the emitter/receiver is placed directly under the skull without invading the brain tissue. Functional ultrasound works similarly to non-invasive functional MRI (fMRI) but at a far higher resolution, roughly ten times, the authors said. The following describes some of the invasive BCI applications currently under development:
Neuroprosthetic limb rehabilitation:
BCIs with 3D spatial control of robotic arms support grasping objects or self-feeding, e.g. picking up a cup of coffee. Brain activity is recorded while the individual imagines moving his or her hand; the recording is then decoded and transferred as a command to the robotic arm.
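As a purely illustrative sketch (not any specific product’s decoder), such a pipeline reduces a window of recorded activity to a feature vector and maps it to a discrete movement command; the channel count, weights and labels below are hypothetical:

```python
def extract_features(samples):
    """Reduce a window of recorded channel amplitudes to per-channel means."""
    n = len(samples)
    channels = len(samples[0])
    return [sum(s[ch] for s in samples) / n for ch in range(channels)]

def decode_command(features, weights, threshold=0.0):
    """Linear decoder: a weighted sum of features selects 'left' or 'right'."""
    score = sum(f * w for f, w in zip(features, weights))
    return "right" if score > threshold else "left"

# Hypothetical two-channel recording in which channel 0 dominates
window = [[0.9, 0.1], [0.8, 0.2], [1.0, 0.0]]
features = extract_features(window)
command = decode_command(features, weights=[-1.0, 1.0])
```

A real decoder would be trained on the user’s own imagined-movement recordings and would drive continuous 3D trajectories rather than a binary choice.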
Rehabilitation of somatosensation (i.e. touch):
Receptors in our skin convey information about the location of contacts as well as the forces exerted on the skin when we grasp an object. Given the importance of somatosensation, the development of bidirectional, AI-supported BCIs is essential to advancing limb rehabilitation.
Rehabilitation of vision:
Visual prosthetic development has one of the highest priorities in the field of biomedical engineering.
Rudimentary vision can be achieved by converting images into binary pulses of electrical signals and delivering them to the visual cortex. Artificial neural networks may provide new methods for mimicking the natural visual system via BCIs.
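A toy sketch of the image-to-pulse conversion described above (simple thresholding is an assumption for illustration; real visual prostheses use far more sophisticated encodings):

```python
def image_to_pulses(pixels, threshold=128):
    """Map a grayscale image (values 0-255) to a binary on/off pulse pattern,
    one pulse decision per pixel/electrode."""
    return [[1 if p >= threshold else 0 for p in row] for row in pixels]

# A 2x2 grayscale patch: bright pixels become stimulation pulses
pulses = image_to_pulses([[0, 200], [128, 127]])
```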
The acceleration of non-invasive BCI development
Brain activity can be analysed through electrical sensors (EEG) mounted externally in headbands or caps with built-in, pre-positioned sensors, measuring the electrical activity emitted by the brain. Several interactive applications have been developed to stimulate and/or train brain activity with neurofeedback in order to overcome behavioural problems such as claustrophobia or hyperactivity. Measuring blood flow around the brain with functional Magnetic Resonance Imaging (fMRI) provides more accurate results; however, the sheer size and weight of the equipment limits its use for brain-behavioural research. EEG devices, in contrast, have the advantage of being portable, with the computer directly attached to the device.
Optogenetics, invented in 2005 by Professor Karl Deisseroth and his team at Stanford University, provides another revolutionary path towards understanding the functionality of the human brain. Light-sensitive proteins called opsins are genetically introduced into neurons. When triggered with light, these proteins cause neurons to fire, similar to the way neurons naturally fire. Hence light can act as an artificial trigger for activating or inhibiting specific neurons without physically placing sensors into the brain.
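As a concrete illustration of the EEG analysis mentioned above, a common first step is estimating signal power in frequency bands such as alpha (8–13 Hz); this minimal discrete-Fourier-transform sketch is illustrative only and not tied to any particular headset’s API:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Estimate power in the band [f_lo, f_hi] Hz via a naive DFT."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

# Synthetic 1-second recording at 128 Hz dominated by a 10 Hz (alpha) rhythm
fs = 128
signal = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
alpha = band_power(signal, fs, 8, 13)
theta = band_power(signal, fs, 4, 7)
```

Neurofeedback applications then map such band-power values to visual or auditory feedback so users can learn to modulate them.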
Ethical issues related to BCI-applications
In an essay published in Nature on 9 November 2017, Rafael Yuste, professor at Columbia University, and Sara Goering, associate professor at the University of Washington, both co-founders of the Morningside Group, raise several ethical concerns regarding BCI applications:
Privacy and Consent
The informational richness of brain recordings holds the potential of encoding highly private and sensitive information about individuals, including predictive features of their health status and mental states. Decoding such private information is anticipated to become increasingly easy due to advances in sensor capability, spatial resolution of recordings, and machine learning techniques for pattern recognition and feature extraction. An extraordinary level of personal information can already be obtained from people’s data trails. Researchers at MIT, for example, discovered in 2015 that people’s motor behavior, revealed through their keyboard typing patterns on personal computers, could enable earlier diagnosis of Parkinson’s disease.
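Purely as an illustration of how typing dynamics become an analysable data trail (this is not the MIT study’s actual method), key-hold durations can be summarised into features that a classifier could consume:

```python
from statistics import mean, stdev

def hold_time_features(press_times, release_times):
    """Summarise key-hold durations (release minus press, in seconds)
    as their mean and sample standard deviation."""
    holds = [r - p for p, r in zip(press_times, release_times)]
    return mean(holds), stdev(holds)

# Hypothetical timestamps for three keystrokes
mean_hold, hold_sd = hold_time_features([0.0, 1.0, 2.0], [0.1, 1.12, 2.08])
```

Features like these could then feed a diagnostic model of the kind the essay describes.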
Free Will and Identity
Some people receiving deep-brain stimulation through electrodes implanted in their brains have reported feeling an altered sense of free will and identity. In a 2016 study, a man who had used a brain stimulator to treat his depression reported that he began to wonder whether the way he was interacting with others — for example, saying something that, in retrospect, he thought was inappropriate — was due to the device or his depression or whether it reflected something deeper about himself. He said: “It blurs to the point where I’m not sure who I am.” As neuro-technologies develop, individual identity (defined as our bodily and mental integrity) and free will (defined as our ability to choose our actions) must be protected as a basic human right.
Merging biological intelligence with machine intelligence
At the 2017 World Government Summit in Dubai, Elon Musk suggested that humans should merge their biological intelligence with machine intelligence to enhance their cognitive and sensory abilities, fighting off the potential supremacy of super-intelligent machines. Instead of the slow process of typing, for example, one’s thoughts could be transferred directly via BCI from the brain to external memory, far exceeding our manual communication capacity. Researchers at the University of Miami have just published a paper on bioRxiv, ‘In vivo contactless brain stimulation via non-invasive and targeted delivery of magnetoelectric nanoparticles’, concerning a novel class of ultrafine particles called magnetoelectric nanoparticles (MENs), which can be injected non-invasively into the bloodstream, passing the blood–brain barrier for bidirectional monitoring and control of individual neurons based on the electromagnetic properties of these particles. Still one to two decades away from practical use by humans, this merger of machine intelligence and neurotechnology raises issues which go beyond ethical guidelines.
In response, the Morningside Group proposes that the sale, commercial transfer and use of neural data be strictly regulated. Such regulations, which would also limit the possibility of people giving up their neural data or having neural activity written directly into their brains for financial reward, may be analogous to legislation that prohibits the sale of human organs, as stipulated by the 1984 US National Organ Transplant Act. Following this line of thought, one can also imagine a brain-augmentation arms race. In recent years, staff at DARPA and the US Intelligence Advanced Research Projects Activity have discussed plans to provide soldiers with enhanced mental abilities, turning them into ‘super-intelligent agents’.
Hence, tough legal constraints, adopted globally, need to be enforced to regulate the potentially positive as well as the potentially negative implications of neural augmentation along the following guidelines:
a) potentially positive implications
Achieving Artificial General Intelligence (AGI) remains the holy grail of the AI research community. Non-invasive BCI technology deepens our knowledge of the brain’s functionality and hence supports the modelling of neural activities related to cognitive and behavioural tasks.
Individuals’ memory, attention spans and cognitive performance can be improved. Monitoring excessive stress or cognitive workload becomes easier, improving task performance.
The activity of robots can be guided through thoughts without manual intervention, facilitating production processes and testing of new procedures in product development.
Faster information sharing and improved situational awareness leads to more rapid and accurate decisions. In an economic context this can lead to a significant competitive advantage. In respect to health, better analysis and prediction for treatment will improve recovery.
b) potentially negative implications
BCI technology can be misused for totalitarian control of people and organisations, extending supervision to new levels beyond the current applications of face recognition and motion tracking.
Manipulating brain activity:
By decoding people’s mental processes and manipulating the brain’s neural mechanisms, the intentions, emotions and decisions underlying behaviour can be influenced externally, at both the individual and the organizational level.
Unequal access to BCI technology, widening existing social and economic inequities:
BCI-enhanced humans and organizations can outperform ordinary humans. The pressure to adapt to advancements in neuro-technologies – like those that allow people to radically expand their endurance or their sensory and mental capacities – is likely to change societal norms, raising issues of equitable access and generating new forms of discrimination.
Are we heading towards a new Dystopia?
While Utopia defines a place, state or condition that is perfect in respect of politics, laws and customs, Dystopia, as its opposite, defines a futuristic, imagined universe in which oppressive societal control and the illusion of a perfect society are maintained through bureaucratic, technological or totalitarian means. Orwell’s ‘1984’, first published in 1949, illustrated the danger of government mass surveillance. Orwell feared that we would become a captive culture and that fear would ruin us as we were overcome by externally imposed oppression. Huxley’s ‘Brave New World’, first published in 1932, focuses on our individual identity. By removing the family structure, people become conceptually nothing more than a product of the state. As he saw it, people would come to love their oppression and adore the technologies that undo their capacity to think. Many of the concerns raised by Huxley and Orwell have become reality: totalitarian supervision, identity loss through emotional brain-hacking, fake information misguiding decision-making, fear of cyberattacks and so on. The negative implications of non-invasive BCIs were not yet an issue at the time these books were written. Adding them to the equation, however, and considering what has become reality from past dystopian visions, there are good reasons to be alarmed. It will take courage and foresight to analyse the potential consequences and to implement corrective actions before a new, far more destructive dystopian scenario takes hold.