Introduction
With the implementation of neural network software mimicking the human brain, artificial intelligence (AI) has made huge progress in recent years. Thanks to machine learning algorithms, artificially intelligent machines (AIMs) are now successfully used for image and voice recognition, language translation, big data analytics for decision making, and the interpretation of sensory input to guide cars and robots.
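To make the term ‘neural network software’ a little more concrete, the following minimal sketch (a hypothetical Python/NumPy example, not taken from any particular product) shows the core idea: layers of artificial ‘neurons’ that each compute a weighted sum of their inputs and pass it through a nonlinearity, which is the loose sense in which such software mimics the brain.

```python
# Minimal sketch of one forward pass through a tiny neural network.
# Each artificial "neuron" sums weighted inputs and applies a nonlinearity,
# a very loose analogy to biological neurons. Weights are random here;
# in practice they are learned from data (e.g. labelled images or speech).
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    """One fully connected layer: weighted sum plus ReLU nonlinearity."""
    return np.maximum(0.0, x @ weights + bias)

# A toy "image": 64 pixel intensities flattened into one vector.
x = rng.random(64)

# Two layers mapping 64 inputs -> 16 hidden units -> 3 class scores.
w1, b1 = rng.normal(size=(64, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 3)), np.zeros(3)

scores = layer(x, w1, b1) @ w2 + b2              # raw scores for 3 classes
probs = np.exp(scores) / np.exp(scores).sum()    # softmax -> probabilities
print("predicted class:", int(np.argmax(probs)))
```

In real image- or voice-recognition systems the same structure is scaled up to millions of learned weights; the sketch only illustrates the basic mechanism.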
Whereas developments in technology have largely been driven by economics and market demand, governed by regulations concerning product safety and environmental issues, developments in medical science and health care must also consider ethical standards.
Research and innovation are particularly difficult to govern because they create novelty and surprise. The implementation of technology coupled with neuroscience is a complex, open-ended and unpredictable process. The full extent of the risks and side-effects will only be known by experience; and by that time they may be irreversible due to their magnitude or their entrenchment into societal infrastructures or human culture. Political and regulatory action has to include an element of anticipation, acting upon sociotechnical imaginaries and visions.
That such visions might produce entirely different results can be demonstrated by comparing two famous literary works: Aldous Huxley’s novel ‘Brave New World’, published in 1932, and George Orwell’s novel ‘1984’ (‘Big Brother Is Watching You’), published in 1949.
Social critic Neil Postman contrasted the worlds of ‘1984’ and ‘Brave New World’ in the foreword of his 1985 book Amusing Ourselves to Death. He writes:
What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egotism. Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance. Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture. In short, Orwell feared that our fear might ruin us. Huxley feared that our desire might ruin us.
As AI invades our personal integrity, be it through behavioral data collected from internet transactions or through brain interfaces and mind-controlled prosthetics, we have reached a crossroads in ethics, both on a technological track and on a medical and neuroscientific one.
Ethics
Ethics or moral philosophy investigates the questions “What is the best way for people to live?” and “What actions are right or wrong in particular circumstances?” In practice, ethics seeks to resolve questions of human morality by defining concepts such as good and evil, right and wrong, virtue and vice, justice and crime.
According to Wikipedia, the ethics of neuroscience concerns the ethical, legal and social impact of neuroscience, including the ways in which neuro-technology can be used to predict or alter human behavior. Some neuro-ethics problems are not fundamentally different from those encountered in bio-ethics. Others are unique to neuro-ethics because the brain, as the organ of the mind, raises broader philosophical concerns, such as the nature of free will, moral responsibility, self-deception, consciousness and personal identity.
In December 1999 the ‘Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine’ went into force. It draws on the principles established by the European Convention on Human Rights. This international treaty, also referred to as the Oviedo Convention, is a manifestation of the effort on the part of the Council of Europe to keep pace with developments in the field of biomedicine while adhering to ethical principles. The Convention is the first legally binding international text designed to preserve human dignity, rights and freedoms through a series of principles and prohibitions against the misuse of biological and medical advances. There are at least three articles within the Convention that directly relate to the application of neuroscience and artificial intelligence. They state the following:
Article 1: Dignity, identity and respect for human integrity
Dignity is difficult to define; however, enforced or unnoticed persuasion or personality alteration must be seen as violating human dignity and integrity.
Identity can be threatened individually and collectively by persuasive and personality-altering technologies that could lead to human alterations or even the emergence of new life-forms that couple machines, bodies and consciousness.
Article 2: Primacy of the human being
This article becomes applicable as governmental and corporate institutions perform research on humans to develop new technological opportunities for mass and individual persuasion. It also applies to experimental research on human subjects, in particular when performing risky interventions on the human brain where side effects can be expected.
Article 10: The right to private life
What can be learnt from the emergence of mass data collection, which often blurs the line between medical and non-medical data, is that the protection of such data is vital to exercising and upholding a private life. The current collection of comprehensive and linked “big” data sets via social media, the internet of things and other devices may constitute a threat to the right to private life.
Brain-Computer Interface (BCI)
Brain-Computer Interface (BCI) technology is applied when a machine is connected to or directly controlled by the activity of the human brain. BCI technology is well suited for use in treatments or prosthetics that overcome physical limitations resulting from injury or illness. However, BCI technology can also be used to enhance human capabilities with respect to cognitive intelligence or memory retention.
BCIs can be applied to the brain in a non-invasive manner, for example through EEG devices where external electrodes are attached to the scalp. Invasive BCI, in contrast, refers to any system where the interface device is physically implanted into the brain and where neural impulses can be used to communicate with the external device of the BCI system.
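To illustrate how a non-invasive BCI can turn brain activity into a machine command, here is a minimal, hypothetical Python sketch: it band-pass filters one EEG channel around the mu rhythm, measures the band power and maps it to a binary command. The sampling rate, frequency band, threshold and synthetic test signal are illustrative assumptions, not the pipeline of any actual device.

```python
# Minimal sketch of a non-invasive BCI pipeline: band-pass filter an EEG
# channel, measure its band power, and map it to a simple binary command.
# The signal here is synthetic; frequencies and threshold are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250                   # sampling rate in Hz (typical for consumer EEG)
MU_BAND = (8.0, 12.0)      # mu rhythm, often used in motor-imagery BCIs

def bandpass(signal, low, high, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def band_power(signal):
    """Mean squared amplitude of the filtered signal."""
    return np.mean(signal ** 2)

def decode_command(eeg_window, threshold=0.25):
    """Map one window of raw EEG samples to a binary command."""
    mu = bandpass(eeg_window, *MU_BAND, FS)
    return "MOVE" if band_power(mu) > threshold else "REST"

if __name__ == "__main__":
    t = np.arange(0, 2.0, 1.0 / FS)               # 2-second window
    rest = 0.2 * np.random.randn(t.size)          # background noise only
    active = rest + np.sin(2 * np.pi * 10 * t)    # strong 10 Hz mu activity
    print(decode_command(rest), decode_command(active))   # -> REST MOVE
```

Real systems add artifact removal, multi-channel spatial filtering and trained classifiers, but the basic chain of signal acquisition, feature extraction and decoding remains the same.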
Silicon Valley entrepreneur Elon Musk, the founder of Tesla and other visionary ventures, recently stated that humans have to merge with machines in order to stay relevant.
Following this vision, he has launched a California-based company called Neuralink Corp. to pursue ‘neural lace’ brain-interface technology, implanting a digital layer above the cortex that acts as an interface to external AIMs. Kernel, another venture-capital-funded company, is also working to enhance human intelligence with an interface to the human brain. The interfaces proposed by Neuralink and Kernel are bidirectional; hence data can also be fed to the brain, enhancing or augmenting its memory and intelligence capacity.
In our present-day economy, human intelligence is tightly coupled with employment status and income. The smarter you are, the bigger the opportunity for financial wealth and higher social status. Consequently, enhancing one’s intelligence by external means is tempting. Human performance enhancements can also be accomplished by pharmaceutical means. However, the potential impact of BCIs far exceeds what legal or illegal drugs can accomplish. BCI-based ‘brain doping’ reaches a new dimension of human performance enhancement that goes far beyond our current legal framework for applying drugs or psychotherapy to treat mental and/or performance problems.
Conclusion
BCIs as envisioned by Elon Musk add a new dimension to the debate about ethics and AI. As artificial intelligence will eventually match human intelligence, we need to decide whether to maintain the current differentiation, in which AIMs and humans communicate through our existing ‘slow’ sensory interfaces (eyes, hands, ears, voice), or to follow Elon Musk’s advice and merge with AIMs through direct ‘high-speed’ BCIs in order to stay on par with the coming age of ‘Superintelligence’, as outlined by the philosopher and transhumanist Nick Bostrom in his book published in 2014.
The Oviedo Convention, with the three articles listed above, provides a good starting point for the urgently needed debate about ethics and AI. Recognizing the intent of these articles, it seems clear that Elon Musk’s BCI concept violates the Oviedo Convention.
BCIs for human performance enhancement represent just one issue regarding AI ethics. There are other issues, some of which relate to the Oviedo Convention as well:
- Privacy and personal data protection
- Ethical standards applied to machine learning software
- Data integrity and trustworthiness to avoid misinformation
- Unrestricted democratic access to AI based human performance enhancements
We have reached a crossroads in ethics where standards and the means for governance of AI and neuroscience need to be discussed at the highest possible level. However, to advance this debate we urgently need a vision of how humanity will exist in the forthcoming age of the singularity. Being scared by science fiction is of little help unless we want to be doomed to self-destruction.