The enormous progress AI has made in recent years is largely due to machine-learning technology: neural networks applied to huge data sets to derive solutions to specific problems such as image recognition and language translation, to name just two. Artificial neural networks, a software technique pioneered by researchers such as Geoffrey Hinton, Professor at the University of Toronto, only partially describe how the human neural network, made up of about 80 billion neurons and trillions of synapses, really works. In contrast to machine-learning systems, which require enormous computing power and kilowatts of electricity, the human brain runs on about 20 watts, as energy is consumed only when neurons fire and communicate with each other. The term ‘Artificial Intelligence’ as applied to machine-learning technology is, strictly speaking, not correct: real neurons in our brain are not comparable to the ‘neurons’ described in neural-network software.
However, as neuroscience and AI begin to merge toward a true ‘Artificial General Intelligence’ (AGI), the issue of ethics needs fresh attention. Technological developments in machine intelligence and neuro-technology imply that:
- it will be possible to decode people’s mental processes and directly manipulate the brain mechanisms underlying their intentions, emotions and decisions;
- individuals could communicate with others simply by thinking;
- powerful computational systems linked directly to people’s brains support their interactions with the world such that their mental and physical abilities are greatly enhanced.
Consequently, the current efforts to incorporate standards of ethics into AI have to be extended to include neuro-technology as well. Calling themselves the Morningside Group, 27 neuroscientists, neuro-technologists, clinicians, ethicists and machine-intelligence engineers are working on an enhanced version of ethical guidelines that takes into account both machine learning and neuro-technology. These experts conclude that current ethics guidelines for experimenting on people and developing artificial intelligence do not even acknowledge the dystopian possibilities of neuro-technology. Because “citizens should have the ability — and right — to keep their neural data private,” the Morningside Group states that “neurorights” should be incorporated into national laws as well as international pledges such as the Universal Declaration of Human Rights. Eavesdropping on thoughts is only the beginning of the alarming possibilities. Researchers are going beyond reading the brain to “writing” it, that is, activating neurons with an external device in a way that alters circuits, controls thought, and even implants memories.
Such advances could revolutionize the treatment of many conditions, from brain injury and paralysis to epilepsy and schizophrenia, and transform human experience for the better. But the technology could also exacerbate social inequalities and offer corporations, hackers, governments or anyone else new ways to exploit and manipulate people. And it could profoundly alter some core human characteristics: private mental life, individuality and an understanding of individuals as entities bound by their bodies.
Ethics in AI and Neuro-Technology
In an essay published in Nature on November 9, 2017, Rafael Yuste, professor at Columbia University, and Sara Goering, associate professor at the University of Washington, both founders of the Morningside Group, set out four concerns to be addressed by ethics:
Privacy and Consent
An extraordinary level of personal information can already be obtained from people’s data trails. Researchers at the Massachusetts Institute of Technology in Cambridge, for example, discovered in 2015 that fine-grained analysis of people’s motor behavior, revealed through their keyboard typing patterns on personal devices, could enable earlier diagnosis of Parkinson’s disease. A 2017 study suggests that measures of mobility patterns, such as those obtained from people carrying smartphones during their normal daily activities, can be used to diagnose early signs of cognitive impairment resulting from Alzheimer’s disease. Algorithms that are used to target advertising, calculate insurance premiums or match potential partners will be considerably more powerful if they draw on neural information — for instance, activity patterns from neurons associated with certain states of attention. And neural devices connected to the Internet open up the possibility of individuals or organizations (hackers, corporations or government agencies) tracking or even manipulating an individual’s mental experience.
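How much such data trails can reveal is easy to illustrate. The sketch below is a hypothetical, simplified illustration of keystroke-dynamics feature extraction of the general kind such motor-behavior analyses build on; the function, feature names and sample values are invented for this example and are not taken from the MIT study:

```python
# Toy sketch: basic timing features from a list of key-press timestamps.
# All names and numbers here are illustrative, not a real diagnostic method.
from statistics import mean, stdev

def keystroke_features(press_times):
    """Compute simple timing features from key-press timestamps (seconds)."""
    # Inter-key intervals: time between consecutive key presses
    intervals = [t2 - t1 for t1, t2 in zip(press_times, press_times[1:])]
    return {
        "mean_interval": mean(intervals),    # overall typing speed
        "interval_stdev": stdev(intervals),  # rhythm variability
        "max_pause": max(intervals),         # longest hesitation
    }

# Example: timestamps from a short typing burst
sample = [0.00, 0.18, 0.35, 0.61, 0.80, 1.30, 1.49]
print(keystroke_features(sample))
```

Features like these, collected continuously and passively from personal devices, are exactly the sort of fine-grained behavioral signal the studies above exploit.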
To limit this problem, the Morningside Group proposes that the sale, commercial transfer and use of neural data be strictly regulated. Such regulations — which would also limit the possibility of people giving up their neural data or having neural activity written directly into their brains for financial reward — may be analogous to legislation that prohibits the sale of human organs, such as the 1984 US National Organ Transplant Act.
Free Will and Identity
Some people receiving deep-brain stimulation through electrodes implanted in their brains have reported feeling an altered sense of free will and identity. In a 2016 study, a man who had used a brain stimulator to treat his depression for seven years reported in a focus group that he began to wonder whether the way he was interacting with others — for example, saying something that, in retrospect, he thought was inappropriate — was due to the device or his depression or whether it reflected something deeper about himself. He said: “It blurs to the point where I’m not sure … frankly, who I am.” As neuro-technologies develop and corporations, governments and others start striving to endow people with new capabilities, individual identity (our bodily and mental integrity) and free will (our ability to choose our actions) must be protected as basic human rights.
Augmentation
People frequently experience prejudice if their bodies or brains function differently from others’. The pressure to adopt enhancing neuro-technologies, such as those that allow people to radically expand their endurance or sensory or mental capacities, is likely to change societal norms, raise issues of equitable access and generate new forms of discrimination.
Moreover, it is easy to imagine a brain-augmentation arms race. In recent years, staff at the US Defense Advanced Research Projects Agency (DARPA) and the US Intelligence Advanced Research Projects Activity (IARPA) have discussed plans to provide soldiers and analysts with enhanced mental abilities (‘super-intelligent agents’). The use of neural technology for military purposes needs to be stringently regulated. Such efforts should draw on the many precedents for building international consensus and for incorporating public opinion into scientific decision-making at the national level. For instance, after the First World War, a 1925 conference led to the development and ratification of the Geneva Protocol, a treaty banning the use of chemical and biological weapons. Similarly, after the Second World War, the UN Atomic Energy Commission was established to deal with the use of atomic energy for peaceful purposes and to control the spread of nuclear weapons.
Algorithmic Application Bias
When scientific or technological decisions are based on a narrow set of systemic, structural or social concepts and norms, the resulting technology can privilege certain groups and harm others. A 2015 study found that job postings displayed to female users by Google’s advertising algorithm paid less well than those displayed to men. Similarly, a 2016 ProPublica investigation revealed that algorithms used by US law-enforcement agencies wrongly predict that black defendants are more likely to reoffend than white defendants with a similar criminal record. Such biases could become embedded in neural devices. Indeed, researchers who have examined these kinds of cases have shown that defining fairness in a mathematically rigorous manner is very difficult.
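The difficulty of pinning fairness down mathematically can be made concrete with a toy calculation. The data below is entirely invented (it does not come from the studies cited above); it shows how a predictor can flag both groups at exactly the same rate, satisfying one fairness criterion, while producing very different false-positive rates, violating another:

```python
# Toy sketch: two common fairness metrics can disagree on the same data,
# which is one reason a single rigorous definition of fairness is elusive.
# The predictions and outcomes below are invented for illustration.

def rates(predictions, labels):
    """Return (positive-prediction rate, false-positive rate)."""
    ppr = sum(predictions) / len(predictions)
    false_pos = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    negatives = sum(1 for y in labels if y == 0)
    fpr = false_pos / negatives if negatives else 0.0
    return ppr, fpr

# Hypothetical risk predictions (1 = "will reoffend") and actual outcomes
group_a_pred = [1, 1, 0, 0, 1, 0, 0, 0]
group_a_true = [1, 0, 0, 0, 1, 0, 0, 0]
group_b_pred = [1, 1, 1, 0, 0, 0, 0, 0]
group_b_true = [1, 1, 1, 0, 0, 0, 1, 0]

ppr_a, fpr_a = rates(group_a_pred, group_a_true)
ppr_b, fpr_b = rates(group_b_pred, group_b_true)
print(f"group A: flagged {ppr_a:.2f}, false-positive rate {fpr_a:.2f}")
print(f"group B: flagged {ppr_b:.2f}, false-positive rate {fpr_b:.2f}")
```

Both groups are flagged at a rate of 0.375, yet group A suffers a false-positive rate of about 0.17 while group B suffers none; which of the two metrics counts as "fair" is a value judgment, not a theorem.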
History indicates that profit-seeking will often trump social responsibility in the corporate world. And even if, at an individual level, most technologists set out to benefit humanity, they can come up against complex ethical dilemmas for which they are not prepared. Consequently, the producers of neuro-technology devices and machine-learning software should be trained to embed ethical codes into their devices and software. A first step towards this would be to expose engineers and academic-research trainees to ethics as part of their standard training when joining a company or laboratory. Employees could be taught to think more deeply about how to pursue advances and deploy strategies that are likely to contribute constructively to society rather than to fracture it. This approach would essentially follow that used in medicine: medical students are taught to respect patient confidentiality, non-harm and their duties of beneficence and justice, and are required to take the Hippocratic Oath to adhere to the highest standards of the profession.
Three scenarios are possible as Singularity arrives: a) super-intelligent machines take over, possibly terminating humanity as we know it today; b) intelligent machines and neuro-technology merge with humans, massively enhancing our intelligence; c) humans continue to control intelligent machines, significantly improving the efficiency of our ecosystem without external brain invasion. Regardless of which scenario develops, without ethical standards and their adoption in public and corporate law, humanity may cease to exist. Unfortunately, at this moment we are ill-prepared to manage the potential negative impact of Singularity and the existential threat it poses. There is no way to reap its potential benefits without the adoption and implementation of strong ethics that foster our human values. Given the exponential speed of scientific advancement, urgent action is needed across all institutions of our democratic society.
I really enjoy reading your articles. Just wondering if I’m the only reader, since I see no comments published ; ). I’m sure, or at least hope, your site has a good number of readers. With respect to this article I would offer some feedback on PR. Humans have for quite some time been analysed and grouped into segments for very effective PR (advertising impact). The conditioning of humans into marketing segments is a result of culture, education, default human faculties and behavior, and the recurring hammering of propaganda and brand-driven subconscious dependencies.
Hence, for the majority of humans, tight manipulation, conditioning and segmentation is already happening. AI brain hacks may merely confirm or short-cut such processes. Besides the ‘normalized/conditioned’ majority of humans, some borderline groups surely carry much greater potential for extremes: sociopaths on the harmful side, and creative, value-conscious scientists or artists on the positive side.
I very much appreciate your comments. Indeed there are very few reactions to the essays; partly this has to do with the fact that so far I have done very little to promote my content. The website is now about one and a half years old; on average I get 30 visitors a day, and the number of registered users is getting close to 100. Most visitors come from the US, UK and India; only 18% come from Switzerland.
As I have now begun to act as a speaker on issues related to Singularity at public and educational events, the website traffic is picking up. On the PR side I got a lot of attention due to a one-page article in the NZZ last July:
The RedBit network, reviewed in this article, is working on becoming a think tank, and as I am one of its active members, the PR activities are likely to increase. Nevertheless, I also enjoy running the singularity2030 website more or less as a one-man show. One of the good things is that I maintain the discipline of producing an essay every two weeks; so far 37 essays have been published. I really enjoy writing about one of the most important topics of our age.
With kind regards
Many thanks for your reply, and I hope you maintain these periodic essays. Besides each article’s individual value, they represent, thanks to their periodicity, a metric and measure over time. I see no similar effort elsewhere.
I also watched the interesting video whose reference you sent to subscribers, and when reading the article below I was certainly reminded of that video again. http://edition.cnn.com/2017/11/29/politics/us-military-artificial-intelligence-russia-china/index.html