Charles Darwin (1809–1882). Credit: theconversation.com
Evolution did not attain the status of a scientific theory until Charles Darwin published his famous book On the Origin of Species in 1859. He and Alfred Russel Wallace, a scientist working on the same topic, proposed that evolution occurs through a phenomenon they called natural selection. According to the theory of natural selection, organisms produce more offspring than the environment can sustain. Those physically better equipped to survive grow to maturity and reproduce; those lacking such fitness either do not reach reproductive age or produce fewer offspring than their counterparts. Natural selection is sometimes summed up as ‘survival of the fittest’ because only the ‘fittest’ organisms, those best suited to their environment, reproduce successfully and are most likely to pass their traits on to the next generation. This means that if an environment changes, the traits that enhance survival in that environment will also gradually change, or evolve. Darwin chose the name ‘natural selection’ in contrast to ‘artificial selection’, which is controlled by humans, for example when animals are crossbred to achieve specific visual or performance traits. He did not know that genes existed, but he could see that many traits are passed heritably from parents to offspring. In the first edition of On the Origin of Species, Darwin said little about the brain, yet in his notes he frequently refers to the brain as the organ of thought and behaviour, and to the heredity of behaviour as depending on the heredity of brain structure. The sixth edition, published in his lifetime, contains a passage that explicitly states that natural selection applies to the brain as it does to all other organs. Darwin advocated the idea that the human brain, like any other organ, shared a history with the brains of other animals and was subject to the pressures of natural selection.
The discovery of the structure of DNA in 1953 and the ongoing development of gene-editing technologies such as CRISPR do not change Darwin’s theory of evolution. Rather, they provide impressive testimony to the progress and impact of scientific research into the prevention and treatment of diseases at the genetic level, while raising many ethical issues about potential misuse.
Darwin and the evolution of machines
Darwin’s work coincided with the first industrial revolution around 1800, the introduction of steam engines and the mechanization of manual work. “Darwin among the Machines” – published in a New Zealand newspaper on June 13, 1863 – draws a critical analogy between machine development and Darwin’s theory of evolution. Written by Samuel Butler, a novelist and author, the article raised the possibility that machines had a kind of ‘mechanical life’, undergoing constant evolution, and that machines might eventually supplant humans as the dominant species. In his essay he writes: “what sort of creature man’s next successor in the supremacy of the earth is likely to be? It appears to us that we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying them with all sorts of ingenious contrivances of self-regulating and self-acting power which will be to them what intellect has been to the human race. In the course of ages, we shall find ourselves the inferior race. Day by day, the machines are gaining ground upon us; day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend the machines and more men are daily devoting the energies of their whole lives to the development of mechanical life. It is simply a question of time, but that the time will come when machines hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.” Clearly, Samuel Butler was decades ahead of his time in discussing the potential conflict between natural selection and humanity’s potential to alter evolution with its own intellect. His views and warnings are mirrored in today’s discussions about the potential threat of AI, as voiced by Nick Bostrom, the late Stephen Hawking, Elon Musk and others.
Industrialization and automation as the driver of evolution
History shows that the progress of industrialization, which started around 1800 and continues today, cannot be stopped. It is closely tied to the basic principles of economics and the value-generating cycle of supply and demand, with competition driving the market and automation reducing the cost of production. Scientific research and new technologies are major drivers of industrialization. The application of AI to generate knowledge from massive datasets has accelerated this trend. Intelligence has become a success factor as industrialization has evolved from replacing muscle power to providing machine-supported cognitive intelligence. At the same time, humanity’s massive exploitation of natural resources has triggered a backlash of climate problems, threatening human existence and survival across our planet. Yet the need and urge to generate income, fuelled by venture capital and the expectation of huge financial rewards, remains unbroken. The augmentation of human intelligence with machines is gradually reshaping our societies, as those ahead in developing and applying this technology are fitter for survival than those that lag behind. Evolution is shifting towards selection based on technological supremacy. The upcoming automation of AI and the availability of new brain-interface technology to augment and enhance human intellectual capacity suggest that industrialization still has a long way to go. As a result, we are likely to experience a widening gap in wealth distribution, causing social unrest. To prevent this, Samuel Butler’s 1863 essay ends with the following radical advice: “Every machine of every sort should be destroyed by the well-wisher of his species. Let there be no exceptions made, no quarter shown; let us at once go back to the primeval condition of the human race.”
The automation of AI
Machine learning, a subset of AI, has achieved considerable success in recent years, and an ever-growing number of disciplines rely on it. However, this success depends crucially on human machine-learning experts to perform tasks such as pre-processing and cleaning data, selecting an appropriate algorithmic model or designing the neural network that executes the model. Automated machine learning (AutoML) is a branch of AI research devoted to methods and processes that automate machine learning so that non-experts can also reap its benefits. Recently, a team of Google computer scientists working on AutoML came up with a new version that can automatically generate the best algorithm for a given task. Dubbed AutoML-Zero, the system can adapt algorithms to different types of tasks and continuously improve them through a Darwinian-like evolutionary process that reduces the amount of human intervention required. Since humans can introduce bias into systems, and thus program their own limitations into them, the confidence one can place in the results and their value for solving a problem are significantly reduced. Consequently, Google is trying to create a scenario in which a computer can roam free and get creative without human bias. Their system uses basic mathematical operations, rather than human-designed components, as the building blocks for new algorithms. It starts with 100 random algorithms generated through a combination of mathematical operations. AutoML-Zero uses benchmark tasks to score each algorithm’s effectiveness at a given objective and then ‘mutates’ the best ones to begin another round. These new ‘child’ algorithms are compared with the original ‘parent’ algorithms to see whether they have improved at solving the task at hand. The process is repeated until the best mutations win out and end up in the final algorithm. Currently, Google’s system can search through 10,000 possible models per second, with the ability to skip over algorithms it has already encountered.
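The evolutionary loop described above (a random starting population, scoring on a task, mutating the best candidates, and keeping a ‘child’ only if it beats its ‘parent’) can be sketched in a few lines of Python. This is a deliberately toy illustration, not AutoML-Zero itself: here the hypothetical ‘algorithms’ are just pairs of coefficients (a, b) evolved to fit a simple target function, and all names and parameter choices are invented for the sketch.

```python
import random

random.seed(0)

# Toy task: recover y = 3x + 2 from sample points. Each candidate
# "algorithm" is simply a coefficient pair (a, b).
SAMPLES = [(x, 3 * x + 2) for x in range(-5, 6)]

def score(algo):
    """Mean squared error on the task; lower is better."""
    a, b = algo
    return sum((a * x + b - y) ** 2 for x, y in SAMPLES) / len(SAMPLES)

def mutate(algo):
    """Produce a 'child' by randomly perturbing one parameter."""
    a, b = algo
    if random.random() < 0.5:
        a += random.gauss(0, 0.5)
    else:
        b += random.gauss(0, 0.5)
    return (a, b)

def evolve(pop_size=100, generations=200, survivors=10):
    # Start with a population of random candidates.
    population = [(random.uniform(-10, 10), random.uniform(-10, 10))
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Score all candidates and keep the best few as parents.
        population.sort(key=score)
        parents = population[:survivors]
        # Refill the population with mutated children, keeping a
        # child only if it beats its parent.
        children = []
        while len(parents) + len(children) < pop_size:
            parent = random.choice(parents)
            child = mutate(parent)
            children.append(child if score(child) < score(parent) else parent)
        population = parents + children
    return min(population, key=score)

best = evolve()
print(best, score(best))
```

Over a few hundred generations the best survivor drifts towards the target coefficients, which is the essence of the mutate-and-select cycle the article describes; the real system additionally composes whole programs from primitive operations and caches algorithms it has already tried.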
Entering the human brain to enhance intelligence
Neuroscience has gained enormous traction in recent years, with thousands of researchers engaged in heavily funded projects exploring the functionality of the brain. Understanding human cognition and behaviour, and their relationship to billions of neurons assembled in three pounds of biological mass consuming only 20 watts, remains a challenge that requires new scientific tools and concepts. Elon Musk’s start-up Neuralink is working on an invasive brain-computer interface with the potential to augment and expand our intellectual and emotional capacity. Speaking at a 2019 event, he said the firm was working on a device that would provide a direct connection between a computer and a chip inserted within the brain. The technology will first be used to help people suffering from brain diseases like Parkinson’s, but the ultimate aim of Neuralink is to allow humans to compete with advanced artificial intelligence, he said. A research paper detailing the device claims that a single USB-C cable will provide ‘full-bandwidth data streaming’ between the computer and the brain. He has announced that he will reveal new information about the project on August 28, 2020. Pre-releasing some information, Mr. Musk confirmed that Neuralink’s technology could help control hormone levels and use them to our advantage (enhanced abilities and reasoning, anxiety relief, etc.). Neuralink’s chip is intended to cure depression and addiction by ‘retraining’ the parts of the brain responsible for these afflictions; learning to use the device, he suggested, is like learning to touch-type or play the piano. Trials have already been carried out on animals, and human trials were originally scheduled to take place this year, though details are yet to be made public. The application of brain-computer interfaces (BCIs) is not a new idea; they have been tested on patients to control epilepsy or physical impairments.
Due to material, structural and electrical limitations, currently available devices have to be removed after a test period of a few weeks. In a paper published in BMC Medical Ethics in January this year, the BCI users interviewed appreciated the opportunity to regain lost capabilities, as well as to gain new ones, despite the risks involved. In their opinion, rather than calling human nature into question, BCI technology can retain and restore characteristics and abilities that enrich our lives.
Adding intelligent devices to the brain or altering our DNA with gene editing raises major sociological and ethical issues. As long as these are dealt with according to existing medical and government-enforced standards, there is hope that legal barriers are in place to prevent misuse. However, totalitarian governments such as China’s might be tempted to misuse this technology to foster their ambition for global supremacy. Democratic societies, in contrast, are challenged to channel the automation and enhancement of intelligence towards expanding humanity’s creative capacity. As intelligence becomes a machine-supported commodity, globally accessible to everyone, the economic system that is driving industrialization is likely to require a major overhaul. Without balancing the benefits of technological progress in a socio-economic context, the survival of the fittest will take a destructive turn. Hopefully, common sense, which AI is still lacking so far, will prevail.