Tower of Babel and the confusion of languages. Painting by Pieter Bruegel (1563)
Introduction
When Norbert Wiener, the father of cybernetics, wrote his book ‘The Human Use of Human Beings’ in 1950, vacuum tubes were still the primary electronic building blocks, and there were only a few actual computers in operation. But he imagined the future we now contend with in impressive detail. More than any other early philosopher of artificial intelligence, he recognized that AI would not just imitate (and replace) human beings in many intelligent activities, but would change human beings in the process. “We are not stuff that abides, but patterns that perpetuate themselves”, he wrote, and soon we become so dependent on our new tools that we lose the ability to thrive without them. The real danger, he warned, is that intelligent machines, though helpless by themselves, may be used by a human being or a bloc of human beings to increase their control over the rest of the race. The internet-based distribution of fake information in text, images and video is one of the biggest threats to our society: we can no longer distinguish between reality and artificially produced ‘deep-fakes’, opening the door to massive manipulation, as demonstrated by Facebook’s Cambridge Analytica scandal. Following concerns over the remarkable quality of AI-generated images and videos using Generative Adversarial Networks (GANs), the non-profit artificial intelligence research company OpenAI decided not to release a new language model that generates artificial text.
OpenAI has built a text generator it considers too dangerous to be released
According to an announcement issued in mid-February 2019, OpenAI claims that its new language model, called GPT-2, is so good at generating convincing, well-written text that the company is worried about potential abuse. GPT-2 is a large transformer-based model with 1.5 billion parameters, trained on a dataset of 8 million web pages comprising roughly 40 gigabytes of internet text. It is trained with a simple objective: predict the next word, given all of the previous words within some text. The model is chameleon-like: it adapts to the style and content of the conditioning text. This allows users to generate realistic and coherent blocks of text on a topic of their choosing.
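To make that objective concrete, here is a minimal sketch of conditional text generation using the smaller, publicly released GPT-2 checkpoint via the Hugging Face transformers library; the prompt and sampling parameters are illustrative assumptions, not OpenAI’s withheld setup:

```python
# Minimal sketch: conditioning GPT-2 on a prompt and sampling a continuation.
# Uses the small, publicly released GPT-2 checkpoint, not the withheld 1.5B model.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "In a shocking finding, scientists discovered"  # illustrative prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# The model repeatedly predicts a distribution over the next token;
# sampling from the top-k most likely candidates yields fluent continuations.
output = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because each word is drawn from the model’s most probable candidates given everything written so far, the continuation inherits the prompt’s tone and topic, which is precisely what makes such systems attractive for producing misleading content at scale.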
But with every good application of the system, such as bots capable of better dialog and better speech recognition, OpenAI found several more troublesome applications, like generating fake news, impersonating people, or automating abusive or spam comments on social media. For that reason, OpenAI announced that it is only releasing a smaller version of the language model, citing its charter, which notes that the organization expects that “safety and security concerns will reduce our traditional publishing in the future”. According to Jack Clark, policy director at OpenAI, the organization’s priority is “not enabling malicious or abusive uses of the technology,” calling it a “very tough balancing act for us”.
These findings, combined with earlier results on synthetic imagery, audio and video, imply that these technologies are reducing the cost of generating fake content and waging disinformation campaigns. The public at large will need to become more sceptical of text they find online, just as the ‘deep-fakes’ phenomenon calls for more scepticism about images. Today, malicious actors, some of them political in nature, have already begun to target high-profile individuals, using robotic tools, fake accounts and dedicated teams to troll them with hateful commentary or smears that make them afraid to speak, or difficult to be heard or believed.
The spiral of increasing AI complexity
Just one month after OpenAI’s announcement, a team of scientists from the MIT-IBM Watson AI Lab and Harvard University created an algorithm called GLTR that estimates how likely it is that any particular passage of text was written by a tool like GPT-2. GLTR uses the very same language models, reading the final output to predict whether it was written by a human or by GPT-2. Just as GPT-2 writes sentences by predicting which words ought to follow each other, GLTR determines whether a sentence uses the words that the fake-news-writing robot would have selected. “We make the assumption that computer generated text fools humans by sticking to the most likely words at each position”, the scientists behind GLTR wrote in their blog post. “In contrast, natural writing actually more frequently selects unpredictable words that make sense to the domain.” In this way, GLTR can flag text that looks too predictable to have been written by a human.
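The underlying statistic is simple to sketch: run the text through a language model and record, for every token, the rank the model assigned it among all candidate next tokens. The snippet below is an illustrative reimplementation of this idea (not the actual GLTR code); the small public GPT-2 model, the sample sentence and the top-10 threshold are all assumptions made for the example:

```python
# Illustrative sketch of the GLTR idea: score each token of a text by the
# rank GPT-2 gives it in its next-token distribution (1 = most likely).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text: str) -> list[int]:
    """For each token after the first, the rank the model assigned it."""
    ids = tokenizer.encode(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab_size)
    ranks = []
    for pos in range(ids.shape[1] - 1):
        next_logits = logits[0, pos]   # distribution over token pos+1
        actual = ids[0, pos + 1]       # the token the text actually uses
        # rank = number of tokens the model considered more likely, plus one
        ranks.append(int((next_logits > next_logits[actual]).sum()) + 1)
    return ranks

ranks = token_ranks("The quick brown fox jumps over the lazy dog.")
frac_top10 = sum(r <= 10 for r in ranks) / len(ranks)
print(f"fraction of tokens in the model's top-10: {frac_top10:.2f}")
```

A passage in which nearly every token falls inside the model’s top-10 predictions is suspiciously ‘safe’; human writing tends to show a heavier tail of high-rank, surprising word choices.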
Applying AI to control AI-driven misinformation and fake news increases the complexity of a problem that can ultimately only be resolved at the source, where the malicious content is generated. With billions of messages generated and distributed via the internet each day, in many languages across the globe, we are drowning in data and increasingly complex algorithms, while possibly being targeted by intelligent artificial agents seeking to influence our decision-making. Rising complexity decreases transparency and increases the potential for errors to be introduced into the system. Fuelled by users’ loss of trust, AI complexity might eventually reach a point of diminishing returns, causing social unrest and economic downturn.
Are we on the path towards a collapse of our civilization? A historian’s view.
Great civilizations are not murdered; instead, they take their own lives. Arnold Toynbee, a highly respected British historian who died in 1975, came to this conclusion in his 12-volume book series ‘A Study of History’, which explores the rise and fall of 28 different civilizations. A civilization is defined as a society with agriculture, production facilities, multiple cities, military dominance in its geographical region and a continuous political structure. Toynbee concluded that the average lifespan of a civilization was about 337 years and that civilizations are often responsible for their own decline. However, their self-destruction is usually assisted. The Roman Empire, for example, was the victim of many ills, including overexpansion, climatic change, environmental degradation and poor leadership. It covered 1.9 million square miles in 390; five years later, it had plummeted to 770,000 square miles, and by 476 the empire’s reach was zero, after Rome had been brought to its knees by the Visigoths’ sack of 410 and the Vandals’ sack of 455.
Studying the demise of historic civilizations can tell us how much risk we face today, says ‘collapse expert’ Luke Kemp, a researcher at the Centre for the Study of Existential Risk at the University of Cambridge. According to his analysis, the warning signs for our western civilization are worsening. Collapse can be defined as a rapid and enduring loss of population, identity and socio-economic complexity, as public services crumble and government loses control over its monopoly on violence. Virtually all past civilizations have faced this fate. Some recovered or transformed, such as the Chinese. We may be more technologically advanced now, but this gives little ground to believe that we are immune to the threats that undid our ancestors. Scientific and technological progress spurred by AI brings unprecedented challenges, such as the potential loss of white-collar jobs or the manipulation of decision-making. Our tightly coupled, globalised economic system is, if anything, more likely to make crises spread. While there is no single accepted theory for why collapses happen, historians, anthropologists and others have proposed various explanations, including:
CLIMATIC CHANGE: When climatic stability changes, the results can be disastrous: crop failure, starvation and desertification. The collapses of the Anasazi, the Maya, the Roman Empire and many others all coincided with abrupt climatic changes, usually droughts.
EXTERNAL SHOCKS: War, natural disasters, famine and plagues can all be causes of collapse. Most early agrarian states were fleeting due to deadly epidemics; the concentration of humans and cattle in walled settlements with poor hygiene made disease outbreaks unavoidable and catastrophic.
INEQUALITY AND OLIGARCHY: Wealth and political inequality can be central drivers of social disintegration. This not only causes social distress, but handicaps a society’s ability to respond to ecological, social and economic problems.
COMPLEXITY: Societies are problem-solving collectives that grow in complexity in order to overcome new issues. However, the benefits of added complexity eventually diminish, and beyond that point collapse eventually ensues.
Conclusion
In theory, a civilization might be less vulnerable to collapse if new technologies can mitigate pressures such as climate change. Our technological capabilities may have the potential to delay collapse. However, the world is now deeply interconnected and interdependent. In the past, collapse was confined to regions; it was a temporary setback, and people could often return to agrarian lifestyles afterwards. Today, societal collapse implies a far more destructive threat. The weapons available to a state, and sometimes even to terrorist groups, range from biological agents to nuclear weapons to cyber-warfare. Additionally, new instruments of violence, such as lethal autonomous weapons, will be available in the near future.
The most dangerous threat, however, comes from the exponentially rising complexity induced by AI, combined with the rise of inequality and oligarchy driven by tech giants such as Facebook, Google and Amazon.
AI is no longer a single technology with clear boundaries. AI today is everywhere, and its application is spreading at an exponential rate. Humans have a unique ability to stay on top of AI complexity: consciousness and self-reflection. Once we recognize that people are starting to make life-or-death decisions largely on the basis of “advice” from AI systems whose inner operations are unfathomable and opaque, we have reason to demand that those who in any way encourage people to put more trust in these systems than they deserve be held morally and legally accountable. If our governments fail to implement corresponding laws, we might well be on a bumpy ride to the next societal collapse.
At long last, some of those enthusiastic engineers and technicians are waking up to the dangers that come along with their technical progress and new inventions. It is up to the experts to push for ethical guidelines within politics; we normal lay people and politicians lag too far behind the development speed of the new discoveries and possibilities that have so far only been greeted with unconditional cheers by the experts of the technical elite...
Instead of trying to copy human beings and reduce the highest mysteries of human consciousness to 0 and 1, to bits and bytes..., it is high time that human beings, ethical human beings, take the lead instead of leaving the lead to AI...!
This is an extremely important article!! It’s a critical warning about what we would rather not believe. From technology as handmaiden to technology as master and executioner. All technology begins innocuous, until it doesn’t.
“Terminator X” , though entertainment, is a warning shot with a serious subtext.
But, live for today and reckon not the cost.
I’m not familiar with Mr Rudin’s work, so I won’t assume that ChatGPT was involved in creating a head-fake warning in order to establish a counterbalance to my question below. (BTW, I want to compliment the article, as a matter of fact.)
I wonder: if asked to write on the theme of “OAI and the decline of Western Civilization,” would artificial intelligence come to some of the same conclusions put forth here? Anything short of “yes” convinces me it’s all bad news for mankind. I haven’t the skill to conduct tests regarding my own question. I’m really talking to myself, I presume. Human logic is all I can depend on.
The truth (my truth) is that an agrarian lifestyle adopted globally is the solution to poverty, starvation, climate stabilization, and war.
Is Artificial Intelligence going to enable what was formerly a last resort for survival, or simply make it much easier for greed to destroy most (if not all) life on earth? I can’t say I’d bet on the former.