ChatGPT: Towards a New Industrial Age or Plenty of Hype?

Posted by Peter Rudin on 10. February 2023 in Essay

Image: ChatGPT in action (Credit: voicebot.ai)

Introduction

According to reporting from Reuters, OpenAI’s ChatGPT is now the fastest-growing app in human history, reaching an estimated 100 million monthly active users just two months after its November release. Analysts believe that the viral launch of ChatGPT gives OpenAI a first-mover advantage over other AI companies. The growing usage has also provided valuable feedback to help train and improve the chatbot’s responses. However, there are growing concerns among AI researchers that the current hype overstates its capabilities. At the same time, there is agreement that ChatGPT can be quite useful if its output is reviewed or edited.

Definition

ChatGPT is based on a Generative Pretrained Transformer (GPT), a language model that uses deep learning to generate human-like text. These models are called ‘generative’ because they generate new text based on the input they receive. Transformer-based generative AI is also considered a stepping-stone to new applications well beyond typical natural language processing tasks such as language translation, text summarization and text generation. The types of usage currently under discussion include new search-engine architectures, explaining complex algorithms, creating personalized therapy bots, helping to build apps from scratch, explaining scientific concepts and writing college essays, to name just a few.
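To make the ‘generative’ idea concrete, here is a minimal sketch in Python of text generation with a pretrained transformer. It assumes the Hugging Face transformers library and uses the small, publicly available GPT-2 model as a stand-in for the far larger model behind ChatGPT; the prompt is purely illustrative.

# Minimal sketch: generating text with a pretrained transformer language model.
# Assumes the Hugging Face "transformers" library is installed; GPT-2 serves here
# as a small, public stand-in for the much larger model behind ChatGPT.
from transformers import pipeline

# Load a generative (decoder-only) transformer pretrained on web text.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt token by token, producing new text from the input.
prompt = "Generative AI can support language translation and text summarization by"
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)

print(result[0]["generated_text"])

The same interface scales to larger models; only the model name and the compute requirements change.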

Realizing a new form of human-machine cooperation, some researchers claim that generative AI will also support the creative process of artists and designers. Existing tasks will be augmented by generative AI systems, speeding up the ideation and creation phases. Beyond this new functionality, generative AI models can also support the transformative capabilities required for solving complex problems in computer engineering. For example, Microsoft-owned GitHub offers a system that suggests code and assists developers by autocompleting their programming tasks; it has been quoted as autocompleting up to 40% of developers’ code, thereby improving workflow and reducing the associated cost of coding. A hypothetical illustration of such assistance is sketched below.
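The following Python fragment is a hypothetical illustration of what such autocompletion looks like in practice, not a transcript of GitHub’s actual tool: the developer types only the signature and docstring, and the assistant proposes the body.

# Hypothetical illustration of AI-assisted code completion (not an actual tool transcript).
# The developer writes the signature and docstring; the assistant proposes the body.

def average_word_length(sentence: str) -> float:
    """Return the average length of the words in a sentence."""
    # --- suggested completion from the assistant ---
    words = sentence.split()
    if not words:
        return 0.0
    return sum(len(word) for word in words) / len(words)

print(average_word_length("Generative AI can autocomplete routine code"))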

The battle of two Tech-Giants

ChatGPT is a potential threat to Google’s search-engine business because it impacts Google’s major revenue stream. According to the analytics company StatCounter, Google’s current worldwide share of the search market is 92.5%, compared with 3% for Microsoft’s Bing. The New York Times reported that ChatGPT’s release prompted a ‘code red’ from Google’s management because of its potential to upend the decades-old, ad-sponsored search-engine business. As a result, a flood of new transformer-enabled tools is anticipated. For example, Google is expected to announce a new text-to-image tool called ‘Muse’. “We consider Muse’s decoding process analogous to the process of painting — the artist starts with a sketch of the key region, then progressively fills the colour, and refines the results by tweaking the details,” a research scientist at Google said. Muse will compete head-on against OpenAI’s highly successful DALL·E, a 12-billion-parameter version of GPT-3 trained to generate images from text. In addition, Google is working on a ChatGPT competitor called ‘Bard’, according to a blog post just published by CEO Sundar Pichai.

In late January 2023, Microsoft announced a new multiyear, multibillion-dollar investment in OpenAI. The investment is the third phase of the partnership, following Microsoft’s previous investments in 2019 and 2021. Microsoft declined to provide a specific dollar amount, but the news outlet Semafor reported earlier this month that Microsoft was in talks to invest as much as $10 billion. In a press release, Microsoft said the renewed partnership will accelerate breakthroughs in AI and help both companies commercialize advanced technologies. Moreover, OpenAI announced a $20 monthly subscription service, initially for users in the United States only. This should provide a faster, more stable service as well as the opportunity to try new features first, a spokesman said.

At the World Economic Forum (WEF) in Davos, Microsoft’s CEO Satya Nadella made the point that a new generation of AI platforms with enormous business potential is emerging, providing services for search engines, social networks and digital clouds. The wealth generated by businesses which know how to make the most of these technologies will have a cascading effect. Reinforcing this statement, Nicole Sahin, CEO of a global recruitment company, made the point that instead of employing five software engineers to write code, it will only take one good engineer to review what an AI tool suggests. In her opinion, the consequences are quite unpredictable. Transformer technology is accelerating exponentially while competition among tech giants is intensifying, with no end in sight.

Is ChatGPT Overhyped?

In a recent ZDNET article, ‘ChatGPT is not particularly innovative, and nothing revolutionary, says Meta’s chief AI scientist’, Yann LeCun stated that half a dozen start-ups have very similar technologies. GPT-3 is composed of multiple pieces of technology developed over many years by many parties. “ChatGPT uses transformer architectures that are pre-trained in a self-supervised manner. But self-supervised learning is something I have been advocating for a long time, even before OpenAI existed.” LeCun also remarked that transformers are a Google invention, referring to the language neural net model unveiled by Google in 2017. This model has become the basis for a vast array of language programs, including GPT-3, and the work on these programs goes back decades. The first neural net language model (large at the time, tiny by today’s standards) was developed about 20 years ago by Yoshua Bengio, head of Canada’s MILA institute for AI. Bengio’s work on the concept of ‘attention’ was later picked up by Google and became a pivotal element in all language models. According to LeCun, ChatGPT makes extensive use of a technique called ‘reinforcement learning from human feedback’, in which humans rank the machine’s outputs to improve them, much like Google’s PageRank for the web. “That approach was pioneered not at OpenAI but at Google’s DeepMind unit,” he said. In his view, the ChatGPT software is less a case of scientific breakthroughs than an example of decent engineering. “It is well put together, but it is not revolutionary, although that is the way it is perceived in the public,” LeCun said.
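The ranking step LeCun refers to can be made concrete. Below is a conceptual sketch in Python of the pairwise preference loss commonly used to train a reward model from human rankings; the scoring values and function are hypothetical and do not reflect OpenAI’s actual implementation.

# Conceptual sketch (not OpenAI's code): the ranking step behind reinforcement
# learning from human feedback. A reward model is trained so that responses humans
# ranked higher receive higher scores; the chatbot is then tuned toward that reward.
import math

def preference_loss(score_preferred: float, score_rejected: float) -> float:
    """Pairwise (Bradley-Terry style) loss: small when the preferred answer scores higher."""
    return -math.log(1 / (1 + math.exp(-(score_preferred - score_rejected))))

# Example: a human ranked answer A above answer B for the same prompt.
score_a, score_b = 2.1, 0.4               # hypothetical reward-model scores
print(preference_loss(score_a, score_b))  # low loss: the reward model agrees with the human
print(preference_loss(score_b, score_a))  # high loss: the reward model would need updating

Minimizing this loss over many human-ranked pairs is what gradually aligns the model’s outputs with human preferences.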

Alongside LeCun’s view, one should keep in mind that ‘Knowledge’ and ‘Meaning’ are two different concepts and are part of a hierarchy that starts with data, from which information is extracted, leading to knowledge and, at the highest level, wisdom. Transformer-based tools like ChatGPT still have a long way to go, and it is questionable whether they will ever achieve wisdom, the highest form of human intelligence.

Problems Ahead

ChatGPT’s ability to imitate the way real people talk and write has sparked concern about its potential to replace professional writers or do students’ homework. Moreover, as generative AI has sparked a new wave of artificial creativity, there are rising concerns about its impact on society. The well-known artist Carson Grubaugh shares this concern and predicts that large parts of the creative workforce, including commercial artists working in entertainment, video games, advertising and publishing, could lose their jobs because of generative AI. Besides profound effects on tasks and jobs, generative AI models have raised alarm in the AI governance community. One of the problems with large language models is their ability to generate false and misleading content. Researchers from Meta trained a generative transformer on 48 million articles to summarize academic papers, solve math problems and write scientific code. The system was taken down after less than three days online, once users realized that it was producing incorrect results and misconstruing scientific facts and knowledge. More alarming are systems with capabilities advanced enough to render the Turing Test obsolete; the test measures a machine’s ability to exhibit intelligent behaviour similar to or indistinguishable from a human’s and was once considered the ‘holy grail’ of behavioural research, at a time when the internet did not exist. Today’s ChatGPT capabilities can be misused to generate fake news and disinformation across internet-connected global platforms and ecosystems. Because large language models must be trained on massive datasets drawn from books, articles and websites, these sources of knowledge may be biased. Despite substantial reductions in harmful and untruthful outputs with the support of human analysis and feedback, OpenAI acknowledges that its models can still generate toxic and biased outputs.

Conclusion

While generative AI is a game-changer in numerous areas and tasks, there is a strong need to govern the diffusion of these models and their impact on society and the economy. The discussion between centralized, controlled adoption with firm ethical boundaries on the one hand and faster innovation with decentralized distribution on the other will be important for the development of a generative AI community in the coming years. The issues to be solved include disruption of labour markets, the legitimacy of scraped data, licensing and copyright, and the potential for biased or otherwise harmful content and misinformation, to name just a few. Only when solid checks and balances are in place can a thoughtful and beneficial expansion of generative AI technologies be achieved. Until then, the current ChatGPT hype is likely to cool down, as negative as well as positive media coverage erodes the trust needed to advance this technology. Only sustained user acceptance will prove that humanity has indeed reached the revolutionary level of a new industrial age.
