‘Workslop’ and ChatGPT Cannot Replace Your Own Writing Skills

Posted by Peter Rudin on 31 October 2025 in Essay

Workslop (Credit: tvnewscheck.com)

Introduction

It was 2022, just after OpenAI released ChatGPT to the masses, when Stanford professor Kate Niederhoffer noticed something was off in the research assignments she was grading. “They looked pretty good, but not quite right. Because I had 100 students, I could see that other assignments looked exactly the same, with the same sort of not-quite-rightness.” The papers in question contained a lot of text without saying anything substantive to advance the work, and they all did so in the same overly wordy style. Niederhoffer felt the same suspicion when students were asked to speak about their research: the conversations revealed that they did not actually understand their own work. She now has a name for this phenomenon, the feeling you get when reading a message or document so convoluted or incomplete in thought that you start to wonder whether anyone actually thought it through. It is called ‘workslop’, and it is affecting teams and productivity across all kinds of businesses.

The Problem with ‘Workslop’

AI is supposed to make work easier, but instead it has generated a new problem called ‘workslop’. The term, coined by researchers in a recent Harvard Business Review article, describes low-quality AI-generated content such as memos, reports and emails clogging up employees’ lives and wasting their time. According to the report, ‘workslop’ appears polished but lacks real substance. Researchers from Stanford University who collaborated with BetterUp, a leadership coaching platform, observed the following effects caused by ‘workslop’:

- In a survey of 1,150 U.S. adults who described themselves as desk workers, 40% said they had encountered ‘workslop’ documents within the last month, slowing down their workday. They reported spending an average of 1 hour and 56 minutes dealing with each instance.
- As a result, ‘workslop’ generates substantial costs. Based on average respondent salary, the researchers estimate that ‘workslop’ incidents cost USD 186 per employee per month. For a large organization of roughly 10,000 workers, that can add up to more than USD 9 million a year in lost productivity (see the back-of-the-envelope calculation after this list).
- The researchers also noticed ‘workslop’ in their daily lives, as friends, colleagues and family members shared frustrating experiences, according to Jeffrey Hancock, director of Stanford’s Social Media Lab.
- Colleagues look down on ‘workslop’ senders: about half of those surveyed viewed them as less creative, capable and reliable.
- Survey respondents shared examples, including health care providers who griped about receiving long AI-generated reports from patients that diagnose their health problems using data without any real medical underpinning.
- Employees were producing poorly constructed memos, PowerPoints and e-mails well before the advent of AI. Today, however, the workers reporting the most ‘workslop’ are active in information technology, health care and professional services.
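The annual cost figure follows from simple arithmetic. Here is a minimal sketch, assuming the roughly 40% monthly prevalence reported above is applied to a hypothetical organization of 10,000 employees; the salary-based USD 186 per month is the survey’s own estimate:

```python
# Back-of-the-envelope reproduction of the lost-productivity estimate.
# The organization size and the 40% prevalence applied here are assumptions
# for illustration; USD 186/month per affected worker comes from the survey.
employees = 10_000
prevalence = 0.40        # share of workers encountering workslop each month
monthly_cost_usd = 186   # estimated cost per affected worker, per month

annual_cost = employees * prevalence * monthly_cost_usd * 12
print(f"Estimated annual cost: USD {annual_cost:,.0f}")
# Estimated annual cost: USD 8,928,000  (on the order of USD 9 million)
```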

The Advantage of Writing

According to the linguist Giorgio Iemmolo, Director of the Language Centre at ETH and the University of Zurich, AI chatbots are taking over writing for students and researchers. “If we leave writing to AI chatbots, we will not just lose a craft and its skills. Writing is thinking, involving cognitive processes. Linguists have shown that thoughts and their expression arise synchronously. Authors who struggle for words refine their style, and they form and reorganise their knowledge in the process. I am familiar with this from my own experience. If I cannot put complex ideas into simple words, I have not understood them properly yet. AI can be useful for triggering ideas or gaining new perspectives, but genuine thinking only sets in when we write ourselves,” says Iemmolo.

Neuroscience also confirms that writing is thinking. Neuronal connections are formed during writing which are decisive for abstraction and long-term memory. Writing by hand activates regions of the brain that enable deep learning and conceptual thinking. Consequently, writing helps the brain to recognise major and far-reaching connections and to develop specialist knowledge that not only lives from facts but is also based on understanding and can be applied in many different contexts. Research into automation shows that cognitive systems atrophy when tasks calling for thinking are outsourced to machines. And there is a second stumbling block: the products of generative AI skilfully imitate knowledge, thereby disguising our dwindling skills. We sound eloquent without genuinely comprehending, and without realising that we do not understand the issues.

If students no longer write themselves, the diversity of perspectives and arguments will also suffer. This should give us pause for thought, because science thrives on the constant flow of new voices, questions and insights. These only arise, however, when people engage with knowledge in depth and in a critical manner. Multilingualism is also a means to this end. People who read and think in several languages approach a topic from different perspectives because languages organise knowledge differently. German, for example, abstracts in complicated sentences; French works with opposites; English makes a linear argument and takes readers by the hand. This is not just a matter of style but of different ways of thinking: people who speak several languages think more flexibly. When writing is handed over to AI chatbots, this intellectual diversity gives way to linguistic plainness and simplicity. Above all, however, universities should resist the urge for efficiency that AI tools like ChatGPT promise. In-depth reflection and thinking take time, and we should take that time despite all the promises of efficiency emanating from AI.

Problems with ChatGPT

ChatGPT’s wrong answers may only be problematic to the extent that they are believed or shared. But a new paper by Kenneth Church, Professor at Khoury College of Computer Sciences, finds that college students trust ChatGPT’s responses to a variety of prompts enough to turn them in as homework assignments, even when they include numerous factual errors. “My students could have easily fact-checked their homework, but they chose not to do so,” Church writes in his paper, published in the journal Natural Language Engineering. “They were prepared to believe much of what ChatGPT says, because of how it says what it says and its ease-of-use. It is easier to believe in ChatGPT than to be sceptical.” To Church, the results offer a warning for society about the willingness of people to forgo their own due diligence when a quick answer is always one prompt away.

ChatGPT proved to be a reliable tool for explaining metaphors and creating certain simple programs. But Church found that it was amazingly bad at tasks like doing a literature survey on a topic and providing accurate references. A number of students submitted references that did not exist or that linked to the wrong papers. Checking those references would have alerted students to the issue. “I had hoped that the students would do more fact-checking than they did, especially after having discussed machine ‘hallucinations’ in class, but users do not do as much fact-checking as they should,” Church writes. “What I am worried about is that it seems like ChatGPT does not really have a good understanding of depth and perspective. Oversimplifying things and only taking one view could end up being really bad for the world. It seemed ChatGPT was incapable of realizing there could be two views or comparing and contrasting those views. The chatbots will give you one side and imply it is the only side.”

Many people find faults in ChatGPT’s answers and point fingers at its parent company OpenAI. But maybe we should be more concerned with its responses to questions that have no right answers, only conflicting viewpoints with nuanced interpretations that should be evaluated independently. In such instances, errors are harder to catch, and users should bring a healthy scepticism to ChatGPT’s outputs, much as they might to a Wikipedia page.
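The reference-checking Church describes is also easy to automate. Below is a minimal sketch, not taken from the paper, that queries the public Crossref API (which indexes DOIs for most published scholarship) to test whether a citation plausibly exists; the function name, the threshold and the word-overlap matching logic are illustrative assumptions.

```python
# A minimal sketch of automated citation checking against the public Crossref
# API. The helper names and the word-overlap heuristic are illustrative
# assumptions; a real check would also compare authors, year and venue.
import re
import requests

def _words(text: str) -> set:
    """Lower-case alphanumeric tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def citation_exists(reference: str, rows: int = 3) -> bool:
    """Return True if Crossref returns a plausibly matching published work."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": reference, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        title_words = _words(" ".join(item.get("title", [])))
        # Crude heuristic: most of the candidate title should reappear
        # in the reference string that was submitted.
        if title_words and len(_words(reference) & title_words) >= min(4, len(title_words)):
            return True
    return False

if __name__ == "__main__":
    ref = "Vaswani et al., Attention Is All You Need, NeurIPS 2017"
    print(citation_exists(ref))  # a real paper, so this should print True
```

A fabricated reference typically fails such a check because Crossref’s best matches share little more than stopwords with it. The point is not the heuristic itself but that a few lines of scepticism are cheap compared with the cost of passing hallucinated citations along.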

Conclusion

The emergence of AI ‘workslop’ is a critical reminder that technology is only as effective as its application. In a world increasingly shaped by AI, the quality of our digital output directly impacts our success. By proactively addressing ‘workslop’, organizations can unlock the true potential of AI, transforming it from a source of frustration into a powerful engine for innovation and efficiency. Embracing thoughtful AI integration is not just about adopting new tools. It is about cultivating a culture where technology serves human ingenuity, not the other way around. This proactive stance is essential for any organization aiming for a successful digital transformation in the AI era.
