
AI 'Workslop' Is Damaging Trust in the Workplace

A new Stanford study defines 'workslop' as low-quality AI-generated content in the workplace, finding it damages trust and perceived competence among colleagues.

By Aria Fowler

Aria Fowler is a technology and society correspondent for Neurozzio, focusing on AI ethics, digital inclusion, and the impact of algorithms on corporate and social structures. She reports on innovations designed to create more equitable technological systems.


Researchers at Stanford University have introduced a new term, "workslop," to describe the low-quality, AI-generated content increasingly appearing in professional settings. According to a recent survey, the phenomenon is already widespread: more than 40% of full-time employees in the United States report having received such content within the last month.

The study, conducted by the Stanford Social Media Lab in partnership with BetterUp Labs, highlights a growing challenge in the modern workplace. As employees turn to artificial intelligence for assistance, the quality of their output can suffer, leading to negative perceptions among colleagues and raising questions about the true productivity benefits of these powerful tools.

Key Takeaways

  • Researchers have defined "workslop" as AI-generated content that appears complete but lacks the necessary substance to be useful.
  • A survey found that over 40% of U.S. full-time workers received workslop in the past month, and that it makes up over 15% of the content they receive.
  • Sending workslop significantly damages professional reputations, with colleagues viewed as less capable, trustworthy, and intelligent.
  • The technology and professional services industries are identified as being the most affected by this trend.
  • Related research suggests that AI's productivity gains are often offset by the new tasks it creates, such as verification and error correction.

Defining a New Workplace Problem

While many internet users are familiar with "AI slop"—the surge of low-effort, often nonsensical content flooding social media—its professional equivalent now has a formal name. Researchers from Stanford have labeled it "workslop."

This term refers to a specific type of output that has become more common with the rise of generative AI tools in business environments. It is not just poorly written content; it is work that has the appearance of being finished but ultimately fails to be useful.

What is 'Workslop'?

According to the research, first published in the Harvard Business Review, workslop is defined as AI-generated work content that "masquerades as good work but lacks the substance to meaningfully advance a given task." This could include reports with generic information, code with subtle but critical errors, or presentations that miss key strategic points.

The core issue is that this content often creates more work for the recipient, who must spend time identifying the flaws, correcting the information, or redoing the task from scratch. It represents a fundamental misunderstanding of how to use AI as a tool for assistance rather than a replacement for critical thinking.

The Scale of 'Workslop' in US Companies

The findings from the Stanford and BetterUp Labs survey reveal that workslop is not a niche issue but a significant and growing trend. The data indicates a substantial portion of the American workforce is already dealing with this low-quality AI output regularly.

The survey polled a sample of U.S.-based full-time employees, uncovering several key statistics about the prevalence of the issue.

By the Numbers

The study found that, on average, employees estimate that 15.4% of all content they receive from colleagues now qualifies as workslop. This suggests that nearly one-sixth of internal communications and deliverables may be substandard due to improper AI use.

Where Does It Come From?

The data shows that workslop is most frequently exchanged between colleagues on the same level. The survey identified that peer-to-peer exchanges account for the largest portion of this content.

  • Peer-to-Peer: 40% of workslop is exchanged between colleagues at the same level.
  • To Managers: 18% of workslop is submitted by direct reports to their managers.

Furthermore, the problem is not evenly distributed across all sectors. The industries most affected by the rise of workslop are technology and professional services, where the adoption of AI tools is often highest. This suggests that early adopters of AI are also the first to experience its potential downsides when used without proper oversight.

The Professional Cost of Submitting Poor AI Work

Beyond creating extra work, the act of submitting workslop has a tangible and detrimental effect on an employee's professional reputation. The survey asked recipients how their perception of a colleague changed after receiving AI-generated content that was low in quality.

The results were overwhelmingly negative, indicating that cutting corners with AI can severely damage trust and respect within a team.

"Roughly half of the people surveyed viewed colleagues who sent workslop as less creative, capable, and reliable than before. Meanwhile, 42% saw them as less trustworthy, and 37% saw that colleague as less intelligent."

These findings highlight a critical social dynamic in the workplace. While AI tools are marketed as productivity enhancers, their misuse can lead to a breakdown in professional relationships. When a colleague submits workslop, it signals to the recipient that the sender did not invest the effort needed to ensure quality, offloading the burden of verification and correction onto someone else.

A Breakdown in Trust

The erosion of trust is perhaps the most significant consequence. When colleagues are seen as less reliable and trustworthy, it can harm collaboration, slow down projects, and create a more contentious work environment. Managers who receive workslop from their reports may begin to question their team members' skills and judgment, potentially impacting performance reviews and career progression.

Questioning AI's Net Impact on Productivity

The emergence of workslop aligns with a broader body of research that challenges the narrative that AI is a simple solution for productivity gains. Several recent studies suggest that while AI can automate certain tasks, it also introduces new, often hidden, workloads that can offset those benefits.

Productivity Paradox

A paper from the University of Chicago and the University of Copenhagen illustrated this paradox. It found that while AI might save a worker a few hours on a specific task, that time could be lost on new responsibilities created by AI. For example, a teacher might use AI to create lesson plans faster but then spend an equal amount of time checking student assignments for unauthorized AI use.

This pattern is also visible in technical fields. One study focusing on software developers found that using AI coding assistants could actually slow them down when working on complex problems. The AI-assisted group spent a significant amount of time refining prompts for the AI and meticulously checking its generated code for errors, ultimately negating the time saved from automated code generation.

Workslop is a clear example of this phenomenon. The time an employee saves by quickly generating a report with AI is directly transferred to a colleague who must then spend their own time fixing it. This results in a net productivity loss for the team or organization as a whole, even if one individual feels more efficient.

As organizations continue to integrate artificial intelligence, these findings underscore the importance of developing clear guidelines and training programs. Employees need to understand that AI is a tool to augment human intelligence, not replace it. The ultimate goal is to produce high-quality, reliable work, and the responsibility for achieving that standard still rests with the individual, regardless of the tools they use.