Researchers from BetterUp Labs, in partnership with the Stanford Social Media Lab, have introduced a new term to describe a growing workplace issue: "workslop." This term refers to low-quality, AI-generated content that appears complete but lacks the necessary substance, context, or accuracy to be useful. The concept was detailed in a recent Harvard Business Review article, highlighting how this phenomenon may contribute to poor returns on artificial intelligence investments for many companies.
Key Takeaways
- Researchers have defined "workslop" as AI-generated work that seems plausible but is ultimately unhelpful, incomplete, or incorrect.
- This low-quality output creates more work for colleagues, who must correct or redo the tasks, shifting the productivity burden.
- A survey of 1,150 U.S. employees found that 40% of workers had received workslop from a colleague within the past month.
- Experts suggest this issue may explain why 95% of organizations using AI have reported no significant return on their investment.
- The recommended solution involves leadership modeling responsible AI use and establishing clear guidelines for teams.
Defining a New Workplace Challenge
As businesses rapidly adopt generative artificial intelligence, a new category of substandard work has emerged. Researchers from consulting firm BetterUp Labs and the Stanford Social Media Lab have given it a name: workslop. The term was formally introduced to the business community in a Harvard Business Review article published this week.
The official definition describes workslop as "AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task." Unlike a rough draft, which is understood to be a starting point, workslop is often presented as a finished product. However, it frequently proves to be unhelpful, incomplete, or missing crucial context.
The Origin of the Term
The term "workslop" was coined by researchers at BetterUp Labs and the Stanford Social Media Lab. Their collaboration aims to understand the real-world impact of AI on workplace dynamics, productivity, and employee well-being. Their findings are part of an ongoing effort to guide companies in navigating the complexities of AI integration.
The Hidden Costs of Low-Quality AI Output
The primary issue with workslop is not just its poor quality, but the negative ripple effect it has on team productivity. When an employee generates a document, report, or analysis using AI without proper review and refinement, they are not saving time; they are transferring the workload to someone else.
"The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work," the researchers wrote.
This burden-shifting creates significant inefficiencies. A manager who receives an AI-generated report lacking key data points must either spend time fixing it or send it back, causing delays. This process undermines the very productivity gains that AI tools are supposed to deliver. In essence, workslop creates an illusion of progress while generating hidden work for others.
The AI Investment Paradox
According to the researchers, the prevalence of workslop could be a major factor behind a startling statistic: 95% of organizations that have experimented with AI report seeing zero return on their investment. When AI is used to produce low-value output, it fails to contribute positively to business objectives and can even hinder them.
Prevalence of Workslop in Corporate Environments
To measure how widespread this issue is, the researchers are conducting an ongoing survey of American workers. The initial data, gathered from 1,150 full-time, U.S.-based employees, reveals that workslop is already a common experience in the modern workplace.
The findings indicate that 40% of respondents had received workslop from a colleague in the past month. This suggests that a substantial share of the workforce is already dealing with the consequences of poorly executed AI-generated content, leading to frustration, project delays, and decreased overall efficiency.
This trend highlights a critical gap between the availability of powerful AI tools and the skills required to use them effectively. Many employees may not yet know how to properly prompt, verify, and refine AI outputs, leading to the unintentional creation of workslop.
Developing Strategies to Combat Workslop
The researchers emphasize that the solution to workslop is not to abandon AI but to cultivate a culture of responsible and intentional use. They propose a two-pronged approach focused on leadership and clear organizational policies.
Leadership and Modeling Behavior
The first step requires workplace leaders to "model thoughtful AI use that has purpose and intention." When managers and executives demonstrate how to leverage AI as a tool for brainstorming, research, and drafting—rather than as a shortcut to a final product—they set a powerful example. This includes being transparent about when AI was used and showing the critical thinking applied to its output.
Establishing Clear Guardrails
The second critical component is to "set clear guardrails for your teams around norms and acceptable use." Companies cannot assume that employees will intuitively understand how to use AI responsibly. Clear policies are needed to guide them.
These guardrails should address several key areas:
- Verification and Fact-Checking: Mandating that all AI-generated data, statistics, and claims must be independently verified.
- Transparency: Establishing norms for when employees should disclose the use of AI in their work.
- Quality Standards: Defining what constitutes a finished, high-quality product and making it clear that unedited AI output does not meet that standard.
- Training: Providing training on prompt engineering, critical evaluation of AI content, and the ethical considerations of using these tools.
By implementing these strategies, organizations can begin to shift their culture from one that tolerates workslop to one that uses AI to genuinely enhance human intelligence and productivity. The goal is to ensure that AI serves as a valuable assistant, not as a source of additional work.