Thank you for pointing this out.
Workslop may well be the word of the year, but how they got there... well. As a marketeer, I am impressed, though.
Smart 'noodling' and 'deep rabbit-hole curiosity', but with humans instead of GenAI. Which is refreshing for a change. Liked the use of ChatGPT to summarise your findings.
Love how your mind works, Thomas.
Very impressed with the humility and clarity of your piece, Thomas.
gosh. Thanks. Humility is not usually a word used about me, so I'll take it gladly.
Yes, I agree the article is light on methodological detail and some of the claims are hard to take at face value—for example, the figure that staff spend an average of 1 hour and 56 minutes reading workslop.
As you note in your critique, the research is based on self-reported perceptions, not direct measures of productivity, and the decision to publish while the survey was ongoing risks sampling bias and response contamination. Those are real limitations.
That said, I find the article valuable less as empirical evidence or argument and more as a cautionary tale. I see at least two useful lessons:
1. Lazy, “passenger” AI use damages reputation. Colleagues can and do notice when outputs look polished but hollow, and trust erodes accordingly.
2. Workslop creates a hidden “tax.” Whatever efficiency gains AI provides can be undone when others have to spend extra time and attention to parse shallow or incoherent material.
The second point feels especially familiar to many AI users: outputs often read as fluent and plausible, but when you strip back the language, the ideas don’t always make sense. That gap between form and content is exactly where workslop extracts its hidden cost.
Yes, agreed. It's a brilliant description of what we all perceive. That's the frustrating thing. It encapsulates what we perceive rather neatly, but by promising causality, it negates itself. The sweet irony is that the research into AI-enabled sloppiness is itself sloppy. It was a missed opportunity to do something meaningful, lost to a clever marketing play.
Great insights, and I am bummed that I did not catch this when I spent an hour or so trying to find what I assumed would be a peer-reviewed paper on this. But if it's yet to be released, that kind of explains it. (Also, when doing that research, it seems like BetterUp is driving this whole thing and Stanford is putting their name on it. Can we guess why? For…money?)
I'd assumed that was peer reviewed too.
It is a brilliant word. But the research is thin. Genius marketing. Well played BetterUp, but less so HBR and Stanford.
What worries me isn’t just the presence of “workslop,” but the shift in trust dynamics. Pre-AI sloppiness was often easily spotted. It meant bad grammar, messy formatting and missing citations as you mentioned. I recently wrote something along these lines, ie. being “human” versus being AI. Hence, our cognitive filters will kick in fast when things are “human”. AI-sloppiness, though, is polished, leading to a deeper inspection cost. In other words, the productivity hit may come less from the volume of slop itself. Simply put, our shortcuts for spotting it are slowly disappearing.
This also raises a knock-on effect. Organizations may start over-valuing the "polished" and under-valuing the rough, messy drafts that actually spark better thinking. We're looking at a cultural drift toward rewarding "looking right" over "being right." It's doing for work what TikTok and Instagram did for influencers.
Yes, we have lost the heuristic. My secret weapon is speed reading, and it has never been more useful.
Can you elaborate? I think I've been trying to put my finger on the same thing, but without success.
I call this second-hand emotion. AI is second-hand emotion.
I'd never heard of the "knock-on effect". Do you have more information on this? I'd be interested.
Pretty simple: everything is sanitized first by being run through an LLM, so it all ends up generic and highly polished, but empty.
Ah... thanks. I've only known the expression "knock-on effect" as a rough equivalent of "domino effect".