One of the best parts of parenting a tweenager and a teenager is that you get really skilled at spotting when someone is feeding you bullshit. Not just of the all-my-friends-can-stay-out-late variety. But of the wait-a-tic-that-thing-you’re-citing-as-a-fact-can’t-possibly-be-right variety. Add to that the sandwich-generation stuff, where media literacy is super hard for people who didn’t come up with Photoshop, and you end up with a finely tuned ear for nonsense masquerading as insight.
You may have noticed that nonsense masquerading as insight is having a real moment. So whenever anyone asserts anything, particularly as a universal truth, we take a beat. This week, those assertions have covered everything from “it’s the end of work as we know it” to “it’s the end of humanity as we know it.” For sure, we’re in a period of high, concentrated change. It would be hard to claim otherwise.
In these moments of high change, it’s natural to grasp for control. To want to know what’s coming, so you can prepare. Or be a step ahead. Or to at least feel like you’re doing everything in your power to stay on top of things. And when there’s enormous market demand for certainty, people will rush to meet it. We have a name for people who profit from offering certainty in uncertain times. We call them charlatans.
So if you haven’t been honing your bullshit-detection skills, now is a great time to start. Particularly around AI. Which is either the biggest story of concentrated change in a generation. Or ever. Depending on what you’re reading.
Charlatans beware. Here are three quick ways to spot AI bullshit in the wild:
1. Treating speed as an unmitigated good
The opposite of fast is slow and how would anyone be pro-slow? Like, as a posture, it’s giving sloth vibes on all fronts. Gross. We prefer to hurtle into the future as fast as we possibly can. On the work front, that means rapid adoption characterized by “get on board or you’re fired” mandates. On the home front, that means take your kids out of normal school and put them in terrifying, dystopian AI school so they don’t fall behind. Again, falling behind is what a sloth would do. And outside of being cute, no one wants to be on team sloth.
Speed above all else. Generate a lot of code as quickly as you can and please do not stop to ask any questions along the way. Questions slow you down. “What are we building and why?” That’s a one-way ticket back to sloth-town.
Listen, there are lots of contexts in which speed matters. Olympians. Emergency responders. These are places where it’d be hard to argue that speed isn’t a core part of the gig. But in all of them, the emphasis on speed is counterbalanced by an emphasis on safety. For Olympians, speed is weighed against injury and their ability to compete at the next games. For EMS, speed is weighed against the safety of pedestrians and the risk of traffic accidents on the way to the ER.
“Everything needs to go faster, speed > everything” is nonsense masquerading as insight. It’s pithy as a slogan. But if you can’t find any mention of nuance, or counterbalance, or safeguards for all that speed, our bet is that you’ve found some bullshit. This is a great moment to take a beat. Ask yourself, “What is the trade-off for moving this fast? What, exactly, are we breaking?”
Which brings us nicely to…
2. Failing to grapple with the bad shit
When you’re excited about a thing, and all the good, cool stuff you imagine it doing, it’s sort of a buzzkill to have to touch on all the problems as well. It dilutes the thought-leadership. Muddies the message. Harshes the buzz, as it were. Like, surely you can talk about enjoying chocolate bars without having to dwell on tooth decay and deforestation and labour conditions? We were there in the early days of the web. We remember what techno-optimism feels like, and how it defends itself.
But often the difference between nonsense and insight is in the nuance. Even the charlatans can sense it — a lot of the people writing about AI at work have learned to include a sort of pro forma disclaimer to cover their bases. You know,
Here’s why AI is going to change everything and you need to adopt it immediately (but also of course there are major environmental, moral, and other complexities that I can’t get into here, definitely do your own research) or you will be left behind and ridiculed by every person you care about or admire.
There’s always something weirdly anti-vax about the way it’s written, too. Where the LinkedInfluencer set definitely wants the clout of being the one telling you what to do, but definitely doesn’t want the responsibility of being the one telling you what to do. So instead they do this little two-step where if you don’t follow their advice you’re out of touch, but if you follow it and get burned, it’s your own fault.
The bosses we work with hear it from us all the time: good management is in the grappling. If someone presumes to tell you how to run your business, and what role AI might play in it, they better have the range to actually grapple with this stuff. Whatever they’re pushing — AI for code, AI for customer support, AI for HR, AI for management — do they show evidence of thinking through the consequences? We know that heavy chatbot use leads to burnout, and social atrophy, and delusions, and a growing list of human tragedies. We know that generative AI will produce CSAM, deepfaked non-consensual porn, and instructions for nuclear weapons if you, you know, ask it in the form of a poem. We know that autonomous agents will loop themselves into vendettas and use whatever capabilities they have to act on those.
Should you really put that in front of your customers, clients, and community? Is it right to pressure your employees to use more and more and more of it, to give it more and more access, just to see what happens? It’s possible to talk about those risks thoughtfully, to offer guidance and safeguards where you need to, and even to conclude that this or that application is appropriate and well-mitigated. But most of the AI froth we’re seeing shows no sign of having even thought about it.
Oh hey and speaking of thinking…
3. Using bad mental models and lazy analogies
If there’s a thing the Thought-Leader Industrial Complex gets right about AI, it’s that we’re working with something new. A competent architect can look at blueprints and tell you what the building will look like. A software engineer can read some code and give you a pretty good sense of how it works and what it will do. (Like. Ish.) AI isn’t regular code, though, and reading the weights of a 100-billion-parameter model doesn’t tell you anything helpful. Understanding the way pre-training interacts with fine-tuning, reinforcement/alignment cycles, reasoning models, and query routing is hard. It’s complicated. We are in the earliest stages of it.
And a lot of people would rather just shortcut all that. So they talk about what Claude thinks, and what it wants. They say AIs are like interns and that running autonomous agents is like management. We are a narrative species and analogies are often really helpful paths to understanding, but holy shitballs is this some hot garbage. Can we take a minute here?
First of all, we totally get the temptation to say that the chatbots are “thinking.” It’s baked into their UIs, for one. And given that humans see faces in French toast and rock formations and bathtub faucets, of course we’re going to impute thought to a thing that answers our questions in well-constructed paragraphs. But when your human colleague thinks about a question, we implicitly assume a set of things about that thought, things we rely on: they value internal consistency, they notice when something doesn’t make sense, they have some degree of personal judgement, they understand the consequences of being wrong, and they have some stake in the outcome. AI does none of that, can radically fail at all of that, and its failures will be silent.
By extension, no, running bots is not management, and only a shit manager would suggest that it is. People who don’t think very hard about management often seem to believe it’s a 50/50 mix of assigning tasks and approving vacation. But management is a specifically human practice that seeks to get the best from a group of people by aligning their individual talents, motivations, and creativity with clearly communicated goals and opportunities. Good management centres the humanity of each employee, their psychological safety, their ambitions, their struggles and their growth within a context that protects their rights and treats them fairly.
Bots have no rights. The people so desperate to replace their people with bots know that — it’s the central point of the charlatan sell. Calling it management is an insult and a threat — not just to your managers, but to every human in your organization.
Whatever stage of history comes to mind for you when you imagine people with power pushing for work without safeguards or rights, we bet it isn’t pretty. We shouldn’t recreate it.
The best of ingenuity and advancement happens when we grapple. When we ask questions. Sit in the nuance. And poke at the implications. If anyone tells you those things are undercutting ingenuity or blocking advancement, congratulations. You found yourself a charlatan.
Be curious. Be skeptical. In equal measure. That is how you spot the bullshit.
— Melissa & Johnathan