Get down off the table, my guy

February 18, 2026


Photo by Stephen Audu.

Several years ago, we were at a party, talking about work. That may sound like the worst party, but talking about work is sort of our thing. It’s not unusual for folks to corner us if they’ve got a particularly good anecdote or a particularly disastrous boss.

Some version of “get a load of this…”, only slightly more colourful a few beverages in.

“I’ve accidentally been put in charge of my company’s AI strategy. I’m leading a fucking task force.”

“Oh, congrats,” we said. “That sounds like neat recognition and an opp for some org-wide visibility. How do you feel about it?”

“No, no. You’re not understanding. I fucking hate my job. So I was trying to get out of doing it. Like, all the parts of it. I experimented with how many things I could get AI to do so that I didn’t have to. And then someone in the IT team did an org-wide view of AI adoption. Like, they narced on me. But instead of getting in trouble, the management team was elated. I am now, apparently, a model employee. And they want me to teach the rest of the org how I approach things. That’s how I ended up on the task force. Too funny, right?”

The task force emerges

Back when our friend joined the task force, the group was largely focused on early experimentation and small-scale adoption. The hypothesis wasn’t so much a question as a conclusion that ended in a question mark. Not, “Is this useful?” Rather, “Let us enumerate the ways in which this will materially impact our business?” And for a time, the answer was sorta, maybe, under the right circumstances, in limited applications, yeah, ish.

In the earliest days of AI adoption, there was no first-mover advantage to be found. Most projects were marked by abject failure. And while there’s a weak argument about early experimentation contributing to org-wide-psyche-based-readiness for adoption, that’s more a hunch than a finding.

Two years ago this was all a funny story and sort of an edge-case. But have you noticed that there seems to be a whole flurry of AI shit happening lately? Like a drunk dude at a party (not our friend! A different dude! A metaphorical dude!), it’s been generally sucking all the discourse oxygen out of the room for some time now. But like the drunk dude who just climbed onto the dining room table, it’s somehow gotten much louder and harder to ignore in the last bit.

It feels like the current shouting-on-tables really kicked off last month with Steve Yegge’s Gas Town fever dream. If you haven’t read it, well, that’s actually fine, because a month later Matt Shumer wrote Something Big is Happening, which amounts to the same thing. In both, Matt and Steve want you to know that something important has tipped. Several important things, actually. That LLMs are getting much better at several things, but especially writing code. That tools like OpenClaw are allowing those models to do way more than chat — they can browse the web, make plans, purchase things, send text messages to your friends. And that Ralph Wiggum automation loops (yes, really) mean that you don’t have to watch them do any of this; they can just go and run themselves through it until they get it right. To understand the vibe, it might help to imagine the drunk guy at the party, standing too close to you, and shouting “No, you don’t GET it.” Over and over. For months.
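If “automation loop” sounds abstract, the shape of the thing fits in a dozen lines. Here’s a minimal sketch, not Yegge’s or anyone else’s actual tooling; `run_agent` and `checks_pass` are hypothetical stand-ins for a real agent harness and a real verification step (like a test suite):

```python
# A minimal sketch of a "run it until it works" automation loop.
# run_agent() and checks_pass() are stand-in stubs; in real use they'd
# be an actual coding agent and an actual verification step.

import random

def run_agent(prompt: str) -> str:
    # Stand-in for a real agent call (LLM + tools). Here it just
    # "succeeds" at random so the loop is demonstrable.
    return "good" if random.random() < 0.3 else "bad"

def checks_pass(result: str) -> tuple[bool, str]:
    # Stand-in for verification, e.g. running the test suite.
    return (result == "good", "" if result == "good" else "tests failed")

def run_until_done(task: str, max_attempts: int = 20) -> str | None:
    feedback = ""
    for _ in range(max_attempts):
        # Each attempt feeds the previous failure back into the prompt.
        result = run_agent(f"{task}\n\nPrevious feedback:\n{feedback}")
        ok, feedback = checks_pass(result)
        if ok:
            return result   # nobody had to watch any of this happen
    return None             # give up after max_attempts

print(run_until_done("fix the failing build"))
```

That’s the whole trick. The loop doesn’t get tired, doesn’t get bored, and doesn’t need you in the room.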

And the thing is, the drunk guy is right. Something has tipped. A trillion dollars of stock market value wiped out in a week is something. Mustafa Suleyman, Microsoft’s CEO of AI, says AI can replace most office work within 12-18 months. That’s something. And earlier this month HBR came out with some preliminary but suggestive research showing that AI use doesn’t tend to reduce work; it intensifies it. That’s not surprising, but it is something.

If the tipping point just tipped (and it seems to have), this is the week, for many of you, that your org’s AI tiger team will start to roar. And bosses, it means now is a good time to check in on a few things.

Your own bosses (or board) may have things to say about bets, and about where the plan from two months ago needs reassessing. If yours is an org that has been through a lot of changes, it can be tempting to file this under yet-another-change. But it’s worth listening to what’s being communicated top-down right now. And just as important is what’s being communicated bottom-up.

You have work to do

No, we are not going to see the elimination of most office work in the next 12 months. There is still plenty of hype and froth in the AI discussion and we’re not suggesting you take it all at face value.

And yes, the existing problems with AI are still very real. It doesn’t actually think, but it suppresses your critical thinking when you use it. It has a massive energy footprint. And it only exists through open theft of intellectual property to train on, and tortured legal arguments about why consent shouldn’t be required because it’s too hard.

We don’t need you to pretend that AI is any better or cleaner or risk-free than it is. But, as Anil Dash wrote late last year, we also need to stop pretending that no one wants it or uses it. Because they do.

Maybe that still feels like noise. You steered clear of the blockchain noise. And the metaverse noise. Is this just another hype cycle you can ignore until it dies off? Maybe. But our sense is that this one’s gonna be different. The metaverse is already forgotten, but if the AI companies all went bust tomorrow, the models are advanced and widespread enough that people would keep using bootleg copies. AI is here, and it’s going to show up more and more in your org, whether that’s news that excites you, or depresses you, or does both at the same time.

And that means you have work to do.

You have work to do because while AI will show up in your org, the die is not cast yet on what that looks like. That’s a conversation you can only steer if you engage with it. You can ask questions like, “Are there tasks or sensitive areas of the work that should be human-only?” Because we bet you’ll get a variety of answers. And, as a follow-up, “When we involve AI, what are our thoughts on when we need a human in the loop, and when we can let the AI run free?” Are there any things in that last bucket? Are you sure you all feel the same way about it? Does stuff like this change anyone’s mind?

You have work to do because the human-factors hazards of these chatbots are real. If LLMs are going to be a part of your workplace, you have the same duty to think about safety as you would running a warehouse full of forklift operators, or a shop full of arc welders. That safety push isn’t going to come from the guy standing on the dining room table.

And you have work to do because the actual, for real future of work will be defined by the decisions bosses and organizations make. Right now. We’ll be damned if that’s left to Elon fucking Musk, or Sam fucking Altman, or Peter fucking Thiel. Or the rest of the group chat executives who salivate at the idea of how many people AI will let them fire. People who don’t have to work should not get to make that call for the rest of us who do.

Besides, if we’re gonna design new work, it’s on all of us to think bigger than that. Where can new capabilities complement your team’s work instead of putting up a piss-poor imitation of it? Where is your team blocked by a stupid internal thing that no one has the resources to build a custom tool for, but that would change your ability to show up well for the people you serve? Where can we build a future of work that brings more people along? If we’re going to burn it all down and start over, there’s a long list of fucked up and broken shit that needs your attention. Specifically, if your model for excellence is the person in your org who cares the least about excellence, maybe start there.

— Melissa & Johnathan


Did this hit you right in the feels?

We're not sorry.

Subscribe to get the next one.
