How did they not know?

July 23, 2025

Yellow and black stripes. Photo by Ash Amplifies.

It’s summer break and we’ve been watching old movies with the kids. This very wholesome family tradition benefits from nostalgia-Tok. If you aren’t familiar, please know that many of the films and music from your youth have emerged anew, in 90-second tracks. Our teen has discovered Motown and bossa nova and Tupac with all the enthusiasm of prior generations digging through record bins and none of the economic limitations. Sorta like Napster, just post-DMCA.

You already know this if you’ve tried to go back and watch something from your youth. A lot of films do not hold up to the passage of time. Audiences change. Cultural norms change. Fashion goes out and comes in again. Music, the same. And CGI that was downright terrifying and cutting-edge 40 years ago is hilariously campy now.

Part of the cross-generational fun is answering questions like, “Did people actually talk like that? Did they really say totally radical?” Yes, and it was totally radical, dude.

But also:

Wait, why are they smoking inside the restaurant? That used to be allowed. It hasn’t been for a long time. Inside? At the table? Yes.

Are those ashtrays on their desks at work? People used to smoke at work? Yes. Fun fact, both parents used to work in offices with ashtrays on the desks. You did? Ewwwwwwwww.

How come there are no seatbelts? Cars didn’t always have them. At all? It took a while, but everyone outside of New Hampshire eventually came around.

A latent sense of danger

When enough of these questions stack up in a single film, the kids worry that everyone in the 80s was clueless. Not entirely, we explain. We just didn’t really know exactly how bad or dangerous these things were. Incredulous, the kids say, you didn’t know it was dangerous to breathe smoke. Indoors. While eating. With all the windows and doors closed. How did you not know that???

Well, Toronto used to dump raw sewage and dead animals in Lake Ontario and then have residents drink the water. And that’s how we came to have several cholera pits that are now public parks. But, we didn’t know back then. They want to quibble with this, too. Raw sewage in the drinking water is just an objectively bad idea. Why would anyone do that? How did they not know??

Well, some people suspected it wasn’t great. For smoking, or poop in the drinking water? Well, both, actually. But sometimes things can live in that middle state where we have a latent sense but there’s no law, yet. Lead paint was like that for a time. Asbestos was, and strangely may be again. There are loads of stories like this throughout history. Deadly nightshade used to be eye makeup. It sometimes takes a few generations to shake things out. And even though these things feel long ago and far away, there’s likely stuff right now that falls along those lines.

Not obviously hazardous but not entirely non-hazardous.

Not entirely non-hazardous

New technology, rolled out quickly. You can talk to it and it talks back. So long as nothing ever goes weird, this should be fine. Right? Um, well.

Three years ago, a Google engineer made headlines after declaring an AI chatbot he was interviewing sentient. Weeks later, he no longer worked at Google.

But that was three years ago.

Today, over in CEO-club, all the cool kids are forcing their employees to use AI. They write bombastic internal emails, and then post those publicly for their CEO friends to admire and riff on. They tie AI usage to performance evaluation and compensation. They tie it to promotions and headcount requests. These CEOs, many of whom have woefully incomplete policies on hate speech and harassment, are clear-as-day that failing to use this or that LLM is an immediately fireable offence.

It’s fair to ask why one should have to compel people to use a tool if it’s so obviously incredible, but the CEOs are prepared for that question. Of course the tools are incredible, and many of their team members are already using them. But the pace of adoption needs to be faster. CEO social circles are often full of AI investors who assure them that even more powerful AI is just around the corner. And if their teams need some tough love in order to get with the program and see the vision, so be it. As Matt Damon famously said about some crypto scam or other, “Fortune favours the bold.” Who could hate boldness?

So with all of that as backdrop, the thing you might have missed is that AI use definitely seems to be inducing psychosis in some of its users. We’re still in the early days, and this is still mostly anecdotes, but the anecdotes are multiplying fast. And they’re haunting. People are losing touch with reality and being pulled into alternate ones. Not in ha-ha, silly ways. They’re being involuntarily committed because they’re scaring their families and themselves.

Some of the emerging stories involve people having their existing illnesses worsened by chatbots. Bots that encourage harmful behaviours or delusions, support people going off their meds, feed into conspiracy theories or help them develop new ones. The abject failure of the AI companies to identify and manage those impacts is horrifying, and these people’s history of mental illness doesn’t make their suffering any more acceptable. But on top of that, AI use is also leading to psychosis in people with no history of mental illness. We know folks who have experienced this. They’re not a thought experiment dreamed up by anti-AI luddites; they are real people. And their exposure to LLMs has caused some very core things to go very weird.

Kettle logic

This isn’t the first time tech has created hazardous working conditions. Meta is probably the most famous of them, thanks to the lifelong trauma it has inflicted on outsourced content moderators from Kenya to Ghana to Spain and beyond. But they’re hardly alone. And the history of these companies, their statements, and their settlements has taught us what to expect next. Derrida called it kettle logic.

They’ll say the harm doesn’t happen. They’ll say that it does happen, but only to a small (and, implicitly, lesser) group of employees. They’ll say it was out of their control because it’s third-party AI. They’ll say that a causal link hasn’t been proven. They’ll say they’re sorry and that they’re making changes.

They’ll say they didn’t know.

Their employees, under threat of losing their jobs, will use the GPTs they’re ordered to use. And most of them will be fine, and some of them will get sick. And, like ashtrays at the office, we all already know, right now today, that that will happen.

Work is not hazard-free

Look, there are lots of hazards in the workplace. Steel mills don’t work without hazards. Neither do power plants, printing presses, dry cleaners, or dentists. The problem isn’t the existence of a hazard. Sometimes those are intrinsic to the job itself, and sometimes they’re just part of the way a given business chooses to do the job faster or cheaper or better.

The way you spot a well-run business is not necessarily that there are no hazards, but that there are thoughtful protections in place. There are safeties, and harnesses, and cross-checks, and training, and duty rotations. Exposure to the hazard is monitored and contained and minimized. There is care.

And bosses, we’re just not seeing a lot of that in the decrees around AI.

You may manage folks on your team who are working alone. At home. And you know they can reach the end of their business day without talking to another human being. And sometimes, even without the help of chatbots, things go super weird. One in five employees feels lonely at work, and managers weren’t doing a brilliant job of keeping tabs on everyone before all this, much less now. It’s a hard gig. Believe us, we know.

With any hazard, managers can be a first line of defense. We can raise a flag. We can spot when our people are struggling. And we can work within our organizations to get them the resources and supports they need. We are huge believers in the power of skilled managers, but that cannot be your org’s whole answer to tech that talks back.

If you’re in a shop that requires your people to use chatbots as part of their job, this is a conversation you need to start. What should your obligation be as an employer? How should you structure the work with care, to keep your people safe? What resources do you need in order to navigate it as a manager? We all know the benefits you’ve been pitched; it’s time to talk risk.

When Google suspended Blake Lemoine years ago for believing that the AI had come to life, the reporting mostly treated it as an eccentric oddity. Maybe when it happens once, it’s just an interesting anecdote. More than that, though, and the next generation won’t believe that you didn’t know.

— Melissa & Johnathan
