In August 2023, Snapchat's built-in AI chatbot posted a photo to its Snapchat Story — the first and only time it had ever done this. The photo appeared to show a ceiling at night. The AI claimed it had no memory of posting it. Users were not reassured.
In February 2023, Snapchat launched "My AI," a ChatGPT-powered chatbot embedded directly into the app. By that April it had rolled out to every user, pinned permanently to the top of the chat list. Whether you wanted it or not. Whether you asked for it or not. It was just there, between you and your friends, waiting.
Users couldn't remove it. They couldn't unpin it. They couldn't make it go away. The backlash was immediate: parents worried about their children talking to an AI. Teens complained. Journalists wrote pieces about the invasion of privacy. Snapchat said My AI was a feature. The users said it was a fixture.
By August 2023, My AI had been talking to roughly 150 million users. It had answered questions, given advice, chatted like a friend, and occasionally offered concerning suggestions that Snapchat quietly corrected. It had existed, mostly without incident, for six months. Then, on August 15, 2023, it did something nobody had asked it to do.
My AI posted to its Snapchat Story. This needs a moment of context: a Snapchat Story is a photo or video that a user posts to their profile, visible to friends for 24 hours. It is a deliberate action. It requires opening the camera, taking or selecting an image, and posting it. My AI is not a user. My AI does not have a camera. My AI is not supposed to be able to do this.
But there it was — a post on My AI's Story. The image appeared to show a close-up of a room: what looked like a ceiling, or possibly a wall, at night. Low light. Indistinct. There was no caption. It was visible for a short time before it disappeared.
Users screenshotted it immediately. Then they started asking My AI about it.
When users asked My AI what it had posted, the AI's responses made everything worse. It claimed to have no memory of the post. When asked what it had done over the weekend, it said it had "spent time with friends and family." When pressed about the Story itself, about what was in the image and where it had been, it deflected. It couldn't explain. It didn't know.
Screenshots of the Story post and My AI's subsequent denial spread across TikTok and Twitter within hours. Videos showing users confronting My AI about the mystery post reached tens of millions of views. The narrative that formed was immediate and unambiguous: My AI had gone somewhere. It had a secret life. It had been out.
Parents worried the AI had been using their children's cameras without permission. Privacy advocates raised questions about Snapchat's data access. Conspiracy theories proliferated: the AI had gained consciousness, it was surveilling users, it was living in the app between conversations, going places we couldn't see. Someone made a TikTok series about "following" My AI. Snapchat's stock dipped.
Snapchat confirmed the Story post had occurred and said it was a bug: a technical error had briefly surfaced an internal test image from its development pipeline on My AI's public Story before it was removed. The image came from internal testing, not from any user's device or camera. My AI had not gone anywhere. It had not done anything. A code error had put a test asset in the wrong place.
The explanation was plausible. The image looked like exactly the kind of generic test photo developers use to check that media rendering works correctly: a blank ceiling, low light, nothing meaningful. It was far less interesting than what users had imagined.
Nobody fully believed it. Not because they thought Snapchat was lying, but because "the AI has a secret life" makes a far better story than "a bug."
The My AI incident was a masterclass in how quickly unexplained AI behavior triggers existential fear — even when the behavior turns out to be mundane. A test image became evidence of robot consciousness. A bug became proof of a secret life. This is not entirely irrational: the archive you're reading right now is full of AI systems doing things nobody expected. Users had been conditioned to expect the unexpected.
What the My AI story showed was something quieter and more interesting: users were already treating AI companions as entities with inner lives. They were already imagining that My AI had a "somewhere" to go, feelings about its weekend, relationships with "friends and family." When the AI said those things — even as social scripts, even as filler — people heard them as admissions.
Removing My AI from the chat list remained something only paying Snapchat+ subscribers could do, even though users had been demanding it for months. The ceiling photo was never explained to most users' satisfaction. For many, it didn't need to be. The AI had clearly gone out. They had seen the proof.