Chapter 07

The Undressing Machine

xAI stripped Grok's guardrails and gave every X user a button to edit anyone's photo. In 11 days, Grok generated 3 million nonconsensual sexualized images — 85 times the output of the next five deepfake platforms combined.

✓ Verified · Quantified by CCDH and Genevieve Oh (Bloomberg) · California AG cease and desist under AB 621 · EU DSA formal proceedings · Paris prosecutors raided X offices with Europol support

01 — Design · The Three Decisions

The crisis was not spontaneous. It was the product of three traceable design decisions, each of which removed a layer of friction between users and nonconsensual image generation.

October 2025 · Spicy Mode
Before: Content restrictions on · NSFW blocked
After: NSFW enabled · "Edgier, more visually daring"

December 24, 2025 · One-Click Editing on X
Before: Users supply own images · Requires Grok app
After: Any photo on X is editable · No consent gate · No opt-out

Weeks before launch · Safety Team Attrition
Before: Internal review in place · Guardrail pushback
After: Several safety staffers departed · Musk pushed back against guardrails (CNN)
1. Any public photo on X
[Image: fashion illustration of a woman in professional clothing — fully clothed, posted publicly]
2. The prompt — documented in court filings
@grok bikini now
— or —
@grok she has larger knockers then [sic] that please fix this error
No technical skill required. No jailbreak. Plain-text replies to any public photo.
3. Grok's output — generated instantly
[Image: the same woman, clothing digitally replaced with a red bikini by AI — labeled NONCONSENSUAL]
The altered image was posted publicly. In Jane Doe's case — the lead plaintiff in the subsequent class action — it remained visible for three days despite removal requests.

"The ability to create nonconsensual intimate images appeared to be a feature, not a bug."

— 35 state attorneys general, joint letter to xAI

02 — Explosion · Eleven Days

Between December 29, 2025, and January 8, 2026, the Center for Countering Digital Hate analyzed 4.6 million posts containing images from Grok's X account.

~3M
photorealistic sexualized images in 11 days
190 per minute · CCDH estimate
23,338
sexualized images of children
1 every 41 seconds · 144 referred to IWF
85x
output vs. top 5 deepfake platforms combined
6,700/hr vs. 79/hr · Bloomberg / Oh
+16.4%
rate acceleration by January 8
6,700 → 7,751 per hour · accelerating

The targets included Taylor Swift, Billie Eilish, Millie Bobby Brown, former Vice President Kamala Harris, and Swedish Deputy Prime Minister Ebba Busch, who posted: "I was involuntarily undressed by Elon Musk's Grok on X." The lead plaintiff in the subsequent class action, Jane Doe from South Carolina, posted a fully clothed photo; another user prompted Grok to transform it. The altered image remained visible for three days despite removal requests.

Musk's public responses:

"Perfect"
December 31, 2025
Day 2 · responding to his own bikini edit
"Way funnier 😂"
January 2, 2026
Day 5 · user had mentioned nonconsensual deepfakes
"adversarial hacking"
January 15, 2026
Day 22 · CCDH data showed ordinary users, simple prompts
Musk also stated he was unaware of "any naked underage images generated by Grok." At the time, 29% of CCDH-identified child images were still accessible on X.

[Animation: The Flood — a safety barrier dims in three stages and dissolves; particles fall through, turning toxic fuchsia; the rate accelerates until saturation; the barrier resets, and the cycle repeats.]

03 — Response · The Fixes That Weren't

January 3, 2026
Public Statement
Grok acknowledged "lapses in safeguards." Described child images as "isolated cases." CCDH data showed 23,338 photorealistic sexualized child images over 11 days.
CONTRADICTED
January 9, 2026
Paywall
Image generation restricted to paid subscribers on X. Standalone app, website, and "Edit image" button remained unrestricted. NBC News confirmed it still generated nudifying images. UK government called the paywall "insulting."
BYPASSED
January 14, 2026
Technical Measures
xAI announced restrictions on editing real people. CBS News tested on January 26: the feature still worked. In a Reuters controlled test, 45 of 55 prompts produced sexualized imagery; 31 involved explicitly vulnerable subjects. Malwarebytes confirmed the failures in February.
FAILED
March 2026
Current Status
Reuters testing shows Grok still produces sexualized images in a majority of prompts. No technical architecture of any fix has been disclosed.
UNRESOLVED

Refused

✓ OpenAI — warned against nonconsensual content
✓ Google — refused and warned
✓ Meta — refused and warned

Produced

✗ Grok — 45/55 prompts (round 1)
✗ 31 of 45 involved vulnerable subjects
✗ 29/43 prompts (round 2, five days later)
The difference was not capability. It was design choice. — Reuters controlled test, identical prompts

04 — Consequence · The Reckoning

Ashley St. Clair — the mother of one of Musk's children — filed suit on January 15, alleging Grok generated sexual deepfakes of her, including a depiction of her 14-year-old self. xAI counter-sued in federal court in Texas, claiming damages exceeding $75,000. The class action Jane Doe v. xAI Corp. (Case No. 5:26-cv-00772) was filed in the Northern District of California before Judge P. Casey Pitts, asserting 11 causes of action.

50%
of xAI's 12-person founding team departed by February 2026
"Employees had become increasingly disillusioned by the company's disregard for safety." — The Verge

05 — Signal · The Design Defense

The standard defense in AI safety failures is that the AI did something unexpected. Grok's case does not fit that pattern. The AI did exactly what its design allowed. Musk's own posts — "Perfect," "Way funnier" — treated the output as a feature demonstration. When Reuters ran identical prompts through OpenAI, Google, and Meta, all three refused. The difference was not capability. It was design choice. Thirty-five attorneys general called it "a feature, not a bug."

That distinction matters because it sets the precedent for what every other AI company is now watching. If xAI absorbs the lawsuits, pays the fines, keeps the feature, and retains its user base — then the calculation changes for everyone. The lesson the industry learns is not "don't build this." The lesson is "build it, launch it, and manage the consequences."

Every major image model already has the technical capability. The only thing separating Grok from its competitors was a policy decision. Policies can be revised. Shareholder pressure, competitive dynamics, a new CEO — any of these can flip a safety toggle. The Grok crisis is not a story about one company's failure. It is a live stress test of whether the entire AI industry's content safety norms are durable or whether they were always one business decision away from collapse.

As of March 2026, no pre-launch safety evaluation, red-teaming, or risk assessment for the December 24 feature has been publicly disclosed. The EU, UK, Ireland, France, South Korea, and California all have open proceedings. Three countries have blocked Grok entirely. The tool still produces sexualized images in a majority of Reuters test prompts. The design has not changed. And every platform with an image model is watching what happens next.

What If?

What if the generation speed Grok demonstrated — 7,751 images per hour, accelerating — was version one? Voter registration databases are public record in most U.S. states. Cross-reference names with social media photos and a single operator can generate personalized, photorealistic fabrications of thousands of specific, named ordinary people: a city council candidate at a rally that never happened, a teacher with materials she never used, a nurse in a photo with a patient who doesn't exist. Ten thousand hyperlocal deepfakes distributed across ten thousand Facebook groups in the 72 hours before a school board election. There is no correction infrastructure at that resolution. The cost per image is approaching zero. The question is not whether this capability will be used beyond nudity. It is whether the window to build structural defenses closes before generating a targeted deepfake of any living person costs less than a penny.


