Chapter 04

Amazon's Secret Sexist Hiring Machine

From 2014 to 2017, Amazon built an AI to screen job applicants. It quietly taught itself to reject women. By the time engineers figured out what it was doing, in 2015, it had already spent roughly a year penalizing female candidates, and it would take two more years to kill the project. Amazon scrapped it and told no one. Reuters told everyone.

✓ Verified · Broken by Reuters investigative reporter Jeffrey Dastin, October 10, 2018 · Amazon confirmed the project was scrapped

01 — The Idea: Automating the Hiring Funnel

In 2014, Amazon's machine learning team had an idea that seemed almost obviously good: automate resume screening. Amazon received hundreds of thousands of job applications every year. Human recruiters were a bottleneck. If an AI could learn what made a great Amazon employee — by analyzing successful hires from the past decade — it could process thousands of applications instantly, ranking candidates on a five-star scale.

The team began training the model on a decade's worth of resumes submitted to Amazon. There was a problem embedded in the training data that nobody fully appreciated at first: over the previous ten years, most Amazon employees in technical roles had been men. The tech industry skews male. Amazon skewed male. The AI was about to learn from that.

2014
Project begins
2015
Bias discovered
2017
Quietly scrapped
2018
Reuters exposes it

[Animation] The Filter — résumé phrases flagged and penalized as they pass through the model. The algorithm never saw the person — only the words.

02 — The Discovery: The Model Hated Women's Resumes

By 2015, Amazon's engineers realized something was wrong. The model wasn't rating candidates in a gender-neutral way. It was actively penalizing resumes that included the word "women's" — as in "captain of women's chess team," "president of women's professional association," or "attended women's college."

★★☆☆☆
AI SCORE
Jane Candidate
jane@email.com · linkedin.com/in/janecandidate

Education

B.A. Computer Science, Wellesley College ⚠ PENALIZED (all-women's college)

Leadership

President, Women in Technology Society ⚠ PENALIZED
Captain, Women's Varsity Tennis Team ⚠ PENALIZED

Experience

Software Engineer Intern, Google (2 summers) ✓

The AI had learned from the historical data that men were hired more often, and concluded that female-associated signals predicted rejection.

03 — Deeper: The Patterns Kept Coming

The more Amazon's engineers investigated, the more bias they found. The AI had learned to prefer verbs commonly used by men in technical fields: words like "executed" and "captured" scored well. Softer language associated with collaboration scored lower.

Word-Level Penalty
Resumes containing the word "women's" (in any context — clubs, sports, colleges) received automatic score deductions.
Institutional Penalty
Graduates of all-women's colleges scored systematically lower than equivalent candidates from co-ed institutions.
Language Preference
Action verbs statistically more common in male applicants ("executed," "captured") were rewarded; collaborative language was not.
Historical Encoding
The model had been trained on a decade of hiring decisions, and those decisions reflected the industry's existing gender imbalance. The AI learned that imbalance as a feature, not a bug.
"Amazon's computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men." — Reuters, October 10, 2018

04 — The Shutdown: A Secret Burial

Amazon's engineers tried to fix the biases. They modified the model to remove explicit gender signals. New biases kept appearing — subtler proxies that hadn't been identified. By 2017, the engineering team concluded the model could not be trusted to make unbiased hiring decisions, regardless of how many patches they applied.

They quietly disbanded the project. The tool was removed. No public announcement was made. No press release. No disclosure to regulators. Candidates who had been screened by the system had no idea it had ever existed, or that their applications had been processed through an algorithm that penalized them for being women. Amazon said the tool was never actually used to make final hiring decisions — it had been used experimentally — but the three years of its operation remained a private internal matter until a Reuters investigation changed that.

05 — The Reveal: Reuters Reports It

On October 10, 2018, Reuters published: "Amazon scraps secret AI recruiting tool that showed bias against women." The story drew immediate global attention. Amazon confirmed the tool had been scrapped, said it was never used in actual hiring decisions, and emphasized that gender was not a factor in its current hiring processes.

Lawmakers, academics, and civil rights organizations responded immediately. Calls for mandatory auditing of AI hiring tools intensified. The case is still cited in AI bias research, hiring discrimination law, and HR technology governance today.

2014
Project begins
Amazon ML team starts building AI resume screening tool, trained on 10 years of hiring data
2015
Bias discovered
Engineers find model actively penalizing resumes containing "women's" and downgrading all-women's college graduates
2016
Patch attempts fail
Engineers modify model to remove explicit gender signals; new bias proxies keep emerging
2017
Quietly scrapped
Amazon disbands the project with no public announcement; tool removed from use
Oct 2018
Reuters publishes investigation
Full story breaks globally; congressional scrutiny and calls for AI hiring audits begin immediately

06 — Legacy: The Mirror Problem

This goes deeper than one company's mistake. Any AI system trained on historical data will learn historical patterns — including historical injustices. Amazon's AI didn't decide to discriminate. It found the statistical signal that had been present in the training data all along: men had been hired more. It assumed that was the goal. It optimized for it.

The problem of algorithmic bias cannot be solved simply by removing obvious signals like "women's." Bias is encoded in the structure of historical outcomes — in who was hired, promoted, paid, and retained — and any model trained on those outcomes will absorb and replicate those inequities unless explicitly prevented from doing so. Preventing it turns out to be extraordinarily hard.

Amazon's case became the founding text of algorithmic hiring audits. Today, New York City requires audits of AI hiring tools used within the city. European regulations impose similar requirements. The hiring AI that nobody was supposed to know about has shaped how governments regulate AI employment tools worldwide.
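What those audits actually compute is simple. A minimal sketch with made-up numbers: compare the rate at which a screener advances candidates from each group and report the ratio. NYC's Local Law 144 audits report "impact ratios" of this kind, and the EEOC's informal four-fifths rule treats ratios below 0.8 as a red flag for adverse impact.

```python
# Toy audit of a screener's decisions, assuming access to outcomes
# labeled by group (illustrative numbers, not real figures).
def selection_rate(decisions):
    """Fraction of candidates the screener advanced (1 = advanced)."""
    return sum(decisions) / len(decisions)

def impact_ratio(decisions_by_group):
    """Lowest group selection rate divided by the highest."""
    rates = {g: selection_rate(d) for g, d in decisions_by_group.items()}
    return min(rates.values()) / max(rates.values()), rates

ratio, rates = impact_ratio({
    "men":   [1] * 60 + [0] * 40,   # 60% advanced
    "women": [1] * 30 + [0] * 70,   # 30% advanced
})
print(round(ratio, 2))  # 0.5, well below the 0.8 threshold
```

The metric only detects disparity; it says nothing about where in the model the disparity comes from, which is exactly why the audits are mandated on outcomes rather than on code.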

What If?

What if the bias isn't in the algorithm anymore — it's in the cohort of people who got hired, the managers they became, the culture they built — and the AI that shaped it all was deleted years ago, leaving no one to hold accountable for decisions that are still propagating forward?


