Chapter 16

Amazon's Secret Sexist Hiring Machine

From 2014 to 2017, Amazon built an AI to screen job applicants. Over those three years, it quietly taught itself to reject women. By the time engineers figured out what it was doing, it had been downgrading women's resumes for roughly a year. Amazon scrapped it and told no one. Reuters told everyone.

01 — The Idea: Automating the Hiring Funnel

In 2014, Amazon's machine learning team had an idea that seemed almost obviously good: automate resume screening. Amazon received hundreds of thousands of job applications every year. Human recruiters were a bottleneck. If an AI could learn what made a great Amazon employee — by analyzing successful hires from the past decade — it could process thousands of applications instantly, ranking candidates on a five-star scale.
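The concept is easy to sketch in a few lines. The version below is a hypothetical illustration of the idea as described, not Amazon's actual system: train a text classifier on past resumes labeled by hiring outcome, then map its predicted probability onto a one-to-five-star score. The data, the scikit-learn pipeline, and the star mapping are all illustrative assumptions.

```python
# Hypothetical sketch of the screening idea: learn "what a successful hire looks
# like" from historical resumes, then rank new applicants on a 1-5 star scale.
# Invented data and a deliberately simple pipeline; not Amazon's actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Historical applications, labeled by whether the person was hired.
past_resumes = [
    "software engineer built distributed systems java aws",
    "software developer web services python postgres",
    "data analyst reporting dashboards excel",
    "retail associate customer service scheduling",
]
was_hired = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(past_resumes)
model = LogisticRegression().fit(X, was_hired)

def star_rating(resume_text: str) -> int:
    """Map the model's predicted hire probability onto a 1-5 star score."""
    p = model.predict_proba(vectorizer.transform([resume_text]))[0, 1]
    return 1 + int(round(p * 4))

print(star_rating("software engineer built large scale systems aws"))
```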

The team began training the model on a decade's worth of resumes submitted to Amazon. There was a problem embedded in the training data that nobody fully appreciated at first: over the previous ten years, most Amazon employees in technical roles had been men. The tech industry skews male. Amazon skewed male. The AI was about to learn from that.

2014 · Project begins
2015 · Bias discovered
2017 · Quietly scrapped
2018 · Reuters exposes it

02 — The Discovery: The Model Hated Women's Resumes

By 2015, Amazon's engineers realized something was wrong. The model wasn't rating candidates in a gender-neutral way. It was actively penalizing resumes that included the word "women's" — as in "captain of women's chess team," "president of women's professional association," or "attended women's college."

[Illustration: a hypothetical resume as the model might score it. "Jane Candidate" (jane@email.com · linkedin.com/in/janecandidate) receives two of five stars. Penalized entries: B.A. Computer Science, Wellesley College (an all-women's college); President, Women in Technology Society; Captain, Women's Varsity Tennis Team. Not penalized: Software Engineer Intern, Google (two summers).]
The AI had learned from the historical data that men were hired more often, and concluded that signals associated with women predicted rejection. It was encoding the company's existing gender imbalance and presenting it as objective hiring criteria.
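The mechanism is easy to reproduce on synthetic data. If the historical labels skew against resumes that mention women's organizations, a linear model assigns negative weights to those tokens even though gender never appears as an explicit feature. A toy sketch, with invented data and no claim to resemble Amazon's model:

```python
# Toy demonstration: biased historical labels turn the token "women's" into a
# scoring penalty. Synthetic data; the point is the mechanism, not the product.
import random
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

random.seed(0)
resumes, hired = [], []
for _ in range(200):
    # Majority-group resumes: historically hired at a higher rate.
    resumes.append("software engineer python aws built large scale services")
    hired.append(1 if random.random() < 0.6 else 0)
for _ in range(200):
    # Identical skills, plus one extracurricular line; historically hired less.
    resumes.append("software engineer python aws built large scale services "
                   "captain women's chess team")
    hired.append(1 if random.random() < 0.2 else 0)

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(resumes), hired)

# Inspect what the model learned about each token, most penalized first.
for token, weight in sorted(zip(vec.get_feature_names_out(), clf.coef_[0]),
                            key=lambda item: item[1]):
    print(f"{token:10s} {weight:+.2f}")
# Tokens that appear only in the under-hired group ("women", "chess", "captain",
# "team") come out negative: the historical imbalance is now a scoring rule.
```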

03 — Deeper: The Patterns Kept Coming

The more Amazon's engineers investigated, the more bias they found. The AI had learned to prefer verbs commonly used by men in technical fields: words like "executed," "captured," and "managed" scored well. Softer language associated with collaboration scored lower. The model had reverse-engineered Amazon's existing gender gap and written it into its scoring system.

Word-Level Penalty: Resumes containing the word "women's" (in any context — clubs, sports, colleges) received automatic score deductions.
Institutional Penalty: Graduates of all-women's colleges scored systematically lower than equivalent candidates from co-ed institutions.
Language Preference: Action verbs statistically more common in male applicants ("executed," "captured") were rewarded; collaborative language was not.
Historical Encoding: The model had been trained on a decade of hiring decisions, and those decisions reflected the industry's existing gender imbalance. The AI learned that imbalance as a feature, not a bug.
"Amazon's computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men." โ€” Reuters, October 9, 2018

04 — The Shutdown: A Secret Burial

Amazon's engineers tried to fix the biases. They modified the model to remove explicit gender signals. New biases kept appearing — subtler proxies that hadn't been identified. By 2017, the engineering team concluded the model could not be trusted to make unbiased hiring decisions, regardless of how many patches they applied.
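Why the patches kept failing is easy to see in miniature: blocklist the explicit word and the signal simply migrates to correlated proxies, such as the name of a women's college or club, that nobody thought to flag. A sketch of the same kind of toy setup with the obvious token stripped out (again, invented data):

```python
# Why patching failed: remove the explicit signal and correlated proxies inherit
# the penalty. Synthetic data; a sketch of the failure mode, not Amazon's code.
import random
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

random.seed(0)
resumes, hired = [], []
for _ in range(200):
    resumes.append("software engineer python aws built services state university")
    hired.append(1 if random.random() < 0.6 else 0)
for _ in range(200):
    # Same skills; the college and club names act as proxies for the removed word.
    resumes.append("software engineer python aws built services wellesley college "
                   "women's coding club")
    hired.append(1 if random.random() < 0.2 else 0)

# The "fix": blocklist the explicit gendered token before training.
vec = CountVectorizer(stop_words=["women"])
clf = LogisticRegression().fit(vec.fit_transform(resumes), hired)

# Five most penalized tokens after the patch.
for token, weight in sorted(zip(vec.get_feature_names_out(), clf.coef_[0]),
                            key=lambda item: item[1])[:5]:
    print(f"{token:12s} {weight:+.2f}")
# "wellesley", "college", "coding", "club" still carry the penalty: the bias was
# never in the word itself, but in the historical outcomes.
```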

They quietly disbanded the project. The tool was removed. No public announcement was made. No press release. No disclosure to regulators. Candidates who had been screened by the system had no idea it had ever existed, or that their applications had been processed through an algorithm that penalized them for being women. Amazon said the tool was never actually used to make final hiring decisions — it had been used experimentally — but the three years of its operation remained a private internal matter until a Reuters investigation changed that.

05 — The Reveal: Reuters Reports It

On October 9, 2018, Reuters published: "Amazon scraps secret AI recruiting tool that showed bias against women." The story drew immediate global attention. Amazon confirmed the tool had been scrapped, said it was never used in actual hiring decisions, and emphasized that gender was not a factor in its current hiring processes.

The response from lawmakers, academics, and civil rights organizations was swift. Calls for mandatory auditing of AI hiring tools intensified. The story became foundational to the growing AI ethics and algorithmic accountability movement. It is still cited in AI bias research, hiring discrimination law, and HR technology governance discussions today.

2014 · Project begins: Amazon's machine learning team starts building an AI resume-screening tool, trained on 10 years of hiring data.
2015 · Bias discovered: Engineers find the model actively penalizing resumes containing "women's" and downgrading graduates of all-women's colleges.
2016 · Patch attempts fail: Engineers modify the model to remove explicit gender signals; new bias proxies keep emerging.
2017 · Quietly scrapped: Amazon disbands the project with no public announcement; the tool is removed from use.
Oct 2018 · Reuters publishes its investigation: The full story breaks globally; congressional scrutiny and calls for AI hiring audits begin immediately.

06 — Legacy: The Mirror Problem

The Amazon story illustrates something deeper than one company's mistake. Any AI system trained on historical data will learn historical patterns — including historical injustices. Amazon's AI didn't decide to discriminate. It found the statistical signal that had been present in the training data all along: men had been hired more. It assumed that was the goal. It optimized for it.

The problem of algorithmic bias cannot be solved simply by removing obvious signals like "women's." Bias is encoded in the structure of historical outcomes — in who was hired, promoted, paid, and retained — and any model trained on those outcomes will absorb and replicate those inequities unless explicitly prevented from doing so. Preventing it turns out to be extraordinarily hard.

Amazon's case became the founding case study for algorithmic hiring audits. Today, New York City's Local Law 144 requires bias audits of automated hiring tools used within the city, and the EU's AI Act treats hiring systems as high-risk, with comparable obligations. The hiring AI that nobody was supposed to know about has shaped how governments regulate AI employment tools worldwide.
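Those audits generally reduce to simple disparity metrics. As a rough illustration rather than any statute's exact formula, here is the kind of selection-rate comparison, in the spirit of the EEOC's four-fifths rule, that a hiring-tool bias audit reports; the counts are invented:

```python
# Minimal sketch of a disparity audit for a screening tool's outcomes, in the
# spirit of the four-fifths rule and NYC-style impact ratios. Invented counts.
def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (number advanced by the tool, number screened)."""
    rates = {g: advanced / screened for g, (advanced, screened) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

screened = {
    "men":   (480, 1200),   # hypothetical counts
    "women": (210, 1100),
}
for group, ratio in impact_ratios(screened).items():
    flag = "" if ratio >= 0.8 else "  <-- below the four-fifths threshold"
    print(f"{group:6s} impact ratio = {ratio:.2f}{flag}")
```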