Amazon Fired Its Resume-Reading AI for Sexism


Now abandoned, the project shows the risks of training artificial intelligence on biased data.

Algorithms are often pitched as superior to human judgment, taking the guesswork out of decisions ranging from driving a car to writing an email. But they are still programmed by humans and trained on data that humans create, which ties them to us, for better or worse. Amazon learned this the hard way when its AI recruitment software, built to review job applications, turned out to discriminate against female applicants.

In place since 2014, the software was built to find the top talent by digging through mountains of applications. The AI would rate applicants on a scale of 1 to 5 stars, like you might rate a product on Amazon.

“Everyone wanted this holy grail,” a person involved with the algorithm tells Reuters. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”

The model was trained on Amazon's hiring patterns for software developer jobs and other technical positions over the previous decade. On the surface this makes sense: Amazon has grown tremendously over those 10 years, a good sign that it hired the right people. In practice, though, the model only reproduced the sexist biases already in place. Most of the hires over that decade had, in fact, been men, and the algorithm began taking this into account.

It began to penalize resumes that included the word “women,” meaning phrases like “volunteered with Women Who Code” would be marked against the applicant. It specifically targeted two all-women’s colleges, although sources would not tell Reuters which ones.
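The mechanism is easy to reproduce in miniature. The sketch below is a toy logistic regression on invented data; Amazon's actual model, features, and training data are unpublished, so every detail here is a hypothetical illustration. It shows how a classifier trained on biased hiring outcomes ends up with a negative weight on the word "women," even when qualifications are identical:

```python
# Toy illustration of bias learned from historical hiring labels.
# All data and features here are invented; this is not Amazon's model.
import math
import random

VOCAB = ["python", "java", "leadership", "women"]

def featurize(resume_words):
    """Bag-of-words vector over a tiny vocabulary."""
    return [1.0 if w in resume_words else 0.0 for w in VOCAB]

# Hypothetical historical data: equally qualified resumes, but those
# containing "women" (e.g. "Women Who Code") were hired far less often.
random.seed(0)
data = []
for _ in range(500):
    words = {random.choice(["python", "java"]), "leadership"}
    mentions_women = random.random() < 0.5
    if mentions_women:
        words.add("women")
    hired = 0 if mentions_women and random.random() < 0.8 else 1
    data.append((featurize(words), hired))

# Plain logistic regression trained with stochastic gradient descent.
w = [0.0] * len(VOCAB)
b = 0.0
lr = 0.1
for _ in range(200):
    for x, y in data:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1.0 / (1.0 + math.exp(-z))
        grad = p - y
        w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
        b -= lr * grad

# The learned weight on "women" is negative: the model scores such
# resumes lower purely because the historical outcomes were biased.
print(w[VOCAB.index("women")] < 0)
```

Nothing in the code singles out the word "women"; the penalty emerges entirely from the biased labels, which is why editing out one symptom (as Amazon did) leaves the question of what other biases remain.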

The company was able to edit the algorithm to eliminate these two particular biases. But a larger question arose: what other biases was the AI reinforcing that weren't quite so obvious? There was no way to be sure. After several attempts to correct the program, Amazon executives lost interest, and in 2017 the algorithm was abandoned.

The incident shows that because humans are imperfect, their imperfections can get baked into the very algorithms built in hopes of avoiding such problems. AIs can do things we might never dream of doing ourselves, but we can never ignore a dangerous and unavoidable truth: they have to learn from us.

UPDATE, Oct 11: Amazon reached out to PopMech through a spokesperson with a statement: "This was never used by Amazon recruiters to evaluate candidates."

Source: Reuters
