Stanford researchers find that automated speech recognition is more likely to misinterpret black speakers

Image: digital illustration of sound waves emerging from a mobile device (credit: Getty Images)
Mar 23 2020
The technology that powers the nation’s leading automated speech recognition systems makes twice as many errors when interpreting words spoken by African Americans as when interpreting the same words spoken by whites, according to a new study by researchers at Stanford Engineering.

While the study focused exclusively on disparities between black and white Americans, similar problems could affect people who speak with regional and non-native-English accents, the researchers concluded.

If not addressed, this transcription imbalance could have serious consequences for people's careers and even lives. Many companies now screen job applicants with automated online interviews that employ speech recognition. Courts use the technology to help transcribe hearings. And for people who can't use their hands, speech recognition is crucial for accessing computers.

Study lead author Allison Koenecke is a 2016 EDGE-STEM Fellow. Co-author Zion Ariana Mengesha is a 2017 EDGE-SBEH Fellow and EDGE Mentor.

Read the full article