Computer scientists recently tested how well leading facial analysis services detect a person’s gender identity, and they were not happy with the results.
The researchers, from the University of Colorado Boulder, used various AI-based tools — including Amazon’s Rekognition, IBM’s Watson, Microsoft’s Azure and Clarifai — to analyze 2,450 photos of faces belonging to cisgender, transgender and otherwise gendered people. They gathered the images on Instagram by searching the hashtags #woman, #man, #transwoman, #transman, #agender, #genderqueer, and #nonbinary.
The hashtags were “crowdsourced” exclusively from “queer, trans, and/or non-binary individuals,” they noted.
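To see why such services struggle with non-binary identities, consider the shape of the data they return. The sketch below parses an illustrative, hand-written payload modeled on Amazon Rekognition’s documented DetectFaces response (a real call would go through the boto3 client with AWS credentials; the sample values here are made up):

```python
# Illustrative sample modeled on Rekognition's DetectFaces response shape.
# This payload is hand-written, not real API output.
sample_response = {
    "FaceDetails": [
        {
            "Gender": {"Value": "Female", "Confidence": 96.5},
            "AgeRange": {"Low": 25, "High": 33},
        }
    ]
}

def extract_gender_labels(response):
    """Return (value, confidence) pairs for each detected face.

    The structural limitation the study highlights is visible here:
    the Gender field only ever carries one of two values, "Male" or
    "Female", plus a confidence score. There is no representation a
    non-binary or genderqueer person could be matched to.
    """
    return [
        (face["Gender"]["Value"], face["Gender"]["Confidence"])
        for face in response.get("FaceDetails", [])
    ]

print(extract_gender_labels(sample_response))
```

However confident the score, the answer space itself is binary, which is why misclassification of trans and non-binary subjects is built in rather than incidental.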
According to the researchers, “prior identity scholarship” has established that gender is a “fluid social construct.” Other studies have shown that facial analysis suffers from both gender and racial bias.
The MIT Media Lab in January found that Amazon’s Rekognition tool misidentified darker-skinned women as men one-third of the time. The software also mislabeled white women as men at higher rates than it mislabeled white men as women.
In an update to its website in September, Amazon said that Rekognition should not be used to “categorize a person’s gender identity” and is better suited to analyzing sets of photos to answer questions like “the percentage of female users compared to male users on a social media platform,” Quartz reported.
Amazon, IBM, Microsoft and Clarifai did not immediately respond to Pluralist’s interview requests. Nor did assistant professor of information science Jed Brubaker, who oversaw the study.