Researchers Say Amazon’s Face Recognition Technology Shows Gender and Racial Biases

Amazon’s facial recognition technology, ‘Rekognition’, has been found to have serious flaws in identifying women, particularly those with darker skin. The technology is currently being used by law enforcement agencies in the US. The findings come from researchers at the Massachusetts Institute of Technology and the University of Toronto, two of the most reputed technological universities.

The technology has drawn criticism from civil rights and privacy advocates, who worry it could be misused to promote discrimination against minorities. Amazon’s investors have also cautioned the company, warning that the technology leaves it vulnerable to potential lawsuits.

The researchers found that darker-skinned women were labelled as men 31% of the time, while lighter-skinned women were misidentified only 7% of the time. Darker-skinned men had a very low error rate of 1%. The elevated error rate for dark-skinned women has alarmed researchers about the gender and racial biases ingrained in facial recognition technology, and the use of the technology by law enforcement authorities further adds to concerns about the severity of the issue.

Artificial intelligence often mimics the ingrained biases of its human creators. Such biases may seem less threatening in the lab, but when AI-based technologies are adopted to solve real-world problems, the dangers of misuse become imminent. The report, published in January by the group of researchers, warns about the dangers of wide-scale deployment of facial recognition technology that has inherited biases on such a scale, as well as its consequences for privacy and civil liberties.

Matt Wood, general manager of artificial intelligence at AWS, responded in a blog post that what the researchers tested was facial analysis, not facial recognition; facial recognition is the feature that matches individual faces in images and videos. He also pushed back against the MIT report, saying the findings were based on earlier versions of the technology and that many improvements have been made since then.

Amazon said that it has recommended a confidence threshold of 99% to law enforcement departments; the confidence threshold is the minimum level of confidence the system must report in a result before that result is used. Police departments, however, do not necessarily follow this recommendation: a sheriff from Oregon’s Washington County Sheriff’s Office said that they do not utilize confidence thresholds at all. With no governmental regulation on the use of AI, police have the liberty to use the technology as they want.
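To illustrate the idea, a confidence threshold simply discards any match the system is not sufficiently sure about. The sketch below is a simplified, hypothetical illustration of that filtering step; the match data and the `filter_matches` helper are invented for this example and are not Amazon's actual API:

```python
# Hypothetical face-match results as (candidate_id, confidence %) pairs.
# These values are invented purely for illustration.
matches = [
    ("candidate_a", 99.4),
    ("candidate_b", 87.2),
    ("candidate_c", 99.1),
    ("candidate_d", 72.5),
]

def filter_matches(matches, threshold=99.0):
    """Keep only matches at or above the confidence threshold."""
    return [(face_id, conf) for face_id, conf in matches if conf >= threshold]

# At the recommended 99% threshold, weak matches are discarded.
print(filter_matches(matches))  # [('candidate_a', 99.4), ('candidate_c', 99.1)]

# With no threshold applied, every candidate is returned, however weak.
print(filter_matches(matches, threshold=0.0))
```

The contrast between the two calls is the crux of the concern above: ignoring the threshold means low-confidence matches are treated the same as near-certain ones.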

Buolamwini, a researcher from the MIT Media Lab, said in a post on the lab’s Medium page that companies should check their facial recognition technologies for implicit biases before marketing them to clients. She added that they chose to study Amazon’s Rekognition because it is being marketed to law enforcement departments. Microsoft and IBM have recently made notable improvements to their facial recognition technologies. IBM added one million faces to its dataset, sourced from Creative Commons-licensed images on Flickr, and claims that the addition has diversified the dataset’s representation in order to minimize bias.

Author: Abhishek Budholiya

Abhishek Budholiya is a tech blogger and digital marketing pro who has contributed to numerous tech magazines. Currently, as a technology and digital branding consultant, he offers his analysis of the tech market research landscape. His forte is analysing the commercial viability of new breakthroughs, a trait you can see in his writing. When he is not ruminating about the tech world, he can be found playing table tennis or hanging out with his friends.