Bias and Misidentification in AI Facial Recognition

In our research lab, my group chose to investigate the ethical issues surrounding AI facial recognition technology. Through my research, I found two major problems with how this technology is used:

First, facial recognition technology is not equally accurate for all people. Research indicates that it performs better on white male faces than on the faces of women or darker-skinned individuals. A study by the National Institute of Standards and Technology (NIST) found that some programs have higher false positive rates for certain racial and ethnic groups. This means the system is more likely to incorrectly match a person to someone else's face in the database.
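To make the "false positive rate" idea concrete, here is a minimal sketch of how that metric could be computed per demographic group. The group labels and match results below are made-up illustrations, not data from the NIST study:

```python
# Hypothetical sketch: computing a per-group false positive rate,
# the kind of metric NIST reports. All data here is illustrative.

from collections import defaultdict

# Each record: (group, system_reported_match, is_true_match)
results = [
    ("group_a", True,  False),  # false positive
    ("group_a", False, False),  # correct rejection
    ("group_a", True,  True),   # correct match
    ("group_b", True,  False),  # false positive
    ("group_b", True,  False),  # false positive
    ("group_b", False, False),  # correct rejection
]

def false_positive_rates(records):
    """FPR per group = false positives / all truly non-matching pairs."""
    fp = defaultdict(int)
    negatives = defaultdict(int)
    for group, reported, truth in records:
        if not truth:               # a pair that should NOT match
            negatives[group] += 1
            if reported:            # but the system said it did
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives}

print(false_positive_rates(results))
# group_b's rate is higher than group_a's: the kind of disparity
# the NIST study measured across demographic groups.
```

A higher false positive rate for one group means people in that group are more often wrongly matched to a database photo, which connects directly to the wrongful-arrest problem below.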

The second problem is misidentification, which can lead to wrongful arrests. There are already reports of people being arrested because a facial recognition system incorrectly identified them, and the American Civil Liberties Union (ACLU) has documented multiple such cases.

The most surprising thing I learned was how widely this technology is already used, even though researchers have found major accuracy and fairness problems. This made me realize that AI systems like facial recognition can have serious real-world consequences when they are deployed in law enforcement and security.

  1. https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt (NIST)
  2. https://www.aclu.org/issues/privacy-technology/surveillance-technologies/face-recognition-technology (ACLU)
