The AI Bias We Carry in Our Pockets
The technology that millions of Americans encounter daily, from unlocking phones to airport security checkpoints, carries hidden biases that can destroy lives. At least eight people have been wrongfully arrested in the United States after being identified through facial recognition, and nearly every known case involves a Black person (American Civil Liberties Union, 2024).
The National Institute of Standards and Technology (NIST) studied 189 facial recognition algorithms from 99 developers and found that most exhibit bias: the algorithms falsely identified Black and Asian faces 10 to 100 times more often than white faces (Grother et al., 2019). They also falsely identified women more often than men, making Black women particularly vulnerable to algorithmic bias.
Facial Recognition Error Rates Higher for Women and Darker-Skinned Individuals
In a study at MIT, researcher Dr. Joy Buolamwini tested commercial facial analysis tools from IBM, Microsoft, and Face++ on gender classification, using images of individuals from the Pilot Parliaments Benchmark. The results revealed error rates as low as 0.8% for light-skinned men but 12.1% for dark-skinned men, 7.5% for light-skinned women, and 34.7% for dark-skinned women (Buolamwini & Gebru, 2018).
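To make those disparities concrete, here is a minimal sketch in Python of the kind of intersectional error-rate audit the study's methodology implies. The records below are invented stand-ins for a labeled benchmark like the Pilot Parliaments Benchmark; a real audit would run the classifier over thousands of images per subgroup.

```python
# Minimal sketch of an intersectional error-rate audit: group results
# by (skin tone, gender) and report per-subgroup error rates.
# The records are hypothetical; real audits use large labeled benchmarks.
from collections import defaultdict

# Each record: (skin_tone, gender, classifier_was_correct)
records = [
    ("lighter", "male", True), ("lighter", "male", True),
    ("lighter", "female", True), ("lighter", "female", False),
    ("darker", "male", True), ("darker", "male", False),
    ("darker", "female", False), ("darker", "female", False),
]

totals, errors = defaultdict(int), defaultdict(int)
for skin, gender, correct in records:
    totals[(skin, gender)] += 1
    if not correct:
        errors[(skin, gender)] += 1

# An aggregate accuracy score would hide exactly this breakdown.
for key in sorted(totals):
    print(f"{key[0]:>7} {key[1]:>6}: error rate {errors[key] / totals[key]:.0%}")
```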
Buolamwini's study used a balanced set of images across gender, skin tone, and ethnicity, yet the accuracy gap remained dramatic. That gap has rapidly become a crisis unfolding in real time across the American criminal justice system.
When AI Gets It Wrong and You’re Arrested
Robert Williams thought it was a prank when his wife received a call saying he needed to turn himself in to police. But when the Michigan resident pulled into his driveway in January 2020, a Detroit police officer was waiting and placed him under arrest for allegedly stealing thousands of dollars' worth of Shinola watches (NPR, 2020).
Williams was detained for 30 hours based on facial recognition software that incorrectly matched grainy surveillance footage to his expired driver’s license photo. Though charges were eventually dropped due to insufficient evidence, Williams’ case became the first documented wrongful arrest in the U.S. due to facial recognition technology (American Civil Liberties Union, 2024).
“I would say that it doesn’t just affect the person who is arrested. I have a whole family, and they were also affected by this,” Williams explains. “Four years ago, I would have thought that I was on board with the use of facial recognition technology. But now I think they have a long way to go because there are so many ways that it could go wrong” (University of Michigan Law School, 2024).
The pattern repeats with disturbing frequency. Consider Randal Reid, a Georgia resident who had never set foot in Louisiana but was wrongfully arrested based on a faulty facial recognition match from that state and held in jail for nearly a week (American Civil Liberties Union, 2024). There are at least seven confirmed cases of misidentification due to facial recognition technology, six of them involving Black people: Nijeer Parks, Porcha Woodruff, Michael Oliver, Randal Reid, Alonzo Sawyer, and Robert Williams (Innocence Project, 2025).

The Science Behind the Bias
Understanding why facial recognition fails certain groups requires examining how these systems learn. Modern facial recognition relies on neural networks that extract the features the computer considers most useful for distinguishing faces. Racial bias enters most readily through the selection of images used to train the algorithm; a racially unbiased system would require equal racial representation within the training dataset (Harvard Journal of Law & Technology, 2020).
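To see why the training images matter so much, it helps to know what a matcher actually compares: the network reduces each face to a numeric feature vector (an "embedding"), and two faces are declared a match when their vectors are sufficiently similar. The sketch below illustrates the comparison step with invented four-dimensional vectors and a hypothetical threshold; production systems use embeddings with hundreds of learned dimensions.

```python
# Minimal sketch of the comparison step in a face matcher. The vectors
# and threshold are invented for illustration; which features the
# embedding encodes is determined entirely by the training data.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

probe = [0.12, 0.85, 0.31, 0.44]      # e.g., from surveillance footage
enrolled = [0.10, 0.80, 0.35, 0.40]   # e.g., from a license photo

MATCH_THRESHOLD = 0.95  # hypothetical operating point
score = cosine_similarity(probe, enrolled)
print(f"similarity {score:.3f}: {'match' if score >= MATCH_THRESHOLD else 'no match'}")
```

If the embedding was learned mostly from lighter-skinned faces, it encodes less discriminative detail for darker-skinned faces, so distinct people can land close together in the vector space and clear the threshold.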
The popular open-source facial image dataset "Labeled Faces in the Wild" is 83.5% white. Even the NIST-constructed IJB-A dataset, specifically created for geographic diversity, is 79.6% lighter-skinned faces (Harvard Journal of Law & Technology, 2020). When algorithms learn primarily from white faces, they struggle to accurately identify people of color.
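Skews like these are straightforward to surface before training ever begins. The following sketch assumes a hypothetical metadata file, `dataset_metadata.csv` with a `skin_tone` column (both invented names, not part of any real dataset), and simply tallies group proportions:

```python
# Minimal sketch of a training-set demographic audit. The CSV name and
# column are hypothetical; the point is that measuring representation
# is cheap compared with the harm of skipping the check.
import csv
from collections import Counter

counts = Counter()
with open("dataset_metadata.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts[row["skin_tone"]] += 1

total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group}: {n} images ({n / total:.1%})")
```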
“Devices have been created in ignorance of the population they are going to interact with, intentionally or not,” explains Ifeoma Nwogu, associate professor of computer science and engineering at the University at Buffalo School of Engineering and Applied Sciences (University at Buffalo, 2024).
Image quality adds another layer of bias. Problems like underexposure degrade image quality far more severely for darker skin tones. Camera technology has historically been calibrated for light skin, and many image quality issues persist primarily for those with darker skin (Harvard Journal of Law & Technology, 2020).
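One partial safeguard is to screen probe images for underexposure before they reach the matcher at all. Below is a minimal sketch using the Pillow imaging library; the brightness cutoff is an illustrative assumption, not an established standard:

```python
# Minimal sketch of an underexposure check with Pillow (pip install Pillow).
# Converts a face crop to grayscale and flags it when mean luminance
# falls below a cutoff; the cutoff value is a hypothetical choice.
from PIL import Image, ImageStat

UNDEREXPOSED_CUTOFF = 60  # on a 0-255 scale; illustrative only

def is_underexposed(path):
    gray = Image.open(path).convert("L")           # 8-bit grayscale
    mean_luminance = ImageStat.Stat(gray).mean[0]  # average pixel value
    return mean_luminance < UNDEREXPOSED_CUTOFF

# Usage: reject or re-capture low-quality probes instead of letting the
# matcher silently degrade on them.
# print(is_underexposed("probe_face.jpg"))
```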
Conflicting Claims of Facial Recognition Accuracy
The facial recognition industry disputes claims of widespread bias, pointing to recent improvements in the technology. According to evaluation data from January 22, 2024, each of the top 100 algorithms is over 99.5% accurate across Black male, white male, Black female, and white female demographics. For the top 60 algorithms, accuracy for the best- and worst-performing demographics varies only between 99.7% and 99.85% (Security Industry Association, 2024).
The Department of Homeland Security reports that for some CBP use cases, differences in measured face-matching performance based on skin tone, self-reported race, and age were very small, ranging from less than 1% to 2-3%, with the lowest success rate for any demographic group at 97% (Department of Homeland Security, 2025).
However, critics argue these industry studies don't reflect real-world conditions, and NIST positions its own testing as a way to ground that debate in evidence. "We intend for this to be able to inform meaningful discussions and to provide empirical data to decision makers, policy makers and end users to know the accuracy, usefulness, capabilities [and] limitations of the technology," says Craig Watson, an Image Group manager at NIST (Scientific American, 2024).
The disconnect between laboratory testing and street-level policing remains stark. Police departments in 15 states provided records documenting their use of facial recognition in more than 1,000 criminal investigations over the past four years, and authorities routinely failed to inform defendants about their use of the software (The Washington Post, 2024).
Law Enforcement’s Dangerous Overreliance on Faulty Facial Recognition Software
The Post reviewed documents from 23 police departments and found that 15 departments spanning 12 states arrested suspects identified through AI matches without any independent evidence connecting them to the crime, contradicting their own internal policies requiring officers to corroborate all leads found through AI (The Washington Post, 2025).
Some law enforcement officers appeared to abandon traditional policing standards and treat software suggestions as facts. One police report referred to an uncorroborated AI result as a “100% match.” Another said police used the software to “immediately and unquestionably” identify a suspected thief.
“When police move directly from a facial recognition result to a witness identification, those steps often exacerbate and compound the unreliability of face recognition searches,” notes the American Civil Liberties Union in a recent federal submission (American Civil Liberties Union, 2024).
“Once the facial recognition software told them I was the suspect, it poisoned the investigation,” says Robert Williams. “This technology is racially biased and unreliable and should be prohibited” (CalMatters, 2024).
Project NOLA: Live Facial Recognition CCTV at Scale
New Orleans has become the first known American city to rely on live facial recognition cameras at scale, through the "Project NOLA" private camera network. These cameras scan every face that passes by and send real-time alerts directly to officers' phones when they detect a purported match to someone on a secretive, privately maintained watchlist (American Civil Liberties Union, 2024).
The expansion of this technology occurs with minimal oversight. Officers often obscured their reliance on facial recognition in public-facing reports, saying they identified suspects “through investigative means” or that a human source made the initial identification. The Coral Springs Police Department in South Florida instructs officers not to reveal the use of facial recognition in written reports (The Washington Post, 2024).
Systemic Impact on Communities of Color
African American males are disproportionately represented in the mugshot databases many law enforcement facial recognition systems use for matching. Even if an algorithm shows no difference in accuracy between demographics, its use can still produce a disparate impact if certain groups are over-represented in those databases (Center for Strategic and International Studies, 2020).
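A back-of-the-envelope calculation shows why. In the sketch below every number is invented for illustration, and both groups face an identical per-comparison false match rate; the group that dominates the database still absorbs most of the expected false matches:

```python
# Illustrative base-rate sketch: equal error rates plus unequal database
# representation still yields unequal harm. All numbers are invented.
per_comparison_fmr = 1e-6   # same false match rate for every enrolled image
searches = 10_000           # hypothetical probe searches per year

enrolled = {"group_a": 700_000, "group_b": 300_000}  # database makeup

# Expected false matches landing on each group across all searches.
expected = {g: searches * per_comparison_fmr * n for g, n in enrolled.items()}
total = sum(expected.values())

for group, e in expected.items():
    print(f"{group}: ~{e:,.0f} expected false matches ({e / total:.0%})")
# group_a absorbs ~70% of false matches -- tracking database composition,
# not any difference in algorithmic accuracy.
```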
“Studies show that facial recognition is least reliable for people of color, women, and nonbinary individuals. And that can be life-threatening when the technology is in the hands of law enforcement,” warns the ACLU of Minnesota (ACLU of Minnesota, 2024).
Research shows that facial recognition software is significantly less reliable for people of color, especially Black and Asian people, because algorithms struggle to distinguish the facial features of people with darker skin tones. Disproportionate arrests of Black people by law enforcement agencies using facial recognition may be the result of "the lack of Black faces in the algorithms' training data sets, a belief that these programs are infallible and a tendency of officers' own biases to magnify these issues" (Innocence Project, 2025).
Solutions to Eliminate Bias in Facial Recognition Systems
Technical improvements to facial recognition algorithms focus primarily on diversifying training data. The most critical factor in reducing bias is ensuring algorithms learn from datasets that adequately represent all demographic groups. If an algorithm is trained on a dataset containing very few examples of a particular demographic group, the resulting model will be worse at accurately recognizing members of that group in real-world deployments (Center for Strategic and International Studies, 2020).
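Where imbalance cannot simply be collected away, practitioners also reweight the data they already have so that each group contributes equally to the training objective. The sketch below shows inverse-frequency weighting, with hypothetical group labels mirroring the 83.5% skew cited earlier:

```python
# Minimal sketch of inverse-frequency sample weighting, one common way
# to keep an under-represented group from being drowned out in training.
# Labels are hypothetical; real pipelines feed these weights to a loss
# function or a batch sampler.
from collections import Counter

groups = ["lighter"] * 835 + ["darker"] * 165  # mirrors an 83.5% skew

counts = Counter(groups)
total, n_groups = len(groups), len(counts)

# Weight each sample so every group's total influence is equal.
weights = {g: total / (n_groups * c) for g, c in counts.items()}
for g, w in weights.items():
    print(f"{g}: per-sample weight {w:.2f}")
# "darker" samples get roughly 5x the weight of "lighter" ones here.
```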
Regulatory frameworks are beginning to address these technical shortcomings. The EU has proposed that high-risk AI systems like facial recognition meet requirements that training data be "sufficiently broad" and reflect "all relevant dimensions of gender, ethnicity and other possible grounds of prohibited discrimination" (Center for Strategic and International Studies, 2020).
However, policy solutions face implementation challenges. While more than a dozen cities including Minneapolis and Boston have banned facial recognition technology outright (ACLU of Minnesota, 2024), enforcement remains inconsistent. Some jurisdictions with restrictions still allow workarounds, such as police departments requesting facial recognition searches from neighboring cities that lack similar prohibitions (CalMatters, 2024). These gaps highlight the need for comprehensive, coordinated policy approaches rather than patchwork local regulations.
Key Takeaways
As this technology becomes more pervasive, the stakes grow higher. The choice facing policymakers, law enforcement, and technology companies is clear: address these biases now, or watch algorithmic discrimination become institutionalized across society.
1. Technical bias is measurable and persistent. Facial recognition algorithms consistently show higher error rates for darker-skinned individuals and women, with documented error rates of 34.7% for dark-skinned women compared with 0.8% for light-skinned men.
2. Law enforcement overreliance creates systemic problems. Police departments across multiple states have made arrests based solely on algorithmic matches without corroborating evidence, leading to at least eight documented wrongful arrests.
3. Current oversight is insufficient. Most facial recognition systems operate without independent auditing, transparency requirements, or accountability mechanisms, allowing biased outcomes to persist unchecked.
References
- Fergus, R. (2024, February 29). Biased Technology: The Automated Discrimination of Facial Recognition. ACLU of Minnesota.
- Wessler, N. F. (2024, April 30). Police Say a Simple Warning Will Prevent Face Recognition Wrongful Arrests. That’s Just Not True. American Civil Liberties Union.
- American Civil Liberties Union. (2024, July 2). Williams v. City of Detroit.
- Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81, 77–91.
- CalMatters. (2024, June 24). These wrongly arrested Black men say a California bill would let police misuse face recognition.
- Center for Strategic and International Studies. (2020). The Problem of Bias in Facial Recognition.
- Department of Homeland Security. (2025, January 16). 2024 Update on DHS’s Use of Face Recognition & Face Capture Technologies.
- Grother, P., Ngan, M., & Hanaoka, K. (2019, December). Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects (NISTIR 8280). National Institute of Standards and Technology.
- Harvard Journal of Law & Technology. (2020, November 3). Why Racial Bias is Prevalent in Facial Recognition Technology.
- Innocence Project. (2025, January 21). AI and The Risk of Wrongful Convictions in the U.S.
- NPR. (2020, June 24). ‘The Computer Got It Wrong’: How Facial Recognition Led To False Arrest Of Black Man.
- PhotoAiD. (2025, March 9). 37+ Facial Recognition Statistics for 2025.
- Scientific American. (2024, February 20). How NIST Tested Facial-Recognition Algorithms for Racial Bias.
- Security Industry Association. (2024, March 18). What Science Really Says About Facial Recognition Accuracy and Bias Concerns.
- The Washington Post. (2024, October 6). Police seldom disclose use of facial recognition despite false arrests.
- The Washington Post. (2025, January 13). Arrested by AI: Police ignore standards after facial recognition matches.
- University at Buffalo. (2024, February 21). UB computer science professor weighs in on bias in facial recognition software.
- University of Michigan Law School. (2024). Flawed Facial Recognition Technology Leads to Wrongful Arrest and Historic Settlement.