Photo by Microsoft Copilot on Unsplash

In January 2020, a man named Robert Williams spent 30 hours in a Detroit jail cell for a crime he didn't commit. The reason? A facial recognition algorithm flagged him as a suspect in a shoplifting case. The system was wrong. Williams was released, but not before his mugshot was taken, his fingerprints recorded, and his reputation damaged. He became a name on a growing list of people harmed by AI systems that most citizens don't even know are scanning their faces in public.

This isn't a theoretical problem anymore. Facial recognition technology has advanced to the point where it can identify people with 99.97% accuracy under ideal conditions—better than most human observers. Yet here's the paradox: the better these systems get, the more urgent our need to discuss when, where, and whether they should be used at all.

The Accuracy Explosion That Caught Everyone Off Guard

The jump in facial recognition accuracy over the past decade has been staggering. In 2014, the National Institute of Standards and Technology (NIST) tested leading algorithms and found error rates around 4%. By 2020, the same benchmark showed error rates of just 0.08% for the best-performing systems. That's roughly a 50-fold improvement in six years.

What changed? Deep learning. Specifically, convolutional neural networks trained on massive datasets containing millions of facial images. When you feed an algorithm millions of labeled faces, it starts recognizing patterns that humans can't consciously articulate: the precise angle of an eye socket, the subtle contours of cheekbones, the way light reflects across specific skin textures.
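
To make the matching step concrete, here's a minimal sketch in Python. It assumes a CNN has already mapped each aligned face photo to a fixed-length embedding vector, the standard approach in FaceNet- and ArcFace-style systems; the random vectors, the 512 dimensions, and the 0.6 threshold are illustrative stand-ins, not any vendor's actual values.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# In a real pipeline, a CNN maps each aligned face crop to a vector like
# these. Random vectors stand in for model output here.
rng = np.random.default_rng(0)
embedding_probe = rng.normal(size=512)      # e.g., a surveillance still
embedding_candidate = rng.normal(size=512)  # e.g., a driver's license photo

MATCH_THRESHOLD = 0.6  # illustrative; real systems tune this on benchmark data
score = cosine_similarity(embedding_probe, embedding_candidate)
print(f"similarity={score:.3f}  match={score >= MATCH_THRESHOLD}")
```

Two photos of the same person land close together in this embedding space; two different people land far apart. Everything downstream, including who gets arrested, hinges on where that threshold is drawn.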

Companies like Clearview AI scraped billions of photos from social media platforms, dating apps, and mugshot databases to train their systems. Their facial recognition database now contains over 20 billion images. When law enforcement runs a photo through their API, they get instant matches. Fast. Accurate. Terrifying, if you care about consent.
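
Clearview's actual API isn't public, so the sketch below shows only the generic shape of the one-to-many search any such system performs: embed the probe photo, score it against every gallery embedding, and return the top-ranked candidates. Gallery size and dimensions are toy values.

```python
import numpy as np

def search_gallery(probe: np.ndarray, gallery: np.ndarray, top_k: int = 3):
    """Rank gallery embeddings by cosine similarity to a probe embedding."""
    probe_n = probe / np.linalg.norm(probe)
    gallery_n = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = gallery_n @ probe_n                 # one dot product per gallery face
    ranked = np.argsort(scores)[::-1][:top_k]    # best matches first
    return [(int(i), float(scores[i])) for i in ranked]

rng = np.random.default_rng(1)
gallery = rng.normal(size=(10_000, 512))  # stand-in for billions of scraped faces
probe = rng.normal(size=512)

for idx, score in search_gallery(probe, gallery):
    print(f"candidate {idx}: similarity {score:.3f}")
```

At billions of images, brute-force scoring like this is far too slow; production systems rely on approximate nearest-neighbor indexes to return candidates in milliseconds. Note what the search returns either way: a ranked list of candidates, never a guaranteed identity.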

Accuracy Doesn't Equal Fairness (And Here's Why)

Here's where the story gets complicated. While facial recognition systems have reached impressive overall accuracy rates, a 2019 NIST study of demographic effects revealed something troubling: these systems are not equally accurate across all faces. Algorithms performed best on men with lighter skin tones and significantly worse on women with darker skin tones; in some cases, false positive rates were 10 to 100 times higher.

This isn't because the algorithms are intentionally biased. It's because the training data was biased. Most facial recognition datasets overrepresent people with lighter skin tones. When you train a system predominantly on one type of face, it becomes exceptionally good at recognizing that face and significantly worse at recognizing others. It's like training a dog to recognize the mailman and then being surprised it doesn't recognize the garbage collector.
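
Detecting this failure mode requires disaggregated evaluation: scoring the same system separately for each demographic group rather than reporting one blended accuracy number. A toy sketch, using fabricated results purely for illustration:

```python
from collections import defaultdict

# (group, system_was_correct) pairs -- fabricated toy data, not real measurements
results = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("darker-skinned women", True), ("darker-skinned women", False),
    ("darker-skinned women", False), ("darker-skinned women", True),
]

tallies = defaultdict(lambda: [0, 0])  # group -> [errors, trials]
for group, correct in results:
    tallies[group][0] += not correct
    tallies[group][1] += 1

for group, (errors, trials) in tallies.items():
    print(f"{group}: error rate {errors / trials:.0%} ({errors}/{trials})")
```

A single headline accuracy figure would average these groups together and hide exactly the gap that matters.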

In 2023, a Detroit woman named Porcha Woodruff, eight months pregnant at the time, was arrested for a carjacking she had nothing to do with; the system was nearly certain it had found its suspect. It hadn't. Joy Buolamwini, an MIT researcher who founded the Algorithmic Justice League, has documented case after case of these failures, particularly affecting women of color. Her Gender Shades study demonstrated that some commercial systems had error rates exceeding 34% for darker-skinned women, compared to less than 1% for lighter-skinned men.

The uncomfortable truth is this: as these systems become more accurate overall, they're simultaneously becoming more dangerously inaccurate for specific populations.

Who Gets to Decide When Your Face Becomes Evidence?

Let's say a facial recognition system correctly identifies you at a protest, a political rally, or a clinic for reproductive health services. The accuracy is perfect. Does that make it right? That's the real question nobody in Congress seems eager to answer.

The FBI can run facial recognition searches against databases containing over 641 million photographs, according to a 2019 Government Accountability Office report. Most of those photos come from driver's license and state ID databases: photos taken for the purpose of getting a license, not for surveillance. Yet they're being used as a general-purpose identification system with minimal oversight.
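
The scale is what turns small error rates into real harm. A back-of-the-envelope calculation, reading the 99.97% headline accuracy from earlier as a per-comparison false match rate (a simplification, since real systems rank and threshold candidates):

```python
gallery_size = 641_000_000   # photos the FBI can search, per the 2019 GAO figure
false_match_rate = 0.0003    # 99.97% accuracy read as a per-comparison error rate

expected_false_matches = gallery_size * false_match_rate
print(f"expected false candidates per search: {expected_false_matches:,.0f}")
# ~192,300. Ranking and thresholds shrink the list investigators actually see,
# but every name that survives can be a false lead, as Robert Williams was.
```

Even if filtering discards 99.9% of those candidates, hundreds of wrong faces remain per search.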

Only a handful of states have passed regulations around facial recognition. California imposed a temporary moratorium on its use in police body cameras. San Francisco and several other cities prohibited use by city agencies. But these are exceptions. Federal law? Virtually nonexistent. The technology has outpaced regulation by years.

This is where accuracy becomes almost irrelevant. AI systems have their own failure modes that we're still learning to understand, but facial recognition presents a unique problem: even when it works perfectly, the question of whether it should be used remains unanswered.

The Future: More Accurate, or More Equitable?

The conversation shouldn't be about whether facial recognition will get better. It will. The question is whether we'll decide it's acceptable to use it before we've fixed the fairness problem. And honestly? The incentives are misaligned.

Governments love facial recognition because it makes their jobs easier. Law enforcement can move faster. Border agents can process travelers more quickly. Tech companies love it because it's profitable. Social media platforms have used it for automatic photo tagging. Retailers can track customer behavior.

Meanwhile, the people being identified—tracked, flagged, arrested—didn't consent to any of it. We didn't opt into this system. It happened while we were busy with our lives.

Robert Williams eventually got an apology and a settlement. But the system that misidentified him? It's still running. It's still scanning faces. And depending on the color of your skin and your proximity to certain places, it's making decisions about your life without your knowledge or permission.

Maybe that's the real issue with facial recognition. It's not that it's inaccurate. It's that it's accurate enough to matter, while remaining simultaneously unjust enough to harm innocent people. That's a combination technology shouldn't enable without democratic consensus first.