Cops Use AI to Arrest an Innocent Man & It Cost Them MILLIONS
When the Algorithm Says You’re Guilty: The Growing Threat of AI Facial Recognition Arrests
“I am not Mike. I don’t know anything about Mike.”
That was Jason Killinger’s plea as he stood handcuffed inside the Peppermill Casino in Reno, Nevada, in September 2023. Moments earlier, the casino’s facial recognition system had flagged him as a “100% match” for a man previously trespassed from the property. Security detained him. Police arrived. And despite Killinger presenting a valid government-issued ID proving his identity, officers chose to trust the algorithm.
Within hours, Killinger was in jail.
His case is not an isolated incident. It represents a growing national controversy over the use of artificial intelligence in policing—and the dangerous consequences when technology is treated as infallible.
The Illusion of “100% Match”
At the center of the arrest was the casino’s facial recognition software. According to security staff, the system declared Killinger a “100% match” for a prior trespasser. But experts in biometric technology are clear: facial recognition systems do not produce certainty. They generate similarity scores—probabilities based on mathematical comparisons between images.
When companies describe a result as “100%,” it is often marketing language, not scientific fact.
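In practice, a "match" is simply a similarity score compared against a cutoff. The sketch below illustrates that logic; the score values and threshold are invented for illustration and do not reflect any particular vendor's system.

```python
# Hypothetical operating point -- real systems tune this trade-off
# between false matches and missed matches.
MATCH_THRESHOLD = 0.85

def classify(similarity_score: float) -> str:
    # A "match" is just a score above a cutoff, never a certainty.
    return "match" if similarity_score >= MATCH_THRESHOLD else "no match"

# Two different people can still score above the threshold (a false
# positive), and a display that rounds any above-threshold result up
# to "100%" hides that uncertainty entirely.
for score in (0.99, 0.87, 0.60):
    print(score, "->", classify(score))
```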

Facial recognition works by comparing one image against a database of many faces. The larger the database, the greater the chance of false positives—particularly when lighting conditions, camera angles, or image quality vary. Even slight distortions can alter how an algorithm evaluates facial features such as nose width, jawline contours, or eye spacing.
Unlike unlocking a smartphone—where one face is compared to one stored template—watchlist searches compare one face to potentially millions. The risk of error multiplies.
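The arithmetic behind that multiplication is simple to sketch. Assuming a hypothetical per-comparison false match rate (the figure below is an example, not a measured value for any real system), the chance of at least one false hit grows rapidly with the size of the watchlist:

```python
def false_alarm_probability(fmr: float, gallery_size: int) -> float:
    """Probability that at least one of `gallery_size` comparisons
    against non-matching faces produces a false match, given a
    per-comparison false match rate `fmr`."""
    return 1 - (1 - fmr) ** gallery_size

# One-to-one (phone unlock): a single comparison.
print(f"1:1 -> {false_alarm_probability(1e-5, 1):.5%}")

# One-to-many (watchlist of 100,000 faces): the same per-comparison
# error rate compounds across the whole gallery, pushing the chance
# of a false hit above 60%.
print(f"1:N -> {false_alarm_probability(1e-5, 100_000):.2%}")
```

Even a system that is wrong only once in 100,000 comparisons becomes more likely than not to flag an innocent person when searched against a large enough gallery.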
Yet in Killinger’s case, the police appeared to treat the software’s output as conclusive proof.
When Technology Overrides Common Sense
Body camera footage shows officers reviewing Killinger’s driver’s license. They even contacted dispatch to confirm both IDs associated with his name were legitimate and registered through the Nevada Department of Motor Vehicles. Instead of concluding the software was wrong, the officer speculated that Killinger might have a “hookup at the DMV” and somehow obtained two valid licenses under different names.
Probable cause under Illinois v. Gates requires officers to evaluate the “totality of the circumstances.” That means looking at all evidence—not just one factor. But in this case, the AI alert became the starting point and the finish line.
When Killinger asked whether a pay stub from his employer would help prove his identity, the officer declined. The only acceptable method, he said, would be fingerprinting at the jail.
In other words, despite multiple valid IDs and corroborating records, Killinger would have to lose his freedom before he could clear his name.
A Constitutional Gray Area
The Fourth Amendment protects individuals from unreasonable searches and seizures by the government. In Carpenter v. United States, the U.S. Supreme Court ruled that police access to historical cell-site location records constitutes a search requiring a warrant; in United States v. Jones, it reached a similar conclusion about long-term GPS tracking.
But what happens when surveillance begins with a private entity?
Casinos are private property. They can use facial recognition systems without violating the Constitution because the Fourth Amendment restricts government action—not private businesses. However, once police rely on information provided by a private system to arrest someone, constitutional scrutiny returns.
The issue is whether officers had probable cause independent of the AI alert.
Critics argue that relying almost exclusively on algorithmic output—while ignoring contrary evidence—violates due process. Due process requires notice and an opportunity to contest accusations. But how can someone challenge a facial recognition match when the system’s inner workings, error rates, and databases remain undisclosed?
The opacity of AI systems makes meaningful defense nearly impossible at the moment of arrest.
A Pattern of False Arrests
Killinger’s ordeal echoes other high-profile cases.
In Detroit, Robert Williams was arrested after facial recognition software linked him to a shoplifting incident based on a blurry surveillance image. He was detained in front of his family before police later admitted the error.
In New York City, Travis Williams reportedly spent two days in jail after AI surveillance misidentified him as a suspect—despite significant physical differences between him and the actual perpetrator.
These cases reveal a troubling pattern: once software flags someone, every ordinary behavior becomes suspicious. Walking away is interpreted as fleeing. Protesting innocence is labeled evasiveness. Even presenting valid identification can be reframed as evidence of fraud.
Rather than questioning the system, officers sometimes attempt to reshape the facts to fit it.
The Secrecy Problem
The problem extends beyond casinos.
Reports revealed that for nearly two years, the New Orleans Police Department used facial recognition technology through a private camera network—even after local lawmakers had imposed strict limits on its use. Alerts were sent directly to officers without judicial oversight. Defense attorneys were not always informed when facial recognition played a role in investigations.
Such practices raise serious transparency concerns. When AI tools are used without disclosure, defendants cannot challenge the reliability of the evidence against them.
The danger is not merely wrongful arrest—it is a justice system quietly reshaped by invisible algorithms.
The Human Cost
Killinger spent approximately 24 hours in custody. He later reported sprained shoulders and bruised wrists from being handcuffed. Although he was eventually cleared, the arrest remains on his record and cannot be expunged until the statute of limitations expires.
For someone who had never been arrested before, the experience was traumatic.
Beyond the physical injuries lies reputational damage. Arrest records—even when charges are dropped—can affect employment, housing, and credit opportunities. The stigma lingers long after the mistake is acknowledged.
He has since filed a civil lawsuit alleging wrongful arrest and violation of his civil rights.
What Can Be Done?
Legal scholars and civil liberties advocates argue for immediate reforms:
No Arrest Based Solely on Facial Recognition
AI matches should be treated as investigative leads—not proof.
Mandatory Disclosure
Prosecutors should be required to disclose when facial recognition technology is used.
Independent Verification
Officers must corroborate AI results with traditional evidence before making an arrest.
Transparency and Auditing
Agencies should publish error rates and subject systems to independent testing.
Supreme Court Review
The U.S. Supreme Court has yet to directly address facial recognition under the Fourth Amendment. A clear ruling could define constitutional boundaries.
Individuals can also take precautionary steps, such as limiting publicly accessible facial images online, opting out of biometric programs when possible, and knowing their right to remain silent if detained.
A Warning for the Future
Artificial intelligence is advancing rapidly. Law enforcement agencies increasingly rely on automated tools—from facial recognition to predictive analytics. While these technologies promise efficiency, they also risk automating mistakes.
The central issue is not whether AI can be useful. It is whether human judgment is being replaced instead of supplemented.
Jason Killinger’s case serves as a warning: when an algorithm becomes more credible than a government-issued ID, the balance between technology and liberty has shifted too far.
The Constitution was designed to restrain government power. As AI continues to expand that power, courts and lawmakers must decide whether our civil liberties will keep pace—or whether more innocent people will find themselves saying, “That’s not me,” from the back of a patrol car.