Angela Lipps, a 50-year-old Tennessee grandmother, had never set foot in North Dakota. She had never been on an airplane before her extradition. In July 2025, she was arrested in Tennessee on a warrant from Fargo, more than 1,000 miles away, on bank fraud charges tied to a state she said she had never visited.[s]
Lipps spent more than five months in jail, first in Tennessee before extradition and then in North Dakota. Her lawyers said bank records showed she was in Tennessee during the alleged frauds and that officers had not investigated whether she traveled to North Dakota. The case began with a lead from Clearview AI, a facial recognition startup whose database holds billions of photos scraped from the internet, and her attorneys described the detention as what they believe is the longest AI-related wrongful detention case in U.S. history.[s][s]
Lipps is one of at least 14 Americans documented to have been wrongfully arrested after police relied on erroneous facial recognition results.[s] The pattern is consistent: an algorithm generates a lead, officers skip basic verification, and a person can lose months or years of their life. The technology creates what might be called a “digital twin,” a statistical profile that follows suspects into courtrooms and parole hearings, often replacing the actual investigation that due process requires.
Predictive Policing Algorithms Replace Investigation
The promise of predictive policing algorithms was efficiency: computers processing vast datasets to identify patterns humans would miss. The reality has been something closer to automated accusation. Police departments across the country have adopted AI-driven data fusion tools to compile and analyze police and surveillance data, while algorithmic systems also influence court decisions around bail, sentencing, and parole.[s][s]
The Brennan Center for Justice has documented how modern data fusion platforms differ fundamentally from traditional police databases. Where older systems provided information for officers to evaluate, today’s AI tools “automatically generate conclusions for police, supplying those determinations without context or explanation.”[s] Platforms like Cognyte NEXYTE, C3.ai, Peregrine, and Flock Safety Nova can aggregate arrest records, license plate readers, social media, gunshot detection, and facial recognition into unified profiles. Wrongful-arrest cases show officers can treat algorithmic output as probable cause rather than a lead requiring verification.
This represents a fundamental shift in how guilt is established. Traditional policing required officers to develop a theory of the case, gather evidence, and build probable cause through investigation. Predictive policing algorithms invert this process: the system generates a conclusion, and the investigation, if it happens at all, works backward to justify it.
The Bias Built Into the Machine
The problems with these systems extend beyond procedural shortcuts. The data feeding predictive policing algorithms reflects decades of discriminatory enforcement. A 2016 ProPublica investigation found that COMPAS, a widely used risk assessment tool, falsely flagged Black defendants as future criminals at almost twice the rate of white defendants.[s]
The Harvard Human Rights Journal, in an award-winning 2025 analysis, explained why this bias is structural rather than incidental. Actuarial risk assessment instruments “are doubly racialised: first, they include input factors that are based on the norms of whiteness, and second, they use highly racialised risk factors such as criminal history.”[s] When communities have been over-policed for generations, their arrest records reflect that over-policing, not higher rates of actual criminality. Algorithms trained on this data learn to flag those same communities as high-risk, creating a feedback loop that deepens inequality.
Facial recognition compounds these disparities. The landmark 2018 Gender Shades study found that commercial systems showed error rates of just 0.8% for light-skinned men but 34.7% for darker-skinned women, a 40-fold disparity.[s] Unsurprisingly, most known wrongful arrests from facial recognition have been of Black people.[s] The technology reduces minorities to objects in a probability matrix, their individual circumstances erased by statistical generalizations.
Due Process in the Age of Algorithms
The constitutional implications are severe. Legal systems built on due process require decisions that can be “explained, contested, and justified,” as legal scholar Tuğba Tosun Çobanoğlu wrote in JURIST. Yet many predictive systems “operate as proprietary ‘black boxes,’ meaning that even judges and defendants may not fully understand how a particular risk score was produced. When liberty is at stake, such opacity becomes deeply problematic.”[s]
The Law Society of England and Wales conducted an extensive review of algorithmic systems in criminal justice and found “significant challenges of bias and discrimination, opacity and due process.”[s] These are not edge cases or implementation failures. They are features of systems designed to generate conclusions without the messy, time-consuming work of actual investigation.
The result is that “the individual defendant becomes less a person before the law and more a data point within a predictive model.”[s] This is the core philosophical problem: algorithmic control reduces defendants to profiles rather than persons, to statistical likelihoods rather than individuals with rights.
The Regulatory Vacuum
Police departments are deploying these technologies faster than lawmakers can understand them. North Dakota, where Lipps was wrongfully detained, has no legislation governing AI in police investigations. The state represents a “regulatory wild west” where “all kinds of tech products are being tested, with minimal transparency.”[s]
Ian Adams, an assistant professor of criminology at the University of South Carolina, told CNN that police are adopting AI “so quickly that all agencies really have to rely on is vendor promises.”[s] When mistakes happen, they tend to involve both technology and human failures: officers deferring to computer outputs rather than conducting basic investigation. In Lipps’ case, her lawyers said exculpatory bank records were readily available and that AI facial recognition had been used as a shortcut for basic investigation.
More than 20 cities have banned police use of facial recognition entirely. Detroit, following a landmark settlement in a wrongful arrest case, no longer permits arrest warrants based solely on facial recognition and a photo lineup. Indiana has enacted similar protections into state law.[s] But these remain exceptions, and many jurisdictions lack comparable rules.
What Would Reform Look Like?
The ACLU and civil liberties groups have called for transparency requirements: public inventories of AI tools, mandatory disclosure of algorithmic evidence, and prohibitions on using facial recognition as the sole basis for arrest. Some states have begun responding. Arizona adopted rules limiting AI use in courts. Nevada created guidelines for judicial officers. Arkansas prohibited exposing court data to generative AI systems.[s]
But regulation may not be enough. The Minnesota Journal of Law and Inequality concluded that predictive policing vendors “have created products where discrimination is a feature, not a bug.”[s] Systems trained on biased data will produce biased outcomes regardless of how carefully they are deployed. The question is whether the efficiency gains from predictive policing algorithms justify the constitutional costs, and whether a justice system that reduces people to probability scores can still be called just.
Biometric Update reported that unpaid bills after Lipps’ detention eventually led to the loss of her house, her car, and her dog.[s] She will never get those five months back. “I’ll never go back to North Dakota,” she told local news after her release on Christmas Eve.[s] Facial recognition remains in police use despite the growing list of documented wrongful arrests.[s]
The Lipps Case in Detail
Angela Lipps, a 50-year-old Tennessee resident, was arrested on July 14, 2025, in Tennessee on an outstanding warrant from Cass County, North Dakota. The warrant, signed by a North Dakota judge on July 1, 2025, authorized nationwide extradition on multiple felony charges, including theft and unauthorized use of personal identifying information.[s]
The charges stemmed from bank fraud incidents in the Fargo area. West Fargo Police Department ran surveillance images through Clearview AI, a facial recognition system with a database of billions of photos scraped from social media and the internet. Clearview “identified a potential suspect with similar features to Angela Lipps,” and West Fargo shared this report with Fargo police.[s] Lipps’ attorneys said officers had not determined whether she traveled to or was in North Dakota at the time of the bank thefts. Bank records subsequently showed she was in Tennessee during the alleged frauds.
Lipps spent over three months in a Tennessee jail before being extradited, then was flown to North Dakota, her first time on an airplane. On December 12, 2025, the State’s Attorney’s Office informed the Fargo detective that the defense had produced “potential exculpatory evidence.” Charges were dismissed on December 23, and Lipps was released on Christmas Eve; Biometric Update reported she was stranded in Fargo with no money to get home.[s][s] Her attorneys believe the case constitutes “the longest AI-related wrongful detention case in U.S. history.”[s]
Predictive Policing Algorithms: Scale of Adoption
The Lipps case exemplifies systemic failures across the expanding landscape of AI-assisted law enforcement. Brennan Center researchers report that police departments across the country have adopted AI-driven data fusion tools to compile and analyze data, and that newer systems are becoming accessible even to less-resourced departments.[s]
The ACLU has documented at least 14 wrongful arrests attributable to facial recognition misidentification:
- Nijeer Parks, Woodbridge, New Jersey (February 2019)
- Michael Oliver, Detroit, Michigan (July 2019)
- Robert Williams, Detroit, Michigan (January 2020)
- Christopher Gatlin, St. Louis, Missouri (August 2021)
- Alonzo Sawyer, Maryland (March 2022)
- Randal Quran Reid, warrant from Jefferson Parish, Louisiana (November 2022)
- Porcha Woodruff, Detroit, Michigan (February 2023)
- Jason Killinger, Reno, Nevada (September 2023)
- Robert Dillion, Jacksonville Beach, Florida (August 2024)
- Javier Lorenzano-Nunez, Phoenix, Arizona (October 2024)
- Trevis Williams, New York City (April 2025)
- Angela Lipps, warrant from Fargo, North Dakota (July 2025)
- Beau Burgess, Orlando, Florida (August 2025)
- Kimberlee Williams, warrant from Maryland (June 2021, case publicly reported 2026)
Common factors include police failure to verify alibis, treating algorithmic matches as definitive rather than investigative leads, and jurisdictional distance between the alleged crime and the accused’s residence.[s]
Technical Architecture of Data Fusion Platforms
Modern predictive policing algorithms operate through data fusion platforms that aggregate multiple surveillance streams. The Brennan Center for Justice highlighted platforms including Cognyte’s NEXYTE “decision intelligence platform,” C3.ai’s C3 AI Law Enforcement, Peregrine’s machine-driven data integration platform, and Flock Safety’s Nova public safety platform.[s]
These systems differ fundamentally from traditional police databases. “Whereas traditional police databases provide departments with information that officers could assess to develop a theory or reach a conclusion, today’s data fusion tools automatically generate conclusions for police, supplying those determinations without context or explanation.”[s]
Data inputs typically include: arrest records, license plate reader data, facial recognition matches, social media monitoring, gunshot detection alerts, public records, and video analytics from body cameras, dashcams, and stationary surveillance networks. Peregrine claims its platform can “integrate data of any type, from any source, at any scale.”[s]
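A minimal sketch can make the fusion step concrete. The Python below uses entirely hypothetical field and stream names (no vendor publishes its schema) to illustrate the structural problem the Brennan Center describes: once every stream is keyed to a single identity, a low-confidence face match sits in the same profile, with the same visual weight, as a verified record.

```python
from dataclasses import dataclass, field

@dataclass
class FusedProfile:
    """One unified record per identity, built from independent streams.

    Hypothetical schema for illustration; real vendor schemas are proprietary.
    """
    person_id: str
    arrest_records: list = field(default_factory=list)
    plate_reads: list = field(default_factory=list)    # license plate reader hits
    face_matches: list = field(default_factory=list)   # (image id, similarity score)
    social_media: list = field(default_factory=list)
    gunshot_alerts: list = field(default_factory=list)

def fuse(streams: dict[str, list[dict]]) -> dict[str, FusedProfile]:
    """Key every event from every stream to a person_id.

    Note what is lost: each event's provenance and confidence are flattened
    into one profile, so a 0.62-similarity face match ends up beside a
    verified arrest record with equal structural weight.
    """
    profiles: dict[str, FusedProfile] = {}
    for stream_name, events in streams.items():
        for event in events:
            pid = event["person_id"]
            profile = profiles.setdefault(pid, FusedProfile(person_id=pid))
            getattr(profile, stream_name).append(event)
    return profiles

# A single weak face match produces a profile identical in form to one
# built on verified records.
profiles = fuse({
    "face_matches": [{"person_id": "P-1042", "image_id": "cam7-0331", "score": 0.62}],
    "plate_reads": [{"person_id": "P-1042", "plate": "ABC123", "site": "I-94 exit"}],
})
print(profiles["P-1042"])
```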
Documented Bias in Algorithmic Systems
The 2018 Gender Shades study by Joy Buolamwini (MIT Media Lab) and Timnit Gebru (then Microsoft Research) tested commercial facial recognition systems and found error rates of 0.8% for light-skinned men versus 34.7% for darker-skinned women, a 40-fold disparity. A 2019 NIST study of 189 facial recognition algorithms from 99 developers found African American and Asian faces were 10 to 100 times more likely to be misidentified than white male faces.[s]
For risk assessment instruments, the 2016 ProPublica investigation of COMPAS found that the formula falsely flagged Black defendants as future criminals at almost twice the rate of white defendants. ProPublica also reported that, after controlling for criminal history, recidivism, age, and gender, Black defendants were still more likely to be pegged as higher risk.[s]
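The metric behind ProPublica’s finding is the group-wise false positive rate: among defendants who did not reoffend, the share labeled high-risk. A minimal sketch of that calculation, using invented toy numbers chosen to echo the roughly 2:1 gap ProPublica reported (about 45% versus 23%):

```python
def false_positive_rate(records: list[dict], group: str) -> float:
    """FPR = share of people who did NOT reoffend but were labeled high-risk."""
    non_reoffenders = [r for r in records
                       if r["group"] == group and not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["score"] == "high"]
    return len(flagged) / len(non_reoffenders)

# Toy records, illustrative only -- not ProPublica's dataset.
records = (
    [{"group": "black", "reoffended": False, "score": "high"}] * 45
    + [{"group": "black", "reoffended": False, "score": "low"}] * 55
    + [{"group": "white", "reoffended": False, "score": "high"}] * 23
    + [{"group": "white", "reoffended": False, "score": "low"}] * 77
)

for g in ("black", "white"):
    print(g, f"FPR = {false_positive_rate(records, g):.0%}")
# black FPR = 45%, white FPR = 23%: roughly the 2:1 disparity
# ProPublica reported for COMPAS.
```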
The Harvard Human Rights Journal’s 2025 analysis identified two sources of racial bias in actuarial risk assessment instruments (ARAIs): “first, they include input factors that are based on the norms of whiteness, and second, they use highly racialised risk factors such as criminal history.” The result: “facially neutral risk factors become pernicious proxies for race.”[s]
This bias has operational consequences. ARAIs “erode the right to liberty by justifying indeterminate sentences for individuals deemed high-risk,” with minorities subjected to “increased surveillance, harsher sentences and reduced likelihood of parole or bail, resulting in far greater deprivation of liberty.”[s] The logic resembles civil asset forfeiture applied to the person: liberty rather than assets is taken on probabilistic grounds, without proof of individual wrongdoing.
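A toy calculation shows why criminal history acts as a proxy. If two people offend at the same underlying rate but one lives in a more heavily patrolled neighborhood, the more-patrolled person accrues more recorded priors, and any score that weights priors ranks them as higher risk. The sketch below is illustrative only, with invented rates and weights; real ARAIs such as COMPAS use proprietary formulas.

```python
def expected_priors(true_offense_rate: float, patrol_intensity: float,
                    years: int = 10) -> float:
    """Expected recorded priors: an offense becomes a 'prior' only if
    police are present to record it."""
    return true_offense_rate * patrol_intensity * years

def risk_score(priors: float) -> float:
    """Toy linear score that weights recorded priors only."""
    return min(1.0, 0.15 * priors)

# Two people with IDENTICAL underlying behavior (hypothetical numbers).
heavy = expected_priors(0.3, patrol_intensity=0.8)  # heavily patrolled area
light = expected_priors(0.3, patrol_intensity=0.3)  # lightly patrolled area
print(f"heavily patrolled: {heavy:.1f} priors, score {risk_score(heavy):.2f}")
print(f"lightly patrolled: {light:.1f} priors, score {risk_score(light):.2f}")
# 2.4 priors (score 0.36) vs 0.9 priors (score 0.14): the gap encodes
# patrol intensity, not behavior.
```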
Pre-Crime Policing and Fourth Amendment Implications
Beyond reactive facial recognition, predictive policing algorithms are being deployed for prospective surveillance. A Brookings Institution report documented a Florida case where a minor was “hounded by law enforcement due to an algorithm concluding that they were likely to break the law.” Despite having committed no serious offense, “officers began visiting his parents’ home without warning to question him, occasionally appearing multiple times a day.” The family eventually moved to escape the harassment.[s]
The Minnesota Journal of Law and Inequality observed that predictive policing vendors “have created products where discrimination is a feature, not a bug.”[s] Systems trained on historical arrest data from over-policed communities will flag those communities for future policing, generating new arrests that further weight the algorithm toward those same communities.
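That loop can be simulated in a few lines. In the sketch below (invented parameters and hotspot-style allocation in the spirit of academic models of runaway feedback, not any vendor’s actual system), two neighborhoods have identical true crime rates, but patrols go wherever recorded arrests are highest, and police only record crimes where they patrol:

```python
import random

random.seed(1)
true_crime_rate = [0.10, 0.10]  # identical underlying crime in both places
arrests = [5, 4]                # small historical skew from past policing

for _ in range(15):
    # Hotspot logic: send patrols where recorded arrests are highest.
    hotspot = 0 if arrests[0] >= arrests[1] else 1
    # Crimes are only recorded where police are present to observe them.
    observed = sum(random.random() < true_crime_rate[hotspot] for _ in range(20))
    arrests[hotspot] += observed

print(arrests)  # neighborhood A accumulates every new arrest; B never grows
```

After a few rounds, the first neighborhood holds nearly all recorded arrests; the initial skew, not any difference in actual crime, decided the outcome.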
Automation Bias and Investigative Shortcuts
A significant factor in wrongful arrests is “automation bias,” the documented tendency for human operators to defer to computer outputs. Research shows that fingerprint examiners were influenced by the order in which computer systems presented potential matches. In facial recognition cases, police have treated algorithmic outputs as “100% match” or used software to “immediately and unquestionably” identify suspects, despite policy disclaimers warning that such results require independent verification.[s]
In Lipps’ case, Fargo Police Chief Dave Zibolski acknowledged that his department had unknowingly relied on West Fargo’s Clearview AI system and “would not have allowed that to be used.” The department has since prohibited using West Fargo’s AI system and pledged monthly reviews of all facial recognition identifications.[s]
Due Process and Human Dignity Concerns
The Law Society of England and Wales reviewed algorithmic systems in criminal justice and found “significant challenges of bias and discrimination, opacity and due process.”[s]
JURIST commentary identified the core constitutional tension: “Many predictive systems operate as proprietary ‘black boxes,’ meaning that even judges and defendants may not fully understand how a particular risk score was produced. When liberty is at stake, such opacity becomes deeply problematic.”[s]
The result reduces defendants to data points within predictive models. Under algorithmic control of their fates, individuals are evaluated not “primarily on the basis of their personal actions, but through patterns derived from large datasets.”[s] This is what legal scholars describe as the reduction of minorities to objects of statistical management rather than subjects of rights.
Current Regulatory Landscape
North Dakota has no legislation governing AI in police investigations.[s] This regulatory vacuum is common nationally.
Some jurisdictions have acted:
- More than 20 cities have banned police facial recognition
- Detroit prohibits arrest warrants based solely on facial recognition plus photo lineup (settlement in Robert Williams case)
- Indiana enacted statutory protections against facial recognition-only warrants
- Arizona adopted rules limiting AI use in courts (2024)
- Nevada created AI guidelines for judicial officers (2025)
- Arkansas prohibited exposing court data to generative AI (2025)
- Kerala, India prohibited judicial officers from using AI for decision-making or legal reasoning
The federal government has categorized similar AI tools as having “significant implications for civil rights, civil liberties, and privacy” requiring safeguards including training, pre-deployment testing, impact assessments, and ongoing monitoring.[s]
The Fundamental Question
Whether the efficiency gains from predictive policing algorithms justify their constitutional costs remains unresolved. Independent studies assessing whether these tools contribute measurably to public safety are scarce.[s] What is documented is a pattern of wrongful arrests, biased outcomes, and opacity that undermines the individualized judgment due process requires.
Angela Lipps’ attorneys stated: “Officers knew that Angela was a Tennessee resident, and we have seen no investigation by officers to determine whether she traveled to or was in North Dakota at the time of the bank thefts. Instead, an officer used AI facial recognition as a shortcut for basic investigation, resulting in an innocent woman being detained and transported halfway across the country to answer for charges that she had nothing to do with.”[s]
Fargo police have not issued a direct apology. The case remains “open and active” with the possibility that charges “may be refiled if additional investigation supports doing so.”[s] Biometric Update reported that unpaid bills after her detention eventually led to the loss of her house, car, and dog.[s] She is exploring civil rights claims but has yet to file suit. Facial recognition technology remains in use in policing despite the documented wrongful-arrest cases.[s]