These cities bar facial recognition tech. Police still found ways to access it.


As cities and states push to restrict the use of facial recognition technologies, some police departments have quietly found a way to keep using the controversial tools: asking for help from other law enforcement agencies that still have access.

Officers in Austin and San Francisco - two of the largest cities where police are banned from using the technology - have repeatedly asked police in neighboring towns to run photos of criminal suspects through their facial recognition programs, according to a Washington Post review of police documents.


In San Francisco, the workaround didn’t appear to help. Since the city’s ban took effect in 2019, the San Francisco Police Department has asked outside agencies to conduct at least five facial recognition searches, but no matches were returned, according to a summary of those incidents submitted by the department to the county’s Board of Supervisors last year.

SFPD spokesman Evan Sernoffsky said these requests violated the city ordinance and were not authorized by the department, but the agency faced no consequences from the city. He declined to say whether any officers were disciplined, saying those would be personnel matters.

Austin police officers have received the results of at least 13 face searches from a neighboring police department since the city’s 2020 ban - and appeared to get hits on some of them, according to documents obtained by The Post through public records requests and from sources who shared them on the condition of anonymity.

“That’s him! Thank you very much,” one Austin police officer wrote in response to an array of photos sent to him by an officer in Leander, Tex., who ran a facial recognition search, documents show. The man shown in the pictures, John Curry Jr., was later charged with aggravated assault for allegedly rushing at someone with a knife, and is currently in jail awaiting trial. Curry’s attorney declined to comment.

But at least one man who was ensnared by the searches argued that police should be held to the same standards as ordinary citizens.

“We have to follow the laws. Why don’t they?” said Tyrell Johnson, 20, who was identified by a facial recognition search in August as a suspect in the armed robbery of an Austin 7-Eleven, documents show. Johnson said he’s innocent, though prosecutors said in court documents that he bears the same hand tattoo and was seen in a video on social media wearing the same clothing as the person caught on tape committing the crime. He’s awaiting trial.

A spokeswoman for the Austin Police Department said these uses of facial recognition were never authorized by department or city officials. She said the department would review the cases for potential violations of city rules.

“When allegations are made against any department staff, we follow a consistent process,” the spokeswoman said in an emailed statement. “We’ve initiated that process to investigate the claims. If the investigation determines that policies were violated, APD will take the necessary steps.”

The Leander Police Department declined to comment.

Police officers’ efforts to skirt these bans have not been previously reported and highlight the challenge of reining in police use of facial recognition. The powerful but imperfect artificial intelligence technology has played a role in the wrongful arrests of at least seven innocent Americans, six of whom were Black, according to lawsuits each of these people filed after the charges against them were dismissed.

Concerns about the accuracy of these tools - found to be worse when scanning for people of color, according to a 2019 federal study - have prompted a wave of local and state bans on the technology, particularly during the police reforms passed in the wake of the Black Lives Matter protests of 2020.

But enforcing these bans is difficult, experts said, because authorities often conceal their use of facial recognition. Even in places with no restrictions on the technology, investigators rarely mention its use in police reports. And, because facial recognition searches are not presented as evidence in court - legal authorities claim this information is treated as an investigative lead, not as proof of guilt - prosecutors in most places are not required to tell criminal defendants they were identified using an algorithm, according to interviews with defense lawyers, prosecutors and judges.

“Police are using it but not saying they are using it,” said Chesa Boudin, San Francisco’s former district attorney, who said he was wary of prosecuting cases that may have relied on information SFPD obtained in violation of the city’s ban.

Facial recognition algorithms have been used by some police for over a decade to identify criminal suspects. The technology analyzes a “probe image” - perhaps taken from a crime scene photo or surveillance video - and rapidly scans through a database of millions of images to locate faces with similar features. Experts said the technology’s effectiveness can hinge on the quality of the probe image and the cognitive biases of human users, who have the sometimes difficult task of selecting one possible match out of dozens of candidates that may be returned by an algorithm.
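Under the hood, such systems typically pair a face-embedding model with a nearest-neighbor search: the probe face is reduced to a vector of numbers, which is compared against vectors precomputed for every face in the database, and the closest candidates are returned for a human to review. What follows is a minimal sketch of that matching step in Python; the 512-dimension embedding, the similarity threshold and the stand-in embedding function are illustrative assumptions, not details of any particular vendor’s product.

import numpy as np

EMBEDDING_DIM = 512    # common size for face embeddings; an assumption here
MATCH_THRESHOLD = 0.6  # similarity cutoff; real systems tune this carefully

def embed_face(image_id) -> np.ndarray:
    # Stand-in for a real face-embedding model (normally a deep neural
    # network); returns a pseudo-random unit vector so the sketch runs.
    rng = np.random.default_rng(abs(hash(image_id)) % (2**32))
    v = rng.standard_normal(EMBEDDING_DIM)
    return v / np.linalg.norm(v)

# Hypothetical gallery; a deployed system would hold millions of faces.
gallery_ids = [f"person_{i}" for i in range(10_000)]
gallery = np.stack([embed_face(pid) for pid in gallery_ids])

def search(probe_image, top_k=10):
    # Cosine similarity reduces to a dot product on unit-length vectors.
    probe = embed_face(probe_image)
    scores = gallery @ probe
    best = np.argsort(scores)[::-1][:top_k]
    # Results are investigative leads for a human reviewer, not proof
    # of identity - mirroring how police describe their use of the tools.
    return [(gallery_ids[i], float(scores[i]))
            for i in best if scores[i] >= MATCH_THRESHOLD]

print(search("probe_from_surveillance_still.jpg"))  # may well be empty

Because the algorithm only ranks lookalikes, a low-quality probe image or a hasty review of the candidate list can produce a confident-seeming but wrong identification - the failure mode behind the wrongful arrests described below.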

The first known false arrest linked to facial recognition was of a Black man in Detroit. His arrest was the subject of an article in the New York Times in June 2020, one month after the murder of George Floyd at the hands of Minneapolis police fueled national protests over policing tactics in minority communities.

That same month, Austin passed its ban on facial recognition, part of a city council resolution that also restricted police use of tear gas, chokeholds, military equipment and no-knock warrants.

“The outcry was so great in that moment,” said Chris Harris, a policy director at the Austin Justice Coalition, a nonprofit civil rights group. A long list of police reforms the community had discussed for years, he said, “suddenly became possible.”

The city council of Jackson, Miss., soon followed suit, saying the technology “has been shown to programmatically misidentify people of color, women, and children, thus supercharging discrimination.” Portland, Maine, passed its own ban, saying “the use of facial recognition and other biometric surveillance would disproportionately impact the civil rights and liberties of people who live in highly policed neighborhoods.”

In all, 21 cities or counties and Vermont have voted to prohibit the use of facial recognition tools by law enforcement, according to the Security Industry Association, a Maryland-based trade group.

Boudin, the former San Francisco district attorney, said he saw evidence SFPD commonly employed a different workaround that gave officers plausible deniability: sharing “be on the lookout” fliers containing images of suspects with other police agencies in the Bay Area, which might take it upon themselves to run the photos through their facial recognition software and send back any results.

Sernoffsky, the SFPD spokesman, called Boudin’s claim an “outlandish conspiracy theory,” adding that any assertion that “SFPD routinely engaged in this practice beyond the cases we made public is absolutely false.”

In September 2020, the San Francisco Chronicle reported that SFPD had charged a suspect with illegally discharging a gun after he was identified through a facial recognition search result. The lead was provided by the Northern California Regional Intelligence Center, or NCRIC, a multi-jurisdiction facility serving law enforcement agencies in the region. At the time, SFPD told the Chronicle that it had not asked NCRIC to conduct the search and that it identified the suspect through other means.

Mike Sena, executive director of NCRIC, said his analysts always send suspect leads out to agencies whenever they get a hit. “We are not trying to force anyone to violate their policy, but if I identify a potential lead as a murder suspect, we are not going to just sit on it,” Sena said.

In the five cases the police department reported to the city’s Board of Supervisors, two SFPD officers explicitly asked a “state and local law enforcement fusion center” to help them identify robbery, aggravated assault and stabbing suspects in 2020 and 2021, and one of them asked the Daly City Police Department for help identifying a stabbing suspect in 2021. The disclosure was part of an annual report in which SFPD is required to list how it used any surveillance technology.

A Daly City police official said he had no immediate comment.

SFPD told the Board of Supervisors that all five incidents had been examined by SFPD’s internal investigators but did not say whether any disciplinary measures had been taken.

“The SFPD is not taking the facial recognition ban seriously,” said Brian Hofer, executive director of Secure Justice, a watchdog group that monitors police surveillance, who shared the San Francisco document with The Post. “They have repeatedly violated it and stronger consequences are needed.”

Sernoffsky said SFPD follows all city laws and department policies and thoroughly investigates any accusations of policy violations.

Police departments can be deeply entwined with their law enforcement neighbors, especially when it comes to sharing information that could help catch criminals. The Rev. Ricky Burgess, a former city council member in Pittsburgh, warned his colleagues when they passed a 2020 ban on facial recognition that the measure probably would be ineffective because police frequently collaborate with neighboring and statewide agencies.

“Right now, today, the city of Pittsburgh is using facial recognition through the state of Pennsylvania, and we have no control over it whatsoever,” Burgess said at the time, according to a video of the meeting archived by the public records database Quorum. “This is a bill simply for window dressing.”

A spokesperson for Pittsburgh police said the department does not use facial recognition technology.

Austin’s city council tried to prevent such loopholes. Its resolution prohibits city employees from using facial recognition as well as “information obtained” from the technology. Exceptions for cases of “imminent threat or danger” require approval from the city manager.

After the ban went into effect, Austin police discovered that their colleagues in Leander, a suburb about 30 miles north of Austin, had access to Clearview AI, one of the most popular providers of facial recognition software for police agencies. Clearview’s database includes billions of images scraped from social media and other websites, which privacy advocates say were collected without appropriate consent.

Between May 2022 and January 2024, Austin police officers emailed Leander police at least six times, explicitly asking them to run photos through facial recognition, documents show. In at least seven other cases, Leander police provided facial recognition results to Austin police, though it wasn’t clear they had been explicitly asked to do so. Most of the searches were conducted by one Leander officer, David Wilson, whose name circulated within the ranks of Austin police as someone who could help them run facial searches, emails reviewed by The Post show. Wilson was listed on Leander’s contract with Clearview as the agency’s “influencer” for the technology.

“Hello sir, I was referred to you by our Robbery Unit who advised me you are able to do facial recognition,” one Austin detective wrote in a December 2022 email to Wilson obtained by The Post. “I am working a case where I am trying to identify a suspect and was curious if you might be able to help me out with it.”

Wilson did not respond to requests for comment.

Clearview prohibits law enforcement customers from sharing their access to the platform with anyone outside their agency. But the software lets customers easily export the results of facial recognition searches. Last year, Clearview’s chief executive, Hoan Ton-That, sent an email to customers saying the company would begin restricting this feature, citing concerns that widespread sharing of results could hurt the company’s business and increase the potential for errors.

“Sharing results with others who are not trained on facial recognition usage and best practices may lead to higher chances of mistakes in these investigations,” Ton-That said in the email.

The email said nothing about sharing Clearview results through more rudimentary methods, such as copying and pasting images into emails - the method Wilson appeared to use several times, even after Clearview said it was clamping down on sharing, emails show.

Clearview did not respond to requests for comment.

- - -

Nate Jones and Jeremy B. Merrill contributed to this report.
