Wired reports that police in northern California asked Parabon NanoLabs to run a DNA sample from a cold case murder scene to identify the culprit. Police have often run crime-scene DNA against vast databases of consumer genealogical tests, cracking cold cases like that of the Golden State Killer, who murdered at least 13 people.
But what Parabon NanoLabs did for the police in this case was something entirely different. The company produced a 3D rendering of a “predicted face” based on the genetic instructions encoded in the sample’s DNA. The police then ran it against facial recognition software to look for a match.

Scientists are skeptical that this is an effective tool, given that Parabon’s methods have not been peer-reviewed. Even the company’s director of bioinformatics, Ellen Greytak, told Wired that such face predictions are closer in accuracy to a witness description than to an exact replica of a face. With the DNA being merely suggestive – Greytak jokes that “my phenotyping can tell you if your suspect has blue eyes, but my genealogist can tell you the guy’s address” – the potential for false positives is enormous.

Police multiply that risk when they run a predicted face through facial recognition technology (FRT) algorithms and their vast databases – technology that is itself far from perfect. Despite cautionary language from technology producers and instructions from police departments, many detectives persist in mistakenly believing that FRT returns definitive matches. Instead, it produces possible candidates arranged in the order of a “similarity score.” FRT also performs better on some types of faces than others: it is up to 100 times more likely to misidentify Asian and Black people than white men.

The American Civil Liberties Union, in a thorough 35-page comment to the federal government on FRT, biometric technologies, and predictive algorithms, noted that defects in FRT are likely to multiply when police take a low-quality image and try to brighten it, reduce pixelation, or otherwise enhance it. We can only imagine the Frankenstein effect of mating a predicted face with FRT. As PPSA previously reported, rights are violated when police treat a facial match not as a clue, but as evidence.
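To make concrete what a “similarity score” is: FRT systems reduce each face image to a numeric vector (an “embedding”) and rank every face in the database by its similarity to the probe image. The sketch below is a minimal illustration with invented four-dimensional vectors and hypothetical candidate names – real systems use deep-learning embeddings with hundreds of dimensions – but it shows the key property: the software always returns a ranked list of candidates, never a yes-or-no identification.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_candidates(probe, gallery):
    """Return EVERY gallery face ranked by similarity to the probe.

    Illustrative only: the vectors and names below are invented for
    this example, not drawn from any real system.
    """
    scores = [(name, cosine(probe, vec)) for name, vec in gallery]
    return sorted(scores, key=lambda item: item[1], reverse=True)

# Toy 4-dimensional "embeddings" standing in for real face vectors.
gallery = [
    ("candidate_a", [0.9, 0.1, 0.0, 0.2]),
    ("candidate_b", [0.1, 0.8, 0.3, 0.0]),
    ("candidate_c", [0.7, 0.2, 0.1, 0.3]),
]
probe = [0.8, 0.15, 0.05, 0.25]

for name, score in rank_candidates(probe, gallery):
    print(f"{name}: {score:.3f}")
```

Note that every candidate receives a score, even when the true face is not in the gallery at all – someone is always at the top of the list. That is why a top-ranked candidate is a lead to investigate, not an identification.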
This is what happened to Porcha Woodruff, a 32-year-old Black woman and nursing student in Detroit, who was arrested on her doorstep while her children cried. Eight months pregnant, she was told by police that she was being arrested for recent carjackings and robberies – even though the woman committing the crimes in the images was not visibly pregnant. Woodruff went into contractions while still in jail.

In another case, local police executed a warrant by arresting a Georgia man at his home for a crime committed in Louisiana, even though the arrestee had never set foot in Louisiana. The only explanation for such arrests is sheer laziness, stupidity, or both on the part of the police. As the ACLU documents, facial recognition forms warn detectives that a match “should only be considered an investigative lead. Further investigation is needed to confirm a match through other investigative corroborated information and/or evidence. INVESTIGATIVE LEAD, NOT PROBABLE CAUSE TO MAKE AN ARREST.” In the Detroit and Georgia arrests, police had not performed any of the rudimentary investigative steps that would have immediately revealed that the person they were investigating was innocent. Carjackings and violent robberies are not typically undertaken by women on the verge of giving birth.

The potential for replicating error in the courtroom is multiplied when a predicted face is shown to an eyewitness: it could easily contaminate the witness’s memory when he or she is later presented with a lineup. We understand that an investigation might benefit from knowing that DNA reveals a perp has blue eyes, allowing investigators to rule out all brown- and green-eyed suspects. But a predicted face should not be enough to search through a database of innocent people. In fact, any search of facial recognition databases should require a warrant.
As technology continues to push the boundaries, states need to develop clear procedural guidelines and warrant requirements that protect constituents’ constitutional rights.

Woman, Eight Months Pregnant, Arrested for Carjacking and Robbery

PPSA has long followed the dysfunction of facial recognition technology and police overreliance on it to identify suspects. As we reported in January, three common facial recognition tools failed every time they were confronted with images of 16 pairs of people who resembled one another.
This technology is most apt to make mistakes with people of color and with women. Stories have piled up about Americans – overwhelmingly Black men – who have been mistakenly arrested, including one Georgia man arrested and held for a week for allegedly stealing handbags in Louisiana. That innocent man had never set foot in Louisiana.

Now we have the first woman falsely arrested for the crime of resembling a scofflaw. Porcha Woodruff, a 32-year-old Black woman and nursing student in Detroit, was arrested at her doorstep while her children cried. Woodruff, eight months pregnant, was told by police that she was being arrested for a recent carjacking and robbery. “I was having contractions in the holding cell,” Woodruff told The New York Times’ Kashmir Hill. “My back was sending me sharp pains. I was having spasms.” After being released on bond, Woodruff had to go straight to the hospital.

The obvious danger of this technology is that it tends to misidentify people, a problem exacerbated by distinctly lazy police investigations. We see a larger danger: as public and private cameras are increasingly networked, and law enforcement agencies gain the ability to fully track our movements, this technology will mistakenly put some Americans at the scene of a crime. And if the technology improves and someday works flawlessly? We can be assured of being followed throughout our day – who we meet with, where we worship or engage in political activity or protest – with perfect accuracy.

In “A Scanner Darkly,” a 2006 film based on a Philip K. Dick novel, Keanu Reeves plays a government undercover agent who must wear a “scramble suit” – a cloak that constantly alters his appearance and voice – to avoid having his cover blown by ubiquitous facial recognition surveillance.
At the time, the phrase “ubiquitous facial recognition surveillance” was still science fiction. Such surveillance now exists throughout much of the world, from Moscow to London to Beijing. Scramble suits do not yet exist, and sunglasses and masks won’t defeat facial recognition software (although “universal perturbation” masks sold on the internet purport to defeat facial tracking).

Now that companies like Clearview AI have reduced human faces to the equivalent of personal ID cards, the proliferation of cameras linked to robust facial recognition software has become a privacy nightmare. A year ago, PPSA reported on a technology industry presentation that showed how stationary cameras could follow a man, track his movements, locate people he knows, and compare all of that to other data to map his social networks. Facial recognition doesn’t just show where you went and what you did: it can be a form of “social network analysis,” mapping networks of people associated by friendship, work, romance, politics, and ideology.

Nowhere is this capability more robust than in the People’s Republic of China, where the surveillance state has reached a level of sophistication worthy of the overused sobriquet “Orwellian.” A comprehensive net of data from a person’s devices, posts, searches, movements, and contacts tells the government of China all it needs to know about any one of 1.4 billion individuals.

That is why so many civil libertarians are alarmed by the responses to an ACLU Freedom of Information Act (FOIA) lawsuit.
The Washington Post reports that government documents released in response to that FOIA lawsuit show that “FBI and Defense Department officials worked with academic researchers to refine artificial-intelligence techniques that could help in the identification or tracking of Americans without their awareness or consent.” The Intelligence Advanced Research Projects Activity (IARPA), a research arm of the intelligence community, aimed in 2019 to increase the power of facial recognition, “scaling to support millions of subjects.” Included in this is the ability to identify faces from oblique angles, even from a half-mile away.

The Washington Post reports that dozens of volunteers were monitored within simulated real-world scenarios – a subway station, a hospital, a school, and an outdoor market. The faces and identities of the volunteers were captured in thousands of surveillance videos and images, some of them taken by drone. The result is an improved facial recognition search tool called Horus, which has since been offered to at least six federal agencies. An audit by the Government Accountability Office found in 2021 that 20 federal agencies, including the U.S. Postal Service and the Fish and Wildlife Service, use some form of facial recognition technology.

In short, our government is aggressively researching facial recognition tools of the sort already used by the Russian and Chinese governments to conduct mass surveillance of their peoples. Nathan Wessler, deputy director of the ACLU’s Speech, Privacy, and Technology Project, said that the regular use of this form of mass surveillance in ordinary scenarios would be a “nightmare scenario” that “could give the government the ability to pervasively track as many people as they want for as long as they want.”

As we’ve said before, one does not have to infer a malevolent intention by the government to worry about its actions. Many agency officials are desperate to catch bad guys and keep us safe.
But they are nevertheless assembling, piece by piece, the elements of a comprehensive surveillance state.

Facial recognition technology has proven to be useful but fallible. It relies on probabilities, not certainties, with algorithms measuring the angle of a nose or the tilt of an eyebrow. It has a higher chance of misidentifying women and people of color. And in the hands of law enforcement, it can be a dangerous tool for mass surveillance and wrongful arrest.
It should come as no surprise, then, that police mistakenly arrested yet another man using facial recognition technology. Randall Reid, a Black man in Georgia, was recently arrested and held for a week by police for allegedly stealing $10,000 worth of Chanel and Louis Vuitton handbags in Louisiana. Reid was traveling to a Thanksgiving dinner with his mother when he was arrested three states and seven hours away from the scene of the crime. Despite Reid’s claim that he had never even been to Louisiana, facial recognition software identified him as a suspect in the theft of the luxury purses. That was all the police needed to hold him for close to a week in jail, according to The New Orleans Advocate.

Gizmodo reports: “numerous studies show the technology is especially inaccurate when identifying people of color and women compared to identifications of white men. Some law enforcement officials regularly acknowledge this fact, saying facial recognition is only suitable to generate leads and should never be used as the sole basis for arrest warrants. But there are very few rules governing the technology. Cops often ignore that advice and take face recognition at face value.”

When scientists tested three facial recognition tools with 16 pairs of doppelgangers – people with extraordinary resemblances – the computers found all of them to be a match. In the case of Reid, however, he was 40 pounds lighter than the criminal caught on camera.

In Metairie, the New Orleans suburb where Reid was accused of theft, law enforcement officials can use facial recognition without legal restriction. In most cases, “prosecutors don’t even have to disclose that facial recognition was involved in investigations when suspects make it to court.” Elsewhere in Louisiana, there is no regulation; a state bill to restrict the use of facial recognition died in committee in 2021. Some localities use facial recognition just to generate leads.
Others take it and run with it, using it more aggressively to pursue supposed criminals.

As facial recognition technology proliferates, from Ring cameras to urban CCTV, states must put guardrails around its use. If facial recognition tech is to be used, it must be one tool among many for investigators, not the sole cause for arrest and prosecution. Police should use other leads and facts to generate probable cause for arrest. And defense counsel must always be notified when facial recognition technology was used to generate a case. It may be decades before the technical flaws in facial recognition are resolved. Even then, we should ensure that the technology is closely governed and monitored.

Facial recognition software is a problem when it doesn’t work. It can conflate the innocent with the guilty if the two have only a passing resemblance. In one test, it identified 28 Members of Congress as arrested criminals. It is also apt to work less well on people of color, leading to false arrests.
But facial recognition is also a problem when it does work. One company, Vintra, has software that follows a person camera by camera, tracking anyone he or she may interact with along the way. Another company, Clearview AI, identifies a person and creates an instant digital dossier on him or her with data scraped from social media platforms. Thus, facial recognition software does more than locate and identify a person. It has the power to map relationships and networks that could be personal, religious, activist, or political.

Major Neill Franklin (Ret.), formerly of the Maryland State Police and Baltimore Police Department, writes that facial recognition software has been used to violate “the constitutionally protected rights of citizens during lawful protest.” False arrests and crackdowns on dissenters and protesters are bound to result when such robust technology is employed by state and local law enforcement agencies with no oversight or governing law. The spread of this technology takes us inch by inch closer to the kind of surveillance state perfected by the People’s Republic of China.

It is for all these reasons that PPSA is heartened to see Rep. Ted Lieu join with Reps. Sheila Jackson Lee, Yvette Clarke, and Jimmy Gomez on Thursday to introduce the Facial Recognition Act of 2022, a bill that would place strong limits and prohibitions on the use of facial recognition technology (FRT) in law enforcement.
The introduction of this bill is the result of more than a year of hard work and fine-tuning by Rep. Lieu. This bill deserves widespread recognition and bipartisan support.