Amnesty sues NYPD over refusal to disclose facial recognition tech data
Two watchdog groups, Amnesty International and the Surveillance Technology Oversight Project (S.T.O.P.), have sued the New York Police Department (NYPD) over its refusal to disclose public records on its acquisition of facial recognition technology (FRT) and other surveillance equipment.
The lawsuit could force the NYPD to disclose its surveillance capabilities. In September 2020, Amnesty International filed a request seeking public records about the NYPD's procurement and use of FRT, drones, and other surveillance technology. The police department denied the request, invoking certain exemptions, a response the civil liberties groups considered inadequate. They therefore filed the present petition asking a court to review the NYPD's decision.
Use of FRT during the Black Lives Matter protests
S.T.O.P. executive director Albert Fox Cahn said, “It is outrageous that when citizens came out to protest abuse of power by police, they were met with more of the same abuse. We have no idea how often the department uses these surveillance tools to monitor citizens who are just exercising their First Amendment rights.”
The rights groups allege that the police made extensive use of FRT during the Black Lives Matter protests. Cahn says the records could shed light on how many arrests were made using the technology and how much money is being spent on acquiring such equipment.
NYPD spokesperson Sergeant Jessica McRorie told CyberScoop, “We will review the lawsuit if and when it is served.”
Earlier in April, an investigative report suggested that more than 7,000 individuals from nearly 2,000 public agencies in the U.S. had used Clearview AI for various purposes – searching for Black Lives Matter protesters, Capitol insurrectionists, or even their own family and friends.
Dependence on Facial Recognition Technology (FRT)
Many law enforcement agencies around the world rely heavily on FRT for investigations. According to a report published by the United States Government Accountability Office, many agencies cannot even account for which systems their employees use, which is worrisome. Various studies have shown that the technology is markedly less reliable on Black and Asian faces; partly for this reason, Google is working on an alternative skin tone scale.
FRT equipment is often acquired in secret, with little or no oversight. In a separate lawsuit brought by S.T.O.P., the NYPD disclosed that it had conducted more than 22,000 facial recognition searches between October 2016 and October 2019. A research study by Amnesty International revealed that the NYPD can track people in Manhattan, Brooklyn, and the Bronx by running images from 15,280 surveillance cameras.
Many privacy advocacy groups are pushing the U.S. Congress to enact a law banning the use of FRT by federal law enforcement agencies. Last month, the Facial Recognition and Biometric Technology Moratorium Act was reintroduced by Senator Ed Markey and Representative Pramila Jayapal. It would block funding to state and local law enforcement agencies that do not stop using the technology, while still allowing cities and states to keep and make their own laws.
The European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) have also called for a ban on the use of facial recognition in public spaces.
Why excessive use of FRT is problematic
No technology is completely foolproof, and there are many reasons why over-dependence on FRT might cause more harm than good. In many instances, people’s images have been used without their consent, and even one such instance is too many.
The most important shortcoming of FRT is that it produces false positives, where the system identifies you as someone you are not, and this happens quite frequently. It can also produce false negatives, where it fails to recognize who you are. In a law enforcement context, both kinds of error are deeply problematic.
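To make the distinction concrete, here is a minimal sketch of how a threshold-based face matcher produces both kinds of error. The embeddings and threshold below are made-up illustrative values, not drawn from any system the NYPD is known to use; real systems compute face embeddings with a trained neural network.

# Minimal illustration of threshold-based face matching errors.
# All vectors and the threshold are hypothetical values for illustration.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

THRESHOLD = 0.80  # hypothetical decision threshold

probe       = [0.9, 0.1, 0.4]  # face captured by a camera
same_person = [0.5, 0.6, 0.6]  # enrolled image of the same person
lookalike   = [0.9, 0.2, 0.4]  # enrolled image of a different person

# False negative: the genuine pair scores below the threshold (~0.77).
print(cosine_similarity(probe, same_person) >= THRESHOLD)  # False -> missed match
# False positive: the impostor pair scores above the threshold (~0.995).
print(cosine_similarity(probe, lookalike) >= THRESHOLD)    # True  -> wrong match

Lowering the threshold reduces false negatives but invites more false positives, and raising it does the opposite, which is why no single setting can eliminate both kinds of error.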
On February 5, Canada banned Clearview AI’s facial recognition service for collecting highly sensitive biometric data without consent. In mid-February, Sweden’s data protection watchdog fined local police for unlawful use of Clearview AI. In the UK, a court has likewise ruled a police deployment of facial recognition unlawful.