Amazon is facing a backlash after reports emerged Tuesday that its “Rekognition” facial recognition AI service was being used by law enforcement in aggressive facial detection programs, according to TechCrunch. More than two dozen civil rights organizations are now calling on Amazon not to sell the technology to law enforcement, and 41 activist groups have cosigned a letter from the American Civil Liberties Union.
The report, from the ACLU of Northern California, is based on documents that reveal new uses of the technology which, the organization argues, raise “profound civil liberties and civil rights concerns.”
The ACLU is right to be concerned. Rekognition may not be substantially more invasive than facial recognition programs already available, but its infrastructure offers far greater reach, and Amazon has been actively recruiting police departments to adopt it at very low cost. By threatening to put facial recognition into much wider use, it presents a crossroads at which society must decide its comfort level with such practices.
As Clare Garvie, an associate with Georgetown University Law Center’s Center on Privacy and Technology, notes:
“This raises very real questions about the ability to remain anonymous in public spaces.”
Amazon announced in 2016 that its facial recognition technology was being used by the Sheriff’s Office of Washington County, Oregon. The new documents obtained by the ACLU go further, showing that the county is using a database of 300,000 mug shot photos and an app designed to cross-reference faces. They also show that police in Orlando, Florida, are testing the technology to detect persons of interest in public spaces. Amazon’s website says Rekognition can detect as many as 100 of the largest faces in a single image, suggesting it can be used to identify people in a crowd.
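To get a sense of how little engineering such a system requires, here is a minimal sketch of the kind of calls Rekognition exposes through AWS’s boto3 Python SDK. The collection name, file names, and similarity threshold below are hypothetical placeholders for illustration, not details from the Washington County or Orlando deployments.

```python
# A minimal sketch of the Rekognition API calls a face-matching app could use.
# Collection name, image files, and threshold are illustrative placeholders.
import boto3

client = boto3.client("rekognition", region_name="us-west-2")

# 1. Create a collection and index a known face (e.g., a booking photo) into it.
client.create_collection(CollectionId="example-mugshot-collection")
with open("booking_photo.jpg", "rb") as f:
    client.index_faces(
        CollectionId="example-mugshot-collection",
        Image={"Bytes": f.read()},
        ExternalImageId="booking-12345",
    )

# 2. Detect faces in a new image; DetectFaces returns up to 100 of the
#    largest faces it finds, as noted above.
with open("street_photo.jpg", "rb") as f:
    street_bytes = f.read()
detected = client.detect_faces(Image={"Bytes": street_bytes}, Attributes=["DEFAULT"])
print(f"Faces detected: {len(detected['FaceDetails'])}")

# 3. Search the collection for matches to the largest face in the new image.
matches = client.search_faces_by_image(
    CollectionId="example-mugshot-collection",
    Image={"Bytes": street_bytes},
    FaceMatchThreshold=80,  # similarity cutoff, in percent
    MaxFaces=5,
)
for match in matches["FaceMatches"]:
    print(f"{match['Face']['ExternalImageId']} matched at {match['Similarity']:.1f}%")
```

The point is not the specific calls but how thin the layer is between a commodity cloud API and a searchable database of faces.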
According to ACLU attorney Matt Cagle, who worked on the report:
“People should be free to walk down the street without being watched by the government. By automating mass surveillance, facial recognition systems like Rekognition threaten this freedom, posing a particular threat to communities already unjustly targeted in the current political climate. Once powerful surveillance systems like these are built and deployed, the harm will be extremely difficult to undo.”
Washington County’s public information officer, Deputy Jeff Talbot, told the Washington Post the program “is not mass surveillance or untargeted surveillance,” and said it does not go far beyond the capabilities of current systems.
Amazon also defended the technology, saying:
“Our quality of life would be much worse today if we outlawed new technology because some people could choose to abuse the technology.”
New policies established by Washington County in 2017 specify that officers can use facial recognition to identify suspects who do not offer identification. One email obtained by the ACLU showed the Sheriff’s Office used the technology to identify theft suspects, unconscious or deceased individuals, and possible witnesses or accomplices.
Some cities, such as Seattle, already prohibit the use of facial recognition technology in police body cameras.
Even for those who may be comfortable with the widespread use of facial detection by law enforcement, the current state of the technology should raise questions. Facial recognition systems have been shown to produce false matches more often when identifying women and people of color. This could result in higher arrest rates and false accusations against already marginalized groups. While some companies have conducted public testing for bias in their systems, Amazon has yet to share any data on such tests, even as it pushes its technology into use by law enforcement nationwide.
While this is likely a result of the largely white and male data sets used to train the algorithms, Amazon hasn’t indicated it’s doing anything to address the problem. This is unacceptable by any measure.
Cagle notes:
“We have been shocked at Amazon’s apparent failure to understand the implications of its own product on real people. Face recognition is a biased technology. It doesn’t make communities safer. It just powers even greater discriminatory surveillance and policing.”
Now, Democrats in Congress are voicing grave concerns over the potential for this bias and for civil liberties violations. In an open letter, Democratic representatives Keith Ellison of Minnesota and Emanuel Cleaver of Missouri are calling on Amazon to answer questions about how law enforcement is using its technology.
“The disproportionally high arrest rates for members of the black community make the use of facial recognition technology by law enforcement problematic because it could serve to reinforce this trend,” according to their letter.
Speaking to The Hill, Cleaver said:
“This issue is very simple: Our leading private sector companies should not become for-profit law-enforcement officials. There should be no profit motive whatsoever for companies Americans rely on for everyday goods and services to be able to make money to report those Americans to the police.”
It can easily be argued that Amazon’s moves are generating controversy because of their link to a big-name, consumer-driven technology giant. After all, facial recognition is not a brand-new technology for law enforcement. Amazon may argue, with some legitimacy, that it is being unfairly demonized by the current controversy as part of a wider backlash against the growing power of Silicon Valley. Yet the broader questions are long overdue. How much surveillance is acceptable? How important is our anonymity? How much freedom are we willing to sacrifice for safety? And do those sacrifices really make us safer in the first place? As technology advances, these questions have never been more pressing.