US Federal Study Finds Facial-Recognition Systems Racist

A landmark US federal study has revealed that facial-recognition systems misidentified people of color more often than white people, casting new doubt on a rapidly expanding investigative technique widely used by law enforcement agencies.

Asian and African American people were up to 100 times more likely to be misidentified than white men. The study found a wide range of accuracy and performance across developers’ systems, and it showed that Native Americans had the highest false-positive rate of all ethnicities.

The faces of African American women were also falsely identified more often, especially in the kinds of searches used by police investigators, in which an image is compared with thousands or millions of others in hopes of identifying a suspect.

Asians, African Americans, Native Americans and Polynesians

Algorithms developed in the U.S. also showed high error rates for “one-to-one” searches of Asians, African Americans, Native Americans and Polynesians. Because such searches underpin functions such as cellphone sign-ons and airport boarding systems, those errors could make it easier for impostors to gain access.

The study found middle-aged white men generally benefited from the highest accuracy rates.

The National Institute of Standards and Technology (NIST), the federal laboratory that develops standards for new technology, found “empirical evidence” that most facial-recognition algorithms exhibit “demographic differentials”: their accuracy worsens depending on a person’s age, gender or race.

The study could fundamentally shake one of American law enforcement’s fastest-growing tools for identifying criminals, a technology privacy advocates argue is ushering in a dangerous new wave of government surveillance.

Discrimination and racial abuse

The FBI has logged more than 390,000 facial-recognition searches of state driver-license records and other federal and local databases since 2011, federal records show. But members of Congress this year have voiced anger over the technology’s lack of regulation and its potential for discrimination and abuse.

The federal report confirms previous studies by independent researchers that showed similarly staggering error rates. Companies such as Amazon had criticized those studies, saying they reviewed outdated algorithms or used the systems improperly.

One of those researchers, Joy Buolamwini, said the study was a “comprehensive rebuttal” to skeptics of what researchers call “algorithmic bias.”

“Differential performance with a factor of up to 100?!” she told The Washington Post in an email Thursday. The study, she added, is “a sobering reminder that facial recognition technology has consequential technical limitations alongside posing threats to civil rights and liberties.”

Investigators said they did not know what caused the gap but hoped the findings would, as NIST computer scientist Patrick Grother said in a statement, prove “valuable to policymakers, developers and end users in thinking about the limitations and appropriate use of these algorithms.”

Jay Stanley, a senior policy analyst at the American Civil Liberties Union, which sued federal agencies this year for records related to how they use the technology, said the research showed why government leaders should immediately halt its use.

“One false match can lead to missed flights, lengthy interrogations, tense police encounters, false arrests, or worse,” he said. “But the technology’s flaws are only one concern. Face recognition technology – accurate or not – can enable undetectable, persistent, and suspicionless surveillance on an unprecedented scale.”

Amazon did not submit its algorithm for testing

The NIST’s test examined most of the industry’s leading systems, including 189 algorithms voluntarily submitted by 99 companies, academic institutions and other developers. The algorithms form the central building blocks for most of the facial-recognition systems around the world.

The algorithms came from a range of major tech companies and surveillance contractors, including Idemia, Intel, Microsoft, Panasonic, SenseTime and Vigilant Solutions. Notably absent from the list was Amazon, which develops its own software, Rekognition, for sale to local police and federal investigators to help track down suspects.

The NIST said Amazon did not submit its algorithm for testing. The company did not immediately offer comment but has said previously that its cloud-based service cannot be easily examined by the NIST’s test. Amazon founder and chief executive Jeff Bezos owns The Washington Post.

Grother, the NIST lead researcher, said other companies with cloud-based systems had been able to submit their algorithms, including Microsoft, which he said “sent us very capable and very reliable software.” Of Amazon, he added: “Our test remains open if they elect to participate.”

The NIST team tested the systems with about 18 million photos of more than 8 million people, all of which came from databases run by the State Department, the Department of Homeland Security and the FBI.

The arrest of innocent people

The test studied how algorithms perform on both “one-to-one” matching, used for unlocking phones or verifying a passport, and “one-to-many” matching, used by police to scan a suspect’s face against sets of driver-license photos. Investigators tested both false negatives, in which the system fails to recognize two images of the same face as a match, and false positives, in which the system identifies two different faces as the same person. The latter is a dangerous failure for police, who could end up arresting an innocent person.
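To make the one-to-one versus one-to-many distinction concrete, here is a minimal sketch of how such matching is commonly implemented, assuming each face has already been reduced to a numeric embedding vector. The function names, threshold value and random embeddings are illustrative assumptions for this sketch, not drawn from the NIST test or any vendor’s system.

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings (higher = more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def one_to_one_match(probe: np.ndarray, claimed: np.ndarray,
                     threshold: float = 0.8) -> bool:
    """Verification (phone unlock, passport check): does the probe face
    match one claimed identity? Too low a threshold yields false positives
    (an impostor accepted); too high, false negatives (the owner rejected)."""
    return similarity(probe, claimed) >= threshold

def one_to_many_search(probe: np.ndarray, gallery: dict,
                       threshold: float = 0.8) -> list:
    """Identification (police search): compare the probe against every face
    in a gallery (e.g., driver-license photos) and return candidate IDs
    above the threshold. A false positive here points investigators at an
    innocent person."""
    return [person_id for person_id, emb in gallery.items()
            if similarity(probe, emb) >= threshold]

# Illustrative usage with random stand-in embeddings.
rng = np.random.default_rng(0)
gallery = {f"license_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)
print(one_to_many_search(probe, gallery))  # likely empty at this threshold
```

The two error types the investigators measured correspond to the threshold trade-off shown above: lowering it produces more false positives, raising it more false negatives.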

Some algorithms produced few errors, but the disparity in accuracy between systems could be enormous. There is no national regulation or standard for facial-recognition algorithms, and local law-enforcement agencies rely on a range of contractors and systems with differing accuracies and capabilities. The algorithms themselves, with names like “anyvision-004” and “didiglobalface-001”, are almost entirely unknown to anyone outside the industry.

Algorithms developed in Asian countries had smaller differences in error rates between white and Asian faces, suggesting a relationship “between an algorithm’s performance and the data used to train it,” the researchers said.

“You need to know your algorithm, know your data and know your use case,” said NIST researcher Craig Watson. “Because that matters.”
