IBM will no longer provide facial recognition technology to police departments.
Arvind Krishna, IBM's chief executive, wrote that such technology could be used by police to violate "basic human rights and freedoms," and that would be out of step with the company's values.
"We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies," Krishna said.
Artificial-intelligence researchers and technology scholars continue to warn about facial recognition software, particularly because some data-driven systems have been shown to be racially biased. Researchers at the MIT Media Lab have found that the technology is often less accurate at identifying the gender of darker-skinned faces, which could lead to misidentifications.
The ACLU found in 2018 that the software mistakenly identified 28 members of Congress as people who had been arrested for crimes.
Amazon has publicly defended its facial recognition software, saying studies challenging its accuracy have contained misperceptions about how the technology operates.
"We know that facial recognition technology, when used irresponsibly, has risks," wrote Matt Wood, general manager of artificial intelligence at Amazon Web Services. "But we remain optimistic about the good this technology will provide in society, and are already seeing meaningful proof points with facial recognition helping thwart child trafficking, reuniting missing kids with parents, providing better payment authentication, or diminishing credit card fraud.
Big Tech's use of facial recognition has sparked controversy and legal action over uses beyond law enforcement.
Facebook agreed to pay half a billion dollars to settle a class-action lawsuit alleging that its face-matching software, which guesses who appears in photos posted to the social network, violated Illinois consumer privacy laws.