IBM ends facial recognition work over bias concerns

The company's actions on racial equity include condemning tech that violates human rights or could be misused by police.

With its Watson platform, IBM became one of the leaders in the use of AI for business.

But citing fears about the technology’s susceptibility to bias and its potential for misuse, the company has ended its facial recognition work as part of a broader pledge to address bias and racial inequality.

In a letter sent to the U.S. Congress and subsequently posted to IBM’s website yesterday, CEO Arvind Krishna stated that he would like his company to work with the government “in the pursuit of justice and racial equity.”

A key part of that pledge is the responsible use of technology, which Krishna said includes no longer offering or researching facial recognition and analysis software. His letter also condemned the use of any technology, including facial recognition products from other vendors, for the purposes of “mass surveillance, racial profiling” or any other violation of human rights.

“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies,” Krishna said. He did not say, however, that AI should never be used by law enforcement; rather, he argued that the technology needs to be tested for bias and that those tests should be regularly audited. IBM has previously offered AI and analytics services to law enforcement clients ranging from European Union countries to the Edmonton Police Service.

IBM’s Visual Recognition tool, available through its Watson AI platform, was previously promoted for its ability to detect and recognize faces, alongside images of things like food, pets and furniture. The company has not said if or how it plans to remove facial recognition from its other visual recognition capabilities.

The issue of bias in facial recognition and AI has been raised before as the technology has become more widespread. A study by the National Institute of Standards and Technology published late last year tested 189 of the most widely used facial recognition algorithms from 99 organizations and found that the accuracy of most was affected by factors like race, age or gender.

Issues about accuracy and privacy in AI have also spread beyond the marketing and business sectors and into law enforcement as tech companies tap more police departments as clients, such as Amazon with its Rekognition platform. In January, the company Clearview AI caused controversy after a New York Times report revealed that it had used images people had posted to social media to create a facial recognition database of more than three billion images, against the platforms’ terms of use and without users’ knowledge. Clearview faced further controversy for selling that data to both private businesses and law enforcement, a practice that has been the subject of numerous cease-and-desist orders and privacy lawsuits.

Ending its facial recognition work and pushing for more responsible use of technology was just one pillar Krishna laid out in the letter; he also urged Congress to push through further police reforms and reaffirmed IBM’s commitment to educational opportunities that would help communities of colour.