Almost No One Noticed, but 'Black Lives Matter' Has Had a Tremendous Impact on IBM and Face Recognition
In the midst of the recent Black Lives Matter protests, which raised questions about policing and racism in the United States and elsewhere, technology company IBM announced its withdrawal from the general-purpose facial recognition market. Could this be a turning point in the use of such technologies, fraught with drawbacks, by security services?
The news has gone almost unnoticed.
On one side of the story lies the bungled COVID-19 health crisis. On the other, protests and anger ignited by shocking police violence, which revived the Black Lives Matter movement worldwide. Caught between the two, IBM announced on June 8 its decision to stop marketing facial recognition software, driven by the protest actions into honouring its ethical charter.
“Technology can increase transparency and help the police to protect communities, but must not promote discrimination or racial injustice”, Arvind Krishna, the company’s CEO, notably commented.
The announcement has drawn little reaction beyond a handful of political economy analysts, technophiles, and activists. And yet it could indicate major changes in the influence which technology exerts on our lives.
A controversial technology
Facial recognition software permits individuals to be identified from photos or videos in an automated way. To achieve this, it relies on two pillars: a reference dataset of pre-recorded images, and large processing capacity. Both are areas which have seen phenomenal progress recently, owing to innovations in Big Data and in Artificial Intelligence (AI). Massively scaling up facial recognition has thus become a possibility.
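To make those two pillars concrete, here is a minimal, hypothetical sketch in Python of the matching step: a probe face is converted into a numerical vector and compared against a gallery of pre-recorded reference vectors. The `embed()` function, the distance measure, and the threshold are assumptions for illustration only; real systems use trained neural networks and far larger reference datasets.

```python
# Hypothetical sketch of automated face identification:
# compare a probe image against a gallery of pre-recorded references.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Placeholder embedding: a real system would run a deep model here."""
    return image.flatten().astype(float) / 255.0

def identify(probe_image: np.ndarray,
             gallery: dict,          # {name: reference image}
             threshold: float = 0.6):
    """Return the closest gallery identity, or None if no match is close enough."""
    probe = embed(probe_image)
    best_name, best_dist = None, float("inf")
    for name, reference_image in gallery.items():
        dist = np.linalg.norm(probe - embed(reference_image))
        if dist < best_dist:
            best_name, best_dist = name, dist
    # Only report a match if the closest reference is similar enough.
    return best_name if best_dist < threshold else None
```

In a deployed system the gallery embeddings would be precomputed and indexed, which is what makes scanning millions of faces per day computationally feasible.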
For several years, examples have mushroomed worldwide. As far back as February 2005, the Los Angeles Police Department was using a system developed by General Electric and Hamilton Pacific. It’s a practice which has since become generalised and accelerated. In 2019, China had a total of 200 million video surveillance cameras on its territory. An even denser network is under preparation in Russia. Not to mention the initiatives of cities like Nice, which is currently testing [fr] the technology, or London, where cameras analyse the faces of passers-by (without informing them) with the aim of locating people sought by the authorities.
Authorities justify such automated surveillance by security imperatives: in late 2016, the International Criminal Police Organization, Interpol, claimed to have identified “over 650 felons, fugitives, persons of interest or missing persons […]”. It is all done in the name of the struggle against criminality, terrorism, or more recently the spread of coronavirus.
But as with other advanced technologies, facial recognition is a double-edged sword. The progress it brings is accompanied by threats, particularly to civil liberties.
Several digital rights organizations have been alerting the public to potential threats and abuses enabled by facial recognition. Among them are the Electronic Frontier Foundation (EFF) and La Quadrature du Net. The latter is co-ordinating a campaign called Technopolice [fr], an initiative which logs and exposes automatic surveillance plans in France, and calls for systematic resistance.
Limited reliability
The most prominent harm of using facial recognition tools is bias. These tools identify and verify people based on exposure to sample data: so-called training datasets. If these are incomplete or lacking in relevance, the tool will make poor interpretations. This is known as learning bias.
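A toy sketch, using entirely made-up data rather than real faces, can make this concrete: when one group is heavily over-represented in the training set, a model can fit that group well while performing little better than chance on the under-represented group.

```python
# Synthetic illustration of learning bias from an unbalanced training set.
# All data and rules here are invented for the demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, rule):
    X = rng.normal(size=(n, 2))
    return X, rule(X).astype(int)

# The two groups follow different underlying rules, standing in for the fact
# that under-represented populations may vary in ways the model never learns.
rule_a = lambda X: X[:, 0] + X[:, 1] > 0
rule_b = lambda X: X[:, 0] - X[:, 1] > 0

# Training set: group A is heavily over-represented (950 vs 50 samples).
Xa, ya = make_group(950, rule_a)
Xb, yb = make_group(50, rule_b)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on equal-sized test sets for each group.
Xa_test, ya_test = make_group(1000, rule_a)
Xb_test, yb_test = make_group(1000, rule_b)
print("accuracy on group A:", model.score(Xa_test, ya_test))  # high
print("accuracy on group B:", model.score(Xb_test, yb_test))  # near chance
```

The gap in accuracy between the two groups is the learning bias described above: the model is not malicious, it has simply never seen enough examples of one group to handle it correctly.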
To demonstrate such bias, last month, Twitter users tested an AI that reconstructs portraits from pixelated images, and published their anomalous results online. There were significantly more failures with African-American or Hispanic subjects, such as Barack Obama and Alexandria Ocasio-Cortez. The dataset, which consisted almost solely of portraits of white people, had wrongly nudged the reconstructions towards the profiles it considered most typical.
When AI is used to identify people from images rather than to enhance them, the analysis is likely to exhibit a similar bias.