No One Knows 'Black Lives Matter' Has Had a Tremendous Impact on IBM and Face Recognition



Facial recognition technology analyses the structure of faces to allow identification of individuals. Image used with permission. Source: Pikrepo
In the midst of the recent Black Lives Matter protests, which raised questions about policing and racism in the United States and elsewhere, technology company IBM announced its withdrawal from the general-purpose facial recognition market. Could this be a turning point in the security services' use of such technologies, fraught as they are with drawbacks?
The news has gone almost unnoticed.
On one side of the story lies the bungled COVID-19 health crisis. On the other, the protests and anger ignited by shocking police violence, which revived the Black Lives Matter movement worldwide. Caught between the two, IBM declared on June 8 its decision to abandon the marketing of facial recognition software, driven by the protest actions into honouring its ethical charter.
“Technology can increase transparency and help the police to protect communities, but must not promote discrimination or racial injustice,” Arvind Krishna, the company's CEO, notably commented.
The announcement has garnered little reaction, apart from a few political economy analysts, technophiles, and activists. And yet it could signal major changes in the influence technology exerts on our lives.

A controversial technology

Facial recognition software permits individuals to be identified from photos or videos in an automated way. To achieve this, it relies on two pillars: a reference dataset of pre-recorded images, and large processing capacity. Both are areas which have seen phenomenal progress recently, owing to innovations in Big Data and in Artificial Intelligence (AI). Scaling up facial recognition massively has thus become a possibility.
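To make these two pillars concrete, here is a minimal sketch of the matching step, written with the open-source face_recognition Python library. The library is chosen here purely for illustration (the commercial systems discussed in this article are proprietary), and the file paths are hypothetical placeholders.

```python
# A minimal sketch of the matching step in facial recognition, using the
# open-source face_recognition library (pip install face_recognition).
# The file paths are hypothetical placeholders for illustration only.
import face_recognition

# Pillar 1: a reference dataset of pre-recorded images. Each known face
# is reduced to a 128-dimensional numerical encoding.
known_image = face_recognition.load_image_file("reference/person_a.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# A new image to identify, e.g. a frame captured by a surveillance camera.
unknown_image = face_recognition.load_image_file("captures/frame_001.jpg")
candidate_encodings = face_recognition.face_encodings(unknown_image)

# Pillar 2: processing capacity. Every face detected in the new image is
# compared against the reference set; `tolerance` controls how strict a
# match must be (lower is stricter).
for candidate in candidate_encodings:
    is_match = face_recognition.compare_faces([known_encoding], candidate, tolerance=0.6)[0]
    distance = face_recognition.face_distance([known_encoding], candidate)[0]
    print(f"match: {is_match}, distance: {distance:.3f}")
```

The reference "dataset" in this sketch is a single image; real deployments compare every captured face against millions of stored encodings, which is exactly where the large processing capacity comes in.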
For several years, examples have mushroomed worldwide. As far back as February 2005, the Los Angeles police department was using a system developed by General Electric and Hamilton Pacific. The practice has since become generalised and accelerated. In 2019, China had a total of 200 million video surveillance cameras on its soil. An even denser network is under preparation in Russia. Not to mention the initiatives of cities like Nice, which is currently testing [fr] the technology, or London, where cameras analyse the faces of passers-by (without informing them) with the aim of locating people sought by the authorities.
Authorities justify such automated surveillance on security grounds: in late 2016, the International Criminal Police Organization, Interpol, claimed to have identified “over 650 felons, fugitives, persons of interest or missing persons […]”. It is all done in the name of the fight against criminality, terrorism or, more recently, the spread of coronavirus.
But as with other advanced technologies, facial recognition is a double-edged sword. The progress it brings is accompanied by threats, particularly to civil liberties.
Several digital rights organizations, among them the Electronic Frontier Foundation (EFF) and La Quadrature du Net, have been alerting public opinion to the potential threats and abuses enabled by facial recognition. The latter is co-ordinating a campaign called Technopolice [fr], an initiative which logs and exposes automated surveillance projects in France and calls for systematic resistance.

Limited reliability

The most prominent harm of facial recognition tools is bias. These tools identify and verify people based on exposure to sample data: so-called training datasets. If these are incomplete or lacking in relevance, the tool will make poor interpretations. This is called learning bias.
To demonstrate such bias, last month Twitter users tested an AI that reconstructs portraits from pixelated images, and published their anomalous results online. There were significantly more failures with African-American or Hispanic subjects, such as Barack Obama and Alexandria Ocasio-Cortez. The training dataset, which consisted almost solely of portraits of white people, had wrongly nudged the reconstructions towards the most statistically likely profiles.
When using AI to identify people from images rather than to enhance them, the analytical process will probably show a similar bias.
Suppose you decided to assess a person's risk of criminality based on parameters like age, home address, skin colour, highest academic qualification… and, to train your software, you used data supplied by detention centres or prisons. Then it is highly likely your software will seriously downplay the risks for white people and inflate them for everyone else.
Source: https://intelligence-artificielle.agency/les-biais/ [fr]
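This flaw can be made tangible with a toy experiment. The sketch below, a hypothetical illustration with entirely invented numbers and group labels, trains a scikit-learn classifier on data filtered the way the passage above describes, so that one group is heavily over-represented among recorded "high-risk" cases, then measures false-positive rates on an unbiased sample.

```python
# Toy demonstration of learning bias with synthetic data. The training set
# is filtered the way the article's hypothetical describes: "high-risk"
# examples survive mostly for one group, as if sourced from detention records.
# All numbers and group labels are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_population(n):
    """True risk is independent of group; it depends only on a latent factor."""
    group = (rng.random(n) < 0.5).astype(int)   # 0 = group A, 1 = group B
    latent = rng.normal(size=n)                 # the genuinely predictive factor
    risk = (latent > 1.0).astype(int)           # ~16% truly high-risk in both groups
    features = np.column_stack([latent + rng.normal(scale=0.5, size=n), group])
    return features, risk, group

# Biased training sample: keep all low-risk records and all of group B,
# but only 10% of group A's high-risk records.
X, y, g = make_population(20_000)
keep = (y == 0) | (g == 1) | (rng.random(len(y)) < 0.1)
model = LogisticRegression().fit(X[keep], y[keep])

# Evaluate on an unbiased sample: false-positive rates diverge by group.
X_test, y_test, g_test = make_population(20_000)
pred = model.predict(X_test)
for label, name in [(0, "group A"), (1, "group B")]:
    innocent = (g_test == label) & (y_test == 0)
    print(f"{name} false-positive rate: {pred[innocent].mean():.3f}")
```

In runs of this toy setup, the false-positive rate for group B typically comes out markedly higher than for group A, even though true risk was generated identically for both groups: the model has learned the sampling artefact, not reality.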
The facts speak for themselves: in London, real-time facial recognition has shown an 81% error rate; in Detroit, an African-American man spoke up about his wrongful arrest caused by a faulty identification.

Disputed legitimacy

Not only is facial recognition fallible, it also worsens discrimination, as an inquiry by the investigative site ProPublica confirmed back in 2016.
Google Photos tagged two African Americans as “gorillas” […] a recruitment aid tool used by Amazon disadvantaged women.
For police forces, which frequently face accusations of discrimination, facial recognition is one more highly inflammable element.
The death of George Floyd in an incident of police violence on May 25 in Minneapolis sparked a wave of demonstrations, first in the United States and then across the world. They began by denouncing discrimination targeting ethnic minorities, but an escalation of violence led the demonstrators to demand the de-militarisation of law enforcement agencies, chanting the slogan “Defund the police”. By extension, the tools of widespread surveillance, and the private companies which provide them, have also come under scrutiny. Now, under this pressure from Black Lives Matter activists, IBM has announced its partial withdrawal from the facial recognition market.
It is no accident that Big Blue [IBM's nickname] has been the first to react. The firm has a long, sometimes shameful history which it has had to learn to come to terms with. Back in 1934, it collaborated with the Nazi regime via its German subsidiary. Much later, in 2013, it was implicated in the PRISM surveillance programme exposed by the whistleblower Edward Snowden. Perhaps for this reason, it has been able to finesse its role in the current conflict between a security-driven state and human rights activists. Of course, a far more pragmatic reading of IBM's strategy is also possible: the company is eager to protect itself from future judicial proceedings and their financial cost.
Nonetheless, the reorientation of its activity is real enough, and it has set an example that other industry giants are now following. Microsoft declared on June 12 that it would refuse to sell its facial recognition technology to police agencies. Amazon, under peer pressure, ended up declaring a moratorium on its Rekognition tool.

A step towards reform?

The need for a statutory framework has become obvious. In his announcement, Arvind Krishna, IBM CEO, called on the US Congress to “open a national dialogue to see whether, and how, facial recognition technologies are to be used by law enforcement agencies.”
This call has been heard. On June 25, members of Congress introduced a bill to ban the use of facial recognition by the police. A day earlier, the city of Boston's City Council had endorsed a similar ban.
No doubt this is just the start of a long political and legal battle to bring the use of facial recognition within a framework that respects citizens' rights. But for the first time, human rights movements seem in a position to push the Big Tech companies and the political establishment towards the shared aspiration of technology that benefits everyone.
Written byRĂ©my Vuong
Translated byAdam Long
