
Facial Recognition Technology is (still) Racist

Facial recognition technology is becoming ever more prevalent due to its wide variety of uses. Although many applications are intended to benefit society, such as criminal identification [6] or disease diagnosis [10], the technology remains controversial. From a security and privacy standpoint, its use faces widespread criticism [9, 11], since it holds great power to be used in a sinister manner [3]. Moreover, facial recognition technology has historically struggled with racial bias [12, 2, 14], which can enable discriminatory behaviour against certain ethnic groups. Unfortunately, this remains an ongoing issue.

Recently, an article in New Scientist exposed the UK government’s willingness to release passport photo checking software despite knowing it did not work on dark skin [13]. It is this prior knowledge that makes the situation especially unethical: algorithmic bias may be widespread, but the government has a responsibility to be unbiased. With racial hate crimes continuing to rise [5], the government should be attempting to reduce racial disparity, not perpetuate it. The Equality Act 2010 exists to protect individuals from discrimination based on protected characteristics, including race. Discrimination is defined as being “treated unfairly” in any way; this software provides an unequal service and is therefore an example of exactly that, yet the government has done nothing to fix it. The software should be taken down.

To combat racial bias, these algorithms must either be banned or, at the very least, trained on more data from racially diverse faces. Google and Amazon claim to be working on this, and both tech giants have previously made racist faux pas with their facial recognition technology [2, 14]. But their issues are ongoing: a recent story exposed Google’s ethically questionable method of data collection [15]. Dark-skinned individuals were bribed with gift cards in exchange for their face scans, without being told how the scans would be used. More concerningly, vulnerable individuals such as homeless people were targeted and pressured into signing a consent form without reading it. The technology has not won widespread acceptance, owing to its issues with fairness and privacy, and has already been banned in several US cities. That lack of acceptance makes the bribery and deception of vulnerable people into handing over their biometric data all the more immoral.
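To make “trained on more diverse data” concrete, here is a minimal sketch of one common mitigation: rebalancing a training set so that no demographic group dominates. Everything in it is hypothetical (the record format and the `group_of` labelling function are invented for illustration); it sketches the idea, not any vendor’s actual pipeline.

```python
import random
from collections import defaultdict

def rebalance(samples, group_of, seed=0):
    """Oversample each demographic group up to the size of the largest,
    so that no single group dominates training.

    `samples` is any list of training records; `group_of` is a
    hypothetical function mapping a record to its group label."""
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for s in samples:
        by_group[group_of(s)].append(s)
    target = max(len(g) for g in by_group.values())
    balanced = []
    for group in by_group.values():
        balanced.extend(group)
        # Top up smaller groups by sampling with replacement.
        balanced.extend(rng.choices(group, k=target - len(group)))
    rng.shuffle(balanced)
    return balanced
```

Oversampling alone cannot fix bias baked into the images themselves, but it removes one obvious source of skew: a model that barely sees dark-skinned faces during training can hardly be expected to recognise them reliably.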

Meanwhile, earlier this year, Amazon’s software was found to exhibit racial bias [14]: in tests on a mugshot database, it disproportionately misidentified people of colour. Worryingly, this software has been marketed and sold to police [16] to aid in criminal identification, where a misidentification can have serious consequences. This is terrifying, and the software should be banned until it is no longer racist.
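A claim like “disproportionately misidentified” is measurable. As a rough sketch, assuming an evaluation set labelled with demographic groups (the group names and numbers below are invented purely for illustration), an audit could compare misidentification rates per group:

```python
def error_rates_by_group(results):
    """Misidentification rate per demographic group.
    `results` is a hypothetical list of (group, correct) pairs
    from an evaluation run of a recognition system."""
    totals, errors = {}, {}
    for group, correct in results:
        totals[group] = totals.get(group, 0) + 1
        if not correct:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Invented numbers, purely to show the shape of an audit's output.
rates = error_rates_by_group([
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
])
for group, rate in sorted(rates.items()):
    print(f"{group}: {rate:.0%} misidentified")
```

If such an audit shows markedly higher error rates for one group, as was reported for Amazon’s system [14], that alone should be grounds for halting deployment.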

In the UK, there have also been several stories regarding police use of facial recognition software [7, 6]. Recently, a thinktank commissioned by the government [4] published a briefing paper [1] describing how human biases lead to biases in algorithms, including those used for predictive policing. Yet there are no laws specifically governing facial recognition use, and the police have recently won a court case [8] permitting it. This should not be allowed to continue; the evidence of bias is already there.

These are only a small selection of recent incidents, but one thing is clear: facial recognition technology is still racist. It should not be used unless it can be fixed, ethically and soon.

References

[1] Alexander Babuta and Marion Oswald. (2019). Data Analytics and Algorithmic Bias in Policing. Briefing Paper. Royal United Services Institute for Defence and Security Studies.

[2] BBC News. (2015). Google apologises for Photos app’s racist blunder. Available: https://www.bbc.co.uk/news/technology-33347866. Last accessed 28th Oct 2019.

[3] Darren Byler. (2019). China’s hi-tech war on its Muslim minority. Available: https://www.theguardian.com/news/2019/apr/11/china-hi-tech-war-on-muslim-minority-xinjiang-uighurs-surveillance-face-recognition. Last accessed 28th Oct 2019.

[4] Jamie Grierson. (2019). Predictive policing poses discrimination risk, thinktank warns. Available: https://www.theguardian.com/uk-news/2019/sep/16/predictive-policing-poses-discrimination-risk-thinktank-warns. Last accessed 28th Oct 2019.

[5] Home Office. (2019). Hate Crime, England and Wales, 2018/19.

[6] Donna Lu. (2019). UK court backs police use of face recognition, but fight isn’t over. Available: https://institutions.newscientist.com/article/2215468-uk-court-backs-police-use-of-face-recognition-but-fight-isnt-over/. Last accessed 28th Oct 2019.

[7] Frances Perraudin. (2019). Facial recognition must not introduce gender or racial bias, police told. Available: https://www.theguardian.com/technology/2019/may/29/facial-recognition-must-not-introduce-gender-or-racial-bias-police-told. Last accessed 28th Oct 2019.

[8] Jenny Rees. (2019). South Wales Police use of facial recognition ruled lawful. Available: https://www.bbc.co.uk/news/technology-33347866. Last accessed 28th Oct 2019.

[9] Ian Sample. (2019). What is facial recognition — and how sinister is it? Available: https://www.theguardian.com/technology/2019/jul/29/what-is-facial-recognition-and-how-sinister-is-it. Last accessed 28th Oct 2019.

[10] Hazel Tang. (2019). Facial Recognition and Medicine. Available: https://ai-med.io/facial-recognition-and-medicine/. Last accessed 28th Oct 2019.

[11] Josh Taylor. (2019). Plan for massive facial recognition database sparks privacy concerns. Available: https://www.theguardian.com/technology/2019/sep/29/plan-for-massive-facial-recognition-database-sparks-privacy-concerns. Last accessed 28th Oct 2019.

[12] Ian Tucker. (2017). ‘A white mask worked better’: why algorithms are not colour blind. Available: https://www.theguardian.com/technology/2017/may/28/joy-buolamwini-when-algorithms-are-racist-facial-recognition-bias. Last accessed 28th Oct 2019.

[13] Adam Vaughan. (2019). UK launched passport photo checker it knew would fail with dark skin. Available: https://institutions.newscientist.com/article/2219284-uk-launched-passport-photo-checker-it-knew-would-fail-with-dark-skin/. Last accessed 28th Oct 2019.

[14] James Vincent. (2019). Gender and racial bias found in Amazon’s facial recognition technology (again). Available: https://www.theverge.com/2019/1/25/18197137/amazon-rekognition-facial-recognition-bias-race-gender. Last accessed 28th Oct 2019.

[15] Julia Carrie Wong. (2019). Google reportedly targeted people with ‘dark skin’ to improve facial recognition. Available: https://www.theguardian.com/technology/2019/oct/03/google-data-harvesting-facial-recognition-people-of-color. Last accessed 28th Oct 2019.

[16] Julia Carrie Wong. (2018). ‘Recipe for authoritarianism’: Amazon under fire for selling face-recognition software to police. Available: https://www.theguardian.com/technology/2018/may/22/amazon-rekognition-facial-recognition-police. Last accessed 28th Oct 2019.
