
Analysis: Governments cannot ignore risk of algorithmic discrimination

Writing in the Oxford University Commonwealth Law Journal, PLP researchers Dr Joe Tomlinson and Jack Maxwell argue that courts increasingly expect governments, not the general public, to take responsibility for identifying algorithmic discrimination.

Read: Proving algorithmic discrimination in government decision-making

Public bodies in the United Kingdom are increasingly using algorithms and big data to make decisions. As government algorithms become more prevalent, so too do the risks of algorithmic discrimination. But it can be very difficult to establish that a specific algorithm is in fact discriminatory.

These issues were at the centre of the Court of Appeal’s recent decision on police use of facial recognition technology: R (Bridges) v South Wales Police [2020] EWCA Civ 1058. Facial recognition is notoriously less accurate at identifying certain groups, particularly women and people of colour. The Court held that, when public bodies are considering using this kind of technology, they must proactively test for and address risks of algorithmic discrimination. They cannot escape this duty by choosing not to gather evidence, or by relying on broad assurances from a private manufacturer.

Our Research Team has published an article on the Bridges decision and its implications for algorithmic discrimination in government. We suggest that Bridges, alongside recent decisions in Canada and the Netherlands, forms part of a broader trend: the courts are placing the burden of testing and reviewing potentially discriminatory algorithms on government, rather than the general public.