The worrying lack of transparency around the UK Government’s use of AI is placing it at risk of repeating serious mistakes made by other governments – but the stakes do not need to be this high, writes Policy and Parliamentary Lead Isabelle Agerbak.

This week MPs will be debating the risks and opportunities presented by artificial intelligence (AI). The debate comes just after the consultation on the Government’s AI White Paper closes. Amid the excitement and trepidation around AI in recent months, a crucial point has been missing from public discourse: the rules for State use of AI may not be the same as those for the private sector.

Those who are responsible for shaping this new world need to be aware that Government use of AI comes with serious risks to fundamental rights and to trust in our democratic institutions. The Government has said it wants the UK to be the “global home of AI safety”, but in fact, we are already falling behind other nations in putting proper protections in place.

The Government’s AI regulation plans are designed for the private sector, but the risks posed by public authority use of AI are unique

The Government’s plans for AI regulation (set out this year in a White Paper)[1] are well and truly “light touch”: there are no new statutory obligations on public authorities (or anyone), no new powers of enforcement for regulators, and no improvement to the existing patchwork of regulation (we have different regulators for different sectors, but no one body with oversight of AI use). The Government says it is open to exploring statutory measures in future, but for now its approach is just ‘test and learn’, which is another way of saying ‘hurt first, fix later’. While a ‘pro-innovation’ approach may (or may not) be the right way to regulate AI in the private sector, it is not right when it comes to use of AI by public authorities.

Government bodies are already using AI in a host of situations where the stakes are incredibly high

The relationship between individuals and businesses is very different from the relationship between individuals and the state. That is why we have a whole host of laws which specifically apply to public decision-making. Public authorities have all sorts of powers to do things which affect us, powers which businesses do not have: government can deport you, take your child into local authority care, fine you and take away your liberty. These powers come with great risks to fundamental rights.

Government bodies are already using AI in a host of situations where the stakes are incredibly high, such as immigration detention, benefit fraud detection, and sham marriage investigations. We cannot afford to wait and see whether letting public authorities experiment with different uses of AI does damage. We already know the harms it can cause.

Take an example from the Netherlands. The tax authorities there used an AI tool to help identify potential childcare benefit fraud, but the tool made thousands of mistakes.[2] Some wrongly accused families faced financial ruin, and several thousand children were placed in state custody as a result of the accusations.[3] When victims tried to challenge the decisions, they were told that officials could not access the algorithmic inputs, so could not say how the decisions had been made.[4] The scandal was so serious that the Dutch Government had to resign.

Without proper regulations and statutory obligations, we could end up with horror stories like the ones we have seen elsewhere

Ministers responsible for implementing a ‘light touch’ approach to AI regulation in the UK could find themselves at the helm of a similar catastrophe. For example, we know – thanks largely to resource-intensive investigations by civil society, rather than to the Government being up front about it – that the DWP is currently using AI to decide when to investigate benefit claimants. The DWP has already stopped hundreds of thousands of people’s benefits on the basis of AI-supported decision-making.[5] When the DWP informs claimants that their benefits are being suspended for investigation, it gives little to no information about why their case may have been flagged,[6] and the department is refusing to make information publicly available about how the system works, such as what inputs it uses to make decisions.[7]

Without proper regulations and statutory obligations, we could end up with horror stories like the ones we have seen elsewhere – children wrongly taken from their families, vulnerable people forced into financial ruin, and trust in democratic institutions degraded.

“Global home of AI safety”? The UK is behind the curve on putting in place the safeguards we need

Prime Minister Rishi Sunak recently said he wants the UK to be the “global home of AI safety”, but when we look at the measures which matter for safety in public use of AI, and compare our progress against that of other nations, we are falling behind. Not only that, but the protections we do have are being stripped back.

Transparency

France’s Law for a Digital Republic requires all public bodies to publish a list of the AI tools they use and the rules those tools apply, even where an AI system is only used to assist decision-making.[8] The EU AI Act would also require a public register of ‘high risk’ AI systems, to which AI providers would need to give meaningful information about their systems.[9]

In the UK, Article 22 GDPR, read in combination with Articles 12, 13 and 14, places some transparency obligations on public authorities, but these obligations only apply when decision-making is fully automated, so they will not cover many cases where AI is used to assist decision-making. This is a serious omission. Even this weak safeguard is being further diluted by the Data Protection and Digital Information (DPDI) (No 2) Bill, which, if passed, would mean that Article 22 applies in even fewer situations.

Solely automated decision-making

Under the EU GDPR, individuals have a right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects (with certain narrow exceptions).[10]

In the UK, the DPDI Bill would water down this right, so that more decisions about you could be fully automated. Decisions based on, for example, education or financial data would no longer be caught by the prohibition.[11] As we know from the A-level results scandal, automated decisions based on these sorts of data can still profoundly affect people’s lives.

Risk assessment

The Canadian Directive on Automated Decision-Making (DADM) puts obligations on federal institutions which use AI to assess the likely impacts on individuals, communities and ecosystems, and to release the results.[12]

In the UK, the Government’s DPDI Bill would remove existing requirements for public authorities to carry out full impact assessments when processing personal data. If the Bill passes, instead of properly considering the risks to the rights and freedoms of data subjects, public authorities would just be required to consider whether the data processing is necessary for a given purpose.

Accountability and auditing

The Canadian DADM also provides that the Government retains a right of access to the system for testing, and may authorise third parties to audit the system – the person responsible for the system must ensure that the software is “delivered to and safeguarded by the department”.

In the UK, there are currently no regulations requiring public bodies to take responsibility for AI systems they acquire from elsewhere, and no provisions for independent auditing of these systems.

We cannot afford to wait: the Government must put proper safeguards in place now

Public Law Project, alongside 30 other civil society organisations, is calling on the Government to put in place legislation to ensure transparency around public authorities’ use of AI, alongside a regulatory framework which would ensure that fundamental rights are protected. Read more about our proposals here.

It is not enough to simply wait and see whether harm will be done, and try to fix it later. We need a better set of rules on Government use of AI, now.

Read our AI white paper consultation response

Read our joint statement on an alternative approach to AI regulation


[1] AI regulation: a pro-innovation approach – GOV.UK (www.gov.uk)

[2] https://cadmus.eui.eu/bitstream/handle/1814/75390/Reclaiming_transparency_contesting_Art_2022.pdf?sequence=1&isAllowed=y

[3] https://www.europarl.europa.eu/doceo/document/O-9-2022-000028_EN.html

[4] https://cadmus.eui.eu/bitstream/handle/1814/75390/Reclaiming_transparency_contesting_Art_2022.pdf?sequence=1&isAllowed=y; https://www.amnesty.nl/content/uploads/2021/10/20211014_FINAL_Xenophobic-Machines.pdf?x64788

[5] See the response to Parliamentary Question UIN 142793, put to the Department for Work and Pensions by Kate Osamor MP, available at https://questions-statements.parliament.uk/written-questions/detail/2023-02-08/142793

[6] As described by Kate Osamor MP at https://hansard.parliament.uk/commons/2022-01-26/debates/333BCD75-7B81-464A-BA13-91D711B1A4EF/DWPRiskReviewTeam#contribution-4A0104C1-9AE4-4315-9175-998A491DF1E2 and by the Work Rights Centre, see https://www.independent.co.uk/news/uk/home-news/eu-benefits-universal-credit-dwp-b1977451.html

[7] https://www.whatdotheyknow.com/request/910782/response/2173661/attach/3/Response%20FOI2022%2084395.pdf?cookie_passthrough=1

[8] LOI n° 2016-1321 du 7 octobre 2016 pour une République numérique (1) – Légifrance (legifrance.gouv.fr)

[9] EU AI Act: first regulation on artificial intelligence | News | European Parliament (europa.eu)

[10] https://commission.europa.eu/law/law-topic/data-protection/reform/rules-business-and-organisations/dealing-citizens/are-there-restrictions-use-automated-decision-making_en

[11] See more in our briefing on the Bill: https://publiclawproject.org.uk/content/uploads/2023/04/PLP-Briefing-DPDI-Bill-No.2-Second-Reading-Final-1.pdf

[12] Directive on Automated Decision-Making – Canada.ca