The Government’s ‘hurt first, fix later’ approach must be replaced with robust regulation that protects our fundamental rights. How? Read our consultation response to the recent white paper on AI regulation to find out.


Public authorities are increasingly using artificial intelligence (AI) and automated decision-making (ADM) across a wide range of areas, including immigration, welfare benefits, policing, and prisons, with the potential to significantly affect the lives of our beneficiaries. Under-regulating this use of AI will impede the state’s ability to effectively prevent, or even mitigate, the very risks it identifies.

The ‘pro-innovation’ focus of the proposed AI regulation has led to an unnecessarily ‘light-touch’ approach to the regulation of public authority use of AI and ADM.

In addition, the Government itself accepts that AI may “cause and amplify discrimination that results in, for example, unfairness in the justice system [or] risks to our privacy and human dignity, potentially harming our fundamental liberties.”

We urge the Government to take a more hands-on approach now and to put regulation on a statutory footing, ensuring the state has the capability to effectively prevent AI-related harms.

We urge the Government to introduce the following:

  • Mandatory disclosure requirements for public authorities that use AI to make decisions, whether in part or in whole.


  • A focus on fortifying existing safeguards and ensuring clarity and coherence between existing laws to improve routes to contest or seek redress.


  • New substantive legal obligations to ensure that public sector use of AI takes due account of the associated risks. The current, entirely reactive ‘hurt first, fix later’ approach to regulating public sector use of AI puts individuals at great risk.


  • A specialist regulator to ensure people can seek redress when things go wrong. The regulatory framework proposed in the white paper will inherit the gaps in existing UK regulation and therefore will not adequately cover all high-impact areas and uses of AI. A new regulator must be adequately resourced, have oversight of the entire AI landscape, and have the right tools to enforce the regulatory regime.


  • Specific statutory obligations on public authorities developing, deploying, and operating AI, so that legal responsibility for AI use is allocated fairly and effectively across the public sector. The legal framework should bring coherence and clarity by building on, and working with, existing data protection safeguards and our public law framework.

Further reading:

Our full consultation response

Our briefing for the House of Commons debate on AI

‘How not to be the face of an AI catastrophe’ by Policy and Parliamentary Lead Isabelle Agerbak

Our joint statement: ‘Key principles for an alternative AI White Paper’