Thirty civil society groups have warned that Government use of Artificial Intelligence (AI) needs to be brought under control.

Read the joint statement

Read coverage by the BBC

Campaigners are warning that the Government’s approach, as outlined in its recent white paper on AI regulation, does not properly protect individuals from the risk of unfairness or discrimination when automation is used to make decisions that affect them.

Shameem Ahmad, CEO of the Public Law Project, said: 

“AI is hugely powerful. ChatGPT has caught everyone’s attention, but public authorities have been using this tech for years and right now the Government is behind the curve on managing the risks.

“We have seen public bodies using this tech in disturbing ways: from the use of discriminatory facial recognition by South Wales Police to the DWP using AI to investigate benefits claimants.

“Government use of AI can be secretive, it can backfire, and it can exacerbate existing inequalities, undermining fundamental rights.” 

What does the Alternative AI Whitepaper call for? 

  • Authorities must be transparent about what AI systems they use, how they work and who is affected. 
  • There must always be a human in the loop who is accountable for decisions reached using AI. 
  • Everyone should have access to an effective redress mechanism when AI systems make unlawful or unfair decisions that affect them. A specialist regulator is needed for this. 
  • AI that threatens people’s fundamental rights, such as the rights to privacy and non-discrimination, should be prohibited.

AI features in many Automated Decision Making (ADM) systems that authorities use to make life-affecting decisions, such as determining exam results, allocating council housing, investigating benefit fraud, identifying children at risk of coming into care, and deciding who is entered onto police watch lists.

Shameem Ahmad added:

“Use of AI by authorities offers the promise of greater efficiency and accuracy that could be a boost to public finances and improve the quality of public services.  

“But with power comes responsibility. As the rest of the world moves towards more robust protections against the harms posed by AI, the AI White Paper and the Data Protection and Digital Information (No. 2) Bill represent a failure by Government to seriously engage with the risks.

“That is why 30 civil society organisations are urging the Government to adopt the principles set out in the Alternative AI Whitepaper. These principles represent the minimum required for people affected by AI – which is all of us – to feel the benefits of this technology whilst being protected from the risks.” 

Organisations and individuals who have signed the statement include:

Big Brother Watch

Just Algorithms Action Group (JAAG) 

Work Rights Centre

Migrants’ Rights Network

Helen Mountfield KC 

Monika Sobiecki, Partner, Bindmans LLP 

Dr Derya Ozkul, Senior Research Fellow, Refugee Studies Centre, University of Oxford 

Child Poverty Action Group 

Open Rights Group 

Louise Hooper, Garden Court Chambers 

Dr Oliver Butler, Assistant Professor in Law, University of Nottingham 

Birgit Schippers, University of Strathclyde 

Connected by Data 

Professor Joe Tomlinson, University of York 

Welsh Refugee Council 

Association of Visitors to Immigration Detainees (AVID)

Dave Weaver, Chair of Operation Black Vote

Lee Jasper, Blaksox 

Sampson Low, Head of Policy, UNISON 

Shoaib M Khan 

Tom Brake, Director, Unlock Democracy 

Asylum Link Merseyside 

Fair Trials 

Clare Moody, Co-CEO, Equally Ours 

Isobel Ingham-Barrow, CEO, Community Policy Forum 

Jim Fitzgerald, Director, Equal Rights Trust