Government should be under a legal duty to be upfront about when it uses AI to make decisions that affect people’s lives – such as in education, health or welfare – according to a group of over 30 civil society organisations, academics, legal professionals, think tanks and unions. 

Read the joint letter here

As Government uses AI and algorithms to make more and more decisions that affect people’s daily lives, from A-levels to Universal Credit, the group has asked the Secretary of State for Science, Innovation and Technology, Michelle Donelan, to amend the Data Protection and Digital Information Bill currently in the House of Lords.

A change in the law is urgently needed if Government wants to build public trust in how it uses technology and avoid catastrophes such as the Australian Robodebt and Dutch welfare scandals, which harmed tens of thousands of people.

Shameem Ahmad, CEO of the Public Law Project, said:

“Any decision-making system, AI or human, will at some point go wrong and treat someone unfairly, whether that affects their A-level results, the time they must wait for an organ transplant or their access to vital welfare payments. The difference at the moment is that public bodies can get away with using automated tools in the shadows. Without transparency, there can be no accountability or trust in the systems that govern us.

“If Government is serious about using this technology for the good of everyone, it must urgently make sure public sector AI use is transparent. Knowing how automated decisions are being made is the first step in seeking redress and putting things right if they go wrong. 

“AI has great potential, but scandals in Australia and the Netherlands show that these systems can make incorrect decisions at speed, inflicting great harm on a huge number of individuals. Given the stakes, Government must take this opportunity and act now.”

The letter says: ‘The speed and volume of decision-making that new technologies will deliver is unprecedented. Their introduction creates the potential for decisions to be made more efficiently and at lower costs. However, if the use of these systems is opaque, they cannot be properly scrutinised and those operating them cannot be held accountable.’ 

How transparent is Government about how it uses AI? 

  • Public bodies can voluntarily publish information on how they use algorithms under the Algorithmic Transparency Recording Standard (ATRS), which was created in 2021.
  • Since its inception, only seven transparency reports have been released.
  • Many of the key government departments using tools that fall within the scope of the ATRS, such as the Home Office and Department for Work and Pensions, have never submitted a report. 
  • The Tracking Automated Government (TAG) register, launched by the Public Law Project in 2023, currently lists 55 automated tools used by public authorities. The register was not built by government; it was pieced together from information hard-won through investigations by journalists, civil society organisations and academics.

Recent changes to the ATRS don’t go far enough

In its response to the AI regulation White Paper consultation, the Government announced that the ATRS will become a ‘requirement’ for government departments using algorithmic and automated tools. However, this requirement will not be on a statutory footing; it will sit in guidance only.

The letter to Secretary of State Michelle Donelan says an explicit legal obligation is needed: 

‘Such a duty is proportionate to the nature and impact of the risk posed by the widespread and fast-growing use of AI and algorithmic tools and will ensure that public authorities can be held accountable for failure to comply with the duty.’ 

Read the joint letter here