If the UK Government wants public sector use of AI and automation to be underpinned by ‘trust and transparency’, it should learn lessons from the approaches taken elsewhere in the world and introduce key legal requirements, according to a new report from the Public Law Project (PLP). 

By comparing the reported effectiveness of transparency requirements in Canada, the USA, France, Japan, and the EU, ‘Securing meaningful transparency of public sector use of AI’ provides key recommendations for developing equivalent regulation in the UK.

The current UK framework lacks robust, legally enforceable transparency requirements for AI use in the public sector. The report, written by Mia Leslie with Caroline Selman and Fieldfisher, recommends that the UK address this gap by:

  • Introducing statutory requirements for public bodies to publish information on the Government’s Algorithmic Transparency Recording Standard (ATRS) Hub
  • Signposting the ATRS whenever people interact with, or are impacted by, an AI, algorithmic, or automated tool or system
  • Notifying individuals of the presence of automation in decision-making when they are affected by those decisions
  • Proactively providing explanations to individuals about how the systems work and how decisions are reached.

PLP’s Caroline Selman said:

“Public bodies are increasingly using automation, including AI, to make or inform decisions about our rights and entitlements which would previously have been made by humans. At PLP, we are aware of 75 algorithms but only nine are listed on the Government’s Algorithmic Transparency Recording Standard (ATRS), which is supposed to be mandatory for government departments.

“If the Government wants to avoid a widespread miscarriage of justice like the Horizon scandal, we need systemic and individual transparency. Anyone should be able to find out what systems are being used right now through publicly available information. Individuals deserve to be directly told when an algorithm is being used to make decisions that can significantly affect their lives.”

In recent years, shadowy algorithms have been used to help make key decisions across areas like health, education, immigration, welfare benefits, policing, and prisons. For example, individuals may be flagged for investigation by the Department for Work and Pensions (DWP) because an algorithm judged them likely to commit benefit fraud.

The Government has announced an upcoming AI Bill that will seek to regulate those developing the most powerful artificial intelligence models. But focusing solely on frontier AI development, rather than on public sector use, would be a missed opportunity to improve the status quo and increase Government transparency and fairness, warns PLP.

Caroline Selman added:

“The new Government has an opportunity to place trust and transparency at the heart of its approach to public sector algorithms from the outset. We encourage them not to take a ‘hurt first, fix later’ approach, like the previous Government, but instead to demonstrate leadership by getting regulation right the first time around.”

Read the report