As the public inquiry into the Horizon IT system’s failings unfolded, the most senior civil servant at the Department for Work and Pensions (DWP) told MPs that he “really hopes” the department’s growing use of machine learning will not replicate the Post Office scandal.

If the UK Government wants to prevent a widespread miscarriage of justice caused by its use of AI and automation, it will take more than hope.

Here are five lessons the Government should learn to avoid another such failure.

1. Humans must not over-rely on machines  

The Horizon system was not working properly, but it was still treated as if it were infallible, even though Post Office investigators accepted that it had flaws.

The DWP currently uses machine learning to flag claimants who may be attempting to commit benefit fraud. The department has already admitted that there is an inherent risk of bias in how it selects claims for review – and that its ability to monitor the bias of these models is ‘limited’ because of the lack of data it collects on protected characteristics.

If that algorithm is indeed biased, innocent people could be wrongly flagged for investigation due to the DWP’s trust in a flawed system. 

Lesson 1 – Clear processes are needed to ensure that public sector use of AI takes proper account of the associated risks – including ensuring that public bodies have the data they need to evaluate (and prevent) potential harm and to respond quickly if injustices occur.
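To make that concrete, here is a minimal, purely illustrative sketch (not the DWP’s actual system, whose workings are not public; the data and column names are hypothetical) of the kind of disparity check a department can only run if it has actually collected the relevant protected-characteristic data:

```python
import pandas as pd

def flag_rate_by_group(reviews: pd.DataFrame, group_col: str) -> pd.Series:
    """Share of claims flagged for review, broken down by a protected characteristic.

    This check is only possible if `group_col` was actually recorded –
    which is exactly the data the DWP says it lacks.
    """
    if group_col not in reviews.columns:
        raise ValueError(f"Cannot monitor bias: no '{group_col}' data was collected.")
    return reviews.groupby(group_col)["flagged"].mean()

# Hypothetical example data: each row is a claim, with the model's flag decision
# and (if collected) the characteristic being checked.
claims = pd.DataFrame({
    "flagged": [True, False, True, True, False, False],
    "nationality": ["A", "A", "B", "B", "B", "A"],
})

print(flag_rate_by_group(claims, "nationality"))
# nationality
# A    0.333333
# B    0.666667
# A large gap between groups would warrant investigation before anyone is
# referred for a fraud review on the strength of the model alone.
```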

2. Humans need to understand the machines  

During the inquiry, one Post Office investigator said he wasn’t “technically minded” enough to question the software. 

There needs to be human involvement in, and understanding of, computer-generated outputs and decisions (such as those produced by the DWP fraud system) to check that the results are accurate and fair.  

However, that human involvement also needs to be meaningful. When people rely on machines to make important decisions, simply having a human in the loop may not be enough to prevent miscarriages of justice.

Lesson 2 – Government officials need to understand their own systems: how they work and how they might influence their decision-making.
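As a hedged illustration of the difference between rubber-stamping and meaningful review (a sketch only, with hypothetical names – not how any department’s process actually works), a referral might only proceed when the human reviewer records their own grounds, not just the model’s score:

```python
from dataclasses import dataclass

@dataclass
class Referral:
    claim_id: str
    model_score: float    # the algorithm's fraud-risk score (hypothetical)
    reviewer_reason: str  # the human reviewer's own grounds for proceeding

def approve_investigation(referral: Referral, threshold: float = 0.8) -> bool:
    """Sketch of a 'meaningful' human-in-the-loop gate (illustrative only)."""
    # The model score alone is never sufficient to open an investigation.
    if referral.model_score < threshold:
        return False
    # A blank reason means the human has merely rubber-stamped the machine,
    # so the referral is sent back rather than approved.
    if not referral.reviewer_reason.strip():
        return False
    return True

# A high score with no independent human reasoning does not proceed.
print(approve_investigation(Referral("claim-123", 0.92, "")))  # False
print(approve_investigation(Referral("claim-123", 0.92, "Documents conflict with bank records")))  # True
```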

3. Systems must be transparent   

The Post Office was told about weaknesses in the system, but the sub-postmasters themselves were not made aware. The Government, likewise, will not publicly explain the workings of many of its algorithms, despite admitting the risk of bias.

The Home Office has a tool to detect potential sham marriages, which flags a disproportionate number of people from Albania, Bangladesh, Pakistan, Greece, and Jamaica. But those people do not know what factors the self-learning algorithm has used to judge their marriage as suspicious, even though they are the ones affected by that decision.

Lesson 3 – All public authority use of algorithmic and automated decision-making should be transparent, and information about it should be made publicly accessible (such as through the submission of reports to the Algorithmic Transparency Recording Standard).

This should include a requirement for public authorities to publish Equality Impact Assessments. The Government should also introduce a statutory duty requiring public bodies to inform a person subject to a decision that an automated decision-making (ADM) tool has been used, and how it is being used.

4. Automated systems must be scrutinised independently  

Although the Post Office hired the independent forensic accountants Second Sight to look into the Horizon system, it dismissed the serious concerns their reports raised.

Lesson 4 – Independent, expert scrutiny is required. Any regulator needs to be adequately resourced and given the right tools to enforce the regulatory regime, including powers to proactively audit public ADM tools and their operation. 

5. People affected by automated decisions must be able to contest the outcomes  

The sub-postmasters trying to defend themselves from prosecution were not allowed access to the system and data that led to their convictions.

Similarly, nobody can contest decisions made with the help of AI or by automated systems if they do not know that an algorithm was involved, or if the authority has not been open about how the system works.

Lesson 5 – Accessible means of redress must be available to people subject to decisions made with the use of AI and automation. This requires transparency (see lesson 3), but also an effective regulator (see lesson 4) and access to legal aid where it is needed to pursue a judicial review challenge.

Conclusion  

We know the human impact of the Horizon scandal: bankruptcy, false convictions, and even suicide.  

The UK Government is expanding its use of AI and automation in an attempt to detect fraud, cut costs and improve accuracy.

But if we ignore the failures of the Horizon system itself and how the Post Office dealt with its mistakes, the Government could be on track to create its own widespread computer-based scandal.