The Department for Work and Pensions’ 2021-22 accounts, published last week, revealed that the department has been trialling a machine learning algorithm to detect fraud in Universal Credit claims and expects soon to use the model to stop payments before they are even made.

The algorithm analyses historical data to predict which future cases are likely to be fraudulent, without being explicitly programmed by a human.

This information was confirmed in the National Audit Office report published on Thursday 7 July (paragraphs 48 & 49).

Whilst there were already indications that the DWP has been using automated systems to assess benefit entitlement and/or to flag cases for investigation, PLP believes this is the first confirmation that the department is using machine learning to detect fraud in Universal Credit.

Ariane Adam, Legal Director of the Public Law Project, said:

“Despite many requests under the Freedom of Information Act, the DWP has previously refused to provide details about its use of automation to assess Universal Credit applications. This lack of transparency is very problematic. 

“Without transparency there can be no evaluation, and without evaluation it is not possible to tell if a system works reliably, lawfully or fairly.

“Discriminatory impact is a massive risk. Using algorithms fed by historic big data to make decisions on welfare benefit claims carries a danger of unfairly penalising and discriminating against marginalised or vulnerable groups. This could be, for example, because the historic data may be inaccurate or because it may be tainted by human bias that will be exacerbated by the machine. 

“In the midst of a cost-of-living crisis, people could have benefits stopped before they are even paid out because a computer algorithm said ‘no’.”

The NAO report notes that the DWP intends to monitor the model for unintended bias and is aware that if groups with protected characteristics are disproportionately impacted, the model could obstruct fair access to benefits.  

Ariane Adam said:  

“Departments across Government need to commit to a great deal more than just being ‘aware’ of the risks. We need a clear commitment that all Government departments will be transparent about how they use algorithms.

“The presumption should be that detailed information about how automated decision-making tools work is made available and any data and analysis gathered from trials is published without charities having to make endless FOIA requests. Any exemptions to this presumption must be justified by Government and be necessary and proportionate. Exemption should not be the default.”