The Public Accounts Committee’s recent report referred to evidence from PLP in reaching its conclusion that the DWP “has not yet done enough to understand the impact of machine learning.”

In response to the PAC report, the Department for Work and Pensions has committed to publishing analysis of bias in its automated systems.

In PLP’s view, this is a step in the right direction, but more must be done to alleviate concerns about systems’ fairness and transparency. 

What has come out of the PAC report and the DWP’s response?

Discrimination in DWP algorithms 

As PLP’s evidence highlighted, one automated system used to detect fraud in Universal Credit advances claims discriminated against older people, people with disabilities and people of certain nationalities.

When the Committee raised our concern that the DWP was rolling out similar models without concluding that they were fair, the DWP admitted this pilot algorithm “did not work very well at first” and needed to be tested on a small group before the wider rollout. The DWP has now promised to use this approach with new algorithms to ensure accuracy. 

PLP agrees that the DWP should take appropriate precautions but remains concerned that it has refused to explain how it would reach a conclusion that its systems are non-discriminatory. 

Systemic lack of transparency 

In response to our requests for information about these automated systems, the DWP has been systematically opaque. 

Since the Committee flagged this lack of transparency as a concern, the DWP has committed to publishing its ‘fairness analyses’ of its automated systems in its Report and Accounts 2023-24. 

However, we still have concerns:  

  • The Department will presumably not provide a view on whether the model is fair until next year, even though claimants are already being referred for investigation on the basis of this model 
  • Nor will this step alone adequately address PLP’s concerns about the potential for bias in such systems, given the limitations of the data the DWP currently collects in relation to certain protected characteristics 
  • The DWP “told [the Committee that] it did not want to provide any further detail on how it will prevent unfair impacts.” 

PLP is grateful to the Committee for raising our concerns about the DWP’s use of automated systems. 

We welcome, as a first step, the commitments made by the DWP in response; however, we regret that the DWP has not been more transparent now, opting instead to wait until next year to explain whether its systems are non-discriminatory. 

Read PLP’s full evidence here.