Automating or ameliorating inequality? How algorithms and artificial intelligence present Labour with new challenges on welfare

Rory Weal | Renewal 31/3

Summary: We are witnessing the rise of ‘algorithmic residualism’. Welfare provision is becoming increasingly threadbare, and almost unthinkingly reliant on Artificial Intelligence and algorithms to administer the policy logic of a residualised welfare state. If Labour comes to power without a serious policy agenda underpinned by a moral mission and vision for the welfare state, the logic of algorithmic residualism risks taking deeper root over the coming years.

Artificial intelligence and algorithms are reshaping our societies in the most fundamental of ways, from the functioning of our economy to how we relate to one another. Certain algorithms exert their influence before our eyes – sometimes quite literally, on our news feeds. But there are other arenas where the existence and impact of algorithms and AI are obscure, and where public scrutiny is limited.

Welfare and social security is one such area. Here the AI revolution may prove just as influential as any welfare policy commitment in party manifestos ahead of next year’s general election. We are witnessing a creeping operational reliance on AI and algorithms to determine eligibility for and access to social security, in the UK and across the developed world. This is having profound consequences for the relationship between citizen and state and for the principle of equity in the welfare state.

If Labour forms the next government, the party will be taking over a social security system that has become increasingly threadbare and residualised. Addressing the welfare state’s increasingly unthinking reliance on AI and algorithms to administer this policy logic – what this article terms ‘algorithmic residualism’ – will be integral to any efforts to build a more compassionate system for the millions of people who turn to social security for ongoing support or to weather crises each year.

At their most basic, AI and algorithms offer welfare states sophisticated statistical modelling of the probability of how a population will behave – there is little inherently wrong with that; indeed there may be utility in it. But when put in the service of policies which overzealously focus on weeding out claimants and reducing costs over and above providing effective support, the risks are profound – and it is this road that we appear to be driving down. Evidence from across the world shows us that, without course correction, this logic of ‘algorithmic residualism’ threatens to bake in many of the policy trends towards a permanently shrunken welfare state.

This presents profound challenges, as well as opportunities, for a Labour Party that has thus far avoided substantial commitments on welfare. Without a serious policy agenda underpinned by a moral mission and vision for the welfare state, the logic of algorithmic residualism risks taking deeper root over the coming years.

The quiet rise of algorithms in the welfare state

While algorithms are increasingly being used to make decisions about welfare allocations in the UK, public and parliamentary scrutiny has been limited. It has taken two investigations by the Guardian to reveal the extent of their usage in local and national government. In 2020 one investigation found that almost half of councils were using algorithms to make welfare allocation decisions, from benefits to council housing.[i] In July this year the Guardian similarly revealed that the Department for Work and Pensions (DWP) has recently scaled up its use of machine learning and advanced analytics to assess Universal Credit applications, as part of its efforts to reduce money lost to fraud and error.[ii] This lack of transparency is why the UN Special Rapporteur on extreme poverty and human rights has warned that countries like the UK risk ‘stumbling zombie-like into a digital welfare dystopia’.[iii]

For critics like the UN Special Rapporteur, the dystopian quality of algorithmic residualism rests in how its predictive capacities can be deployed to surveil and punish people who have committed no wrongdoing. Even when a human makes the final decision, alarming cases across the world demonstrate the ease with which algorithms can associate protected characteristics such as disability or race with non-compliance, subjecting marginalised groups disproportionately to invasive investigations or benefit suspensions with little reasonable recourse.

There are already some signs of this discriminatory potential in the UK. In February 2022, a coalition of disabled activists from the Greater Manchester Coalition of Disabled People wrote a legal letter to the DWP to register their shared suspicion that algorithms were ‘over-picking’ and unfairly targeting disabled people with baseless accusations and investigations for fraud.[iv] The coalition argued that the stress this creates can be ‘debilitating’, and it has been calling for transparency over how the algorithm works. The DWP has so far refused to provide this, arguing it would make it easier for people to game the algorithm. While studies have been limited – not helped by the deep lack of transparency – these are clearly causes for concern.

To understand more about the path we are heading down, we need to situate algorithmic residualism in its global context. Algorithmic allocation systems have been developed by a powerful global ‘govtech’ industry worth $440 billion, featuring giant consultancies such as PwC, Accenture and Capgemini.[v] The products this burgeoning private industry offers are increasingly alluring to under-funded and under-capacity welfare states facing immense short-term pressure to cut costs. Many countries have moved further and faster in their embrace of these technologies – and their experiences show us what may soon be at stake in the UK.

Algorithmic residualism in international perspective

When a new fraud detection system was introduced in Michigan, United States, in 2014, there was little fanfare – as you might expect from what was largely an operational adjustment to how cases of welfare fraud were identified and punished. The new system – the Michigan Integrated Data Automated System, or MiDAS – was established both to detect fraud and to automate the process by which people were charged and fined for such misdemeanours. Michigan was just one of many states which, struggling with budgetary pressures and staff shortages, made the calculation that improved fraud detection and a supposedly objective, unbiased approach to sanctions could only be a good thing for recipients and state coffers alike.

But the new approach ended in disaster. Some 48,000 fraud accusations were made in error against recipients of unemployment insurance in the state, five times more than under the previous system.[vi] The automated application of repayment, interest and civil penalties charged claimants four times the amount they had supposedly fraudulently claimed. When a review was undertaken, it found that 93% of all the system’s fraud determinations had been wrong.

This was a profound programming failure: the system coded as fraud any case where information provided by claimants did not match other government and employer records. Disastrously, it failed to distinguish between intentional fraud and simple human error. The messages that claimants received were also designed in a way which led people inadvertently to admit to having committed fraud. Years of legal battles have followed, and the state has been forced to compensate those affected.[vii] But in other cases from across the world the failings are more subtle, and cannot be reduced to so clear-cut a human programming error. This is particularly troubling in settings where machine learning and ‘black boxes’ identify relationships in ways that humans struggle to disentangle, and which therefore lack the basic accountability societies typically expect of policymaking.
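To make the design flaw concrete, the sketch below shows, in purely illustrative Python, the kind of rule the public reporting on MiDAS suggests: any mismatch between records is treated as fraud, with no concept of honest error. The field names, the tolerance threshold and the ‘fairer triage’ alternative are assumptions for illustration, not the actual MiDAS code, which has never been published.

```python
# Illustrative sketch only: a naive rule that codes any record mismatch as
# fraud, contrasted with a minimal triage step. Names and thresholds are
# hypothetical, not the real MiDAS implementation.

from dataclasses import dataclass


@dataclass
class Claim:
    claimant_id: str
    reported_earnings: float   # what the claimant declared
    employer_earnings: float   # what the employer's records show


def naive_fraud_flag(claim: Claim) -> bool:
    """Flag any discrepancy at all as fraud -- the core design error."""
    return claim.reported_earnings != claim.employer_earnings


def fairer_triage(claim: Claim, tolerance: float = 50.0) -> str:
    """Treat small mismatches as likely data-entry error and route larger
    ones to a human investigator rather than an automatic penalty."""
    gap = abs(claim.reported_earnings - claim.employer_earnings)
    if gap == 0:
        return "consistent"
    if gap <= tolerance:
        return "probable data-entry error"
    return "refer to human caseworker"


if __name__ == "__main__":
    claim = Claim("C-001", reported_earnings=412.0, employer_earnings=410.0)
    print(naive_fraud_flag(claim))   # True: a $2 typo is coded as 'fraud'
    print(fairer_triage(claim))      # 'probable data-entry error'
```

The point of the contrast is that even a trivial triage step – routing discrepancies to a human rather than an automatic penalty – encodes a different policy judgement about where the burden of doubt should fall.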

The perils associated with black-box algorithms in social welfare settings were typified by a recent case in the Netherlands, where a new fraud detection system was introduced by the Dutch Ministry of Social Affairs and Employment in 2014. The system, called ‘SyRI’, was intended to identify and respond to fraud in a range of settings, including taxes and social security. The algorithm sought to identify relationships between risk indicators – including things like residence and education level – and the likelihood of fraud; these relationships were neither known in advance nor intentionally selected by the programmers. On this basis it produced risk reports on individuals which could then be investigated by the authorities. But the individual had no knowledge of these reports, could not appeal them, and could gain no insight into how they had been determined.
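The structural problem can be illustrated with a deliberately simplified, synthetic sketch – invented data and feature names, not the real SyRI model, whose internals were never disclosed. Any off-the-shelf learner trained on past enforcement data and proxy features such as neighbourhood deprivation or years of education will reproduce the patterns of past investigation, and its individual risk scores give the person concerned nothing to contest.

```python
# Synthetic, illustrative example of a SyRI-style risk model. The features,
# labels and numbers are invented; the point is structural, not empirical.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5_000

# Proxy features: neighbourhood deprivation index and years of education.
deprivation = rng.uniform(0, 1, n)
education_years = rng.integers(8, 20, n)

# Historical 'fraud found' labels that partly reflect where investigators
# looked in the past, not where fraud actually was -- the feedback problem.
investigated = rng.uniform(0, 1, n) < (0.1 + 0.4 * deprivation)
label = investigated & (rng.uniform(0, 1, n) < 0.3)

X = np.column_stack([deprivation, education_years])
model = GradientBoostingClassifier().fit(X, label)

# A 'risk report' for one resident: a single opaque number, with no account
# of why it is high and no route for the person to contest it.
resident = np.array([[0.9, 10]])   # deprived area, fewer years of schooling
print(model.predict_proba(resident)[0, 1])
```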

Even though there was still a degree of human oversight, unlike in the fully automated Michigan case, the results were dire. Punitive penalties were applied on the basis of these risk reports, pushing tens of thousands of families – disproportionately belonging to ethnic minorities – into poverty.[viii] Many of these judgements were later revealed to have been made in error. Summarising the harms, Aleid Wolfsen, the head of the Dutch privacy authority, said:

“For over 6 years, people were often wrongly labeled as fraudsters, with dire consequences … some did not receive a payment arrangement or you were not eligible for debt restructuring. The tax authorities have turned lives upside down”.

It took years for these harms to be brought to light, when a case was taken to the Hague alleging human rights violations. In 2020 the Court ruled that the practice contravened the right to private life in a way that was unnecessary and disproportionate.[ix] The verdict argued that the system was not transparent to those it targeted, and that this was damaging because of the risk of discrimination inherent in the model. This has rightly been heralded as a major intervention in the regulation of welfare algorithms – but the verdict presents a double-edged sword. The very lack of transparency the Court identified is what makes it so hard to bring other cases against systems of algorithmic residualism. There was no conscious discrimination: the ‘black box’ algorithm independently identified relationships which arose from existing inequalities. Finding evidence of individual privacy violations is like finding a needle in a haystack in the opaque world of algorithmic residualism. Indeed, privacy frameworks may no longer be fit for purpose when it comes to assessing and regulating against these harms.

A self-fulfilling prophecy: algorithms and the automation of inequality

The cases of Michigan and the Netherlands highlight some of the inherent risks associated with the unconstrained use of algorithms in welfare settings. They demonstrate the tendency towards profiling and targeting based on traits and characteristics, depriving individuals of public goods they would otherwise be accorded. This in turn is difficult to regulate: because black boxes often cannot translate their reasoning into human terms, they are hard to hold to account.

There is a substantial literature which explains why the negative impact of algorithmic residualism is downstream of wider societal inequalities. In their exploration of ‘big data’s disparate impact’, Barocas and Selbst argued that ‘in certain cases, data mining will make it simply impossible to rectify discriminatory results without engaging with the question of what level of substantive inequality is proper or acceptable in a given context’.[x] If algorithms discriminate against people with disabilities, it is because the system makes itself highly burdensome to this group, for example through stressful, time-consuming applications and onerous assessments. In this context, disabled people may not engage as the system expects – but instead of according them support, behaviours coded as non-engagement risk further entrenching their existing exclusion. In this way, algorithms are not themselves the core problem: they exacerbate existing inequities, driven by the policy agenda they are deployed to serve.

Where algorithms do pose an independent risk is in the false veneer of objectivity and fairness they offer governments. This is in turn hard to challenge in public policy because, as the authors argue, ‘what seems to be operational concerns’ are in fact ‘normative judgments in disguise, about which there is not likely to be consensus’. This was typified in the Michigan case: the way in which ‘fraud’ was coded and labelled went so dramatically beyond any common-sense understanding of fraud, yet because of the apparent objectivity of the system no one thought to dig deeper into what was happening – and the result was disastrous.

Virginia Eubanks describes these processes as ‘automating inequality’.[xi] The social function of algorithms tasked with identifying difference and nonconformity, for the purposes of depriving individuals of support or exposing them to heightened levels of surveillance, intrinsically runs against a meaningful commitment to reducing inequalities through social policy. Moreover, in predicting non-compliance with onerous standards, the very existence of that non-compliance is made more likely, as the groups deemed less compliant see support reduced or removed. In this sense algorithmic residualism operates as a self-fulfilling prophecy: instead of neutrally predicting the future, it brings its own biased predictions into being. The consequences can be appalling, as the cases from Michigan and the Netherlands show all too clearly.
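A toy simulation makes the feedback loop explicit. The thresholds and numbers below are invented purely to show the dynamic, not to model any real system: a single erroneous flag pushes one group over a risk threshold, support is withdrawn, the withdrawal produces more of what the system codes as ‘non-compliance’, and the score ratchets upwards while an otherwise identical group is left untouched.

```python
# Toy feedback-loop simulation. All numbers are invented to illustrate the
# dynamic described in the text, not to model any real welfare system.

risk_score = {"flagged_group": 0.40, "other_group": 0.30}  # one group wrongly flagged once
support = {"flagged_group": 1.0, "other_group": 1.0}       # both start with full support

for _ in range(5):
    for group in risk_score:
        # Policy rule: groups above the risk threshold have support withdrawn.
        if risk_score[group] > 0.35:
            support[group] = max(0.0, support[group] - 0.2)
        # Less support produces more coded 'non-compliance' (missed appointments,
        # incomplete forms), which becomes the next round's risk score.
        risk_score[group] = 0.3 + 0.4 * (1.0 - support[group])

for group in risk_score:
    print(group, "risk:", round(risk_score[group], 2), "support:", round(support[group], 2))
# flagged_group ends with a high score and no support; other_group is unchanged.
```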

The novel challenge of the ‘objective’ predictive turn

There is nothing unique in the reliance on a supposedly objective, value-free ideal of data-led decision-making. Similarly, residualism has for much of our modern history exerted a powerful grip on the politics and bureaucracy of the welfare state. Neither of these features is new. It is the predictive capacity of algorithmic residualism that marks a novel and concerning juncture. This has significant consequences for individual agency in relation to the state, which go beyond the bread-and-butter concerns of traditional welfare debates.

Sam Sellar and Greg Thompson have attempted to characterise the individual in this context as shifting from ‘becoming a statistic’ to ‘the becoming-statistic’.[xii] They argue that whereas the old world ‘pulls a modelled past (as comparison) into the present as a way to shape the future…[today] the mechanism of prediction works in the other direction by pulling a modelled future into the present’. This modelled future is a pessimistic one, which seeks out the potential for duplicity, deception and fraud, and profiles people according to these risks. In doing so, it forecloses and denies support to people in need based on often tenuous associations of characteristics rather than on wrongdoing. The allocative principles of the welfare state become even more arbitrary and punitive. Whereas the policymakers of the old world may have been at risk of flattening and simplifying complex lives into units of data, this could still be done in service of a moral vision of the good society and a new world. Today the new world is forecast before it can be imagined, and that forecast is informed by a heavy bias towards the retention of existing inequalities.

Today, the availability of data on a grand scale, and the power to analyse it, makes it easier to believe that we can dodge difficult moral and distributive questions under the guise of rational or objective administration. This belief leads us into a position where decisions that are at times matters of life and death are downplayed as mere operational functions. Case studies from the US, the Netherlands and many other states should dash any naivety or boosterism about the adoption of algorithms in welfare states, and call into question the false veneer of objectivity which surrounds their usage.

The upshot of this and its implication for Labour is that automated welfare systems cannot exist independently of moral and political choices about whose interests our welfare system should serve. In fact, the spread of algorithmic residualism makes the imperative for strong moral purpose in our welfare state more important than ever.

The case for optimism – using data as servant of an anti-poverty mission

The deployment of algorithms to determine eligibility for and allocation of benefits is driving an unsettling reshaping of the relationship between citizen and state in many countries across the world. In opening the door to new forms of discrimination and surveillance so off-putting and invasive that only those facing extreme hardship could countenance a claim, the use of algorithms to allocate social welfare could represent the final eclipse of the institutional welfare state by the residual welfare state.

But while being alive to the risks, we must not be overly alarmist or fatalistic. Key actors are turning their backs on the algorithmic residualist approach and finding new ways to use data in the service of ending poverty and tackling inequality. In England there are already signs that many local authorities, after brief experimentation, have simply dropped the use of algorithms in welfare allocations.[xiii]

Others are finding ways to use data to identify where support could be most effectively directed, instead of modelling a world which forecasts non-compliance in order to deprive people of support. For example, New York City’s ‘poverty tracker’ uses longitudinal data and a broad definition of material hardship to identify the dynamics of economic disadvantage in a particular locale and to ensure policy responds more effectively.[xiv] It has already widened the conversation about who is at risk of poverty and broadened policy horizons for who may require support – the opposite of the effect produced by the algorithms discussed here. This model is already being rolled out to other cities in the US and could be explored in the UK too.

The threat posed by algorithmic residualism should likewise hasten the bigger conversation about how we construct a welfare state around the principles of meaningful support, a genuine income floor, and tailored and integrated support for people to become full participants in society and economy alike. Ultimately this is the kind of policy agenda we should be aspiring to if we are to ensure welfare states use data as a servant, not a master, of their social goals.

Rory Weal is a Churchill Fellow and Thouron Scholar writing on inequality.
He has worked for several years for charities tackling poverty and homelessness.


[i] Sarah Marsh and Niamh McIntyre, ‘Nearly half of councils in Great Britain use algorithms to help make claims decisions’, The Guardian, 28 October 2020, https://www.theguardian.com/society/2020/oct/28/nearly-half-of-councils-in-great-britain-use-algorithms-to-help-make-claims-decisions

[ii] Robert Booth, ‘AI use widened to assess universal credit applications and tackle fraud’, The Guardian, 11 July 2023, https://www.theguardian.com/society/2023/jul/11/use-of-artificial-intelligence-widened-to-assess-universal-credit-applications-and-tackle

[iii] United Nations Human Rights Office of the High Commissioner, ‘A/74/493: Digital welfare states and human rights – Report of the Special Rapporteur on extreme poverty and human rights’, 11 October 2019, https://www.ohchr.org/en/documents/thematic-reports/a74493-digital-welfare-states-and-human-rights-report-special-rapporteur

[iv] John Pring, ‘Legal letter asks DWP for information on ‘discriminatory’ secret algorithm’, Disability News Service, 17 February 2022, https://www.disabilitynewsservice.com/legal-letter-asks-dwp-for-information-on-discriminatory-secret-algorithm/

[v] Morgan Meaker, ‘The Fraud-Detection Business Has a Dirty Secret’, Wired UK, 7 March 2023, https://www.wired.co.uk/article/welfare-fraud-industry

[vi] Michele Gilman, ‘AI algorithms intended to root out welfare fraud often end up punishing the poor instead’, The Conversation, 14 February 2020, https://theconversation.com/ai-algorithms-intended-to-root-out-welfare-fraud-often-end-up-punishing-the-poor-instead-131625

[vii] Ed White, ‘Thousands of unemployed in Michigan wrongly accused of fraud can seek cash from state’, Associated Press, retrieved 22 April 2023, https://www.freep.com/story/news/local/michigan/2022/07/26/unemployed-wrongly-accused-fraud-can-seek-cash-state/10157193002/

[viii] Melissa Heikkilä, ‘Dutch scandal serves as a warning for Europe over risks of using algorithms’, Politico, 29 March 2022, https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/

[ix] Adamantia Rachovitsa and Niclas Johann, ‘The Human Rights Implications of the Use of AI in the Digital Welfare State: Lessons Learned from the Dutch SyRI Case’, Human Rights Law Review, Vol 22, Issue 2, April 2022.

[x] Solon Barocas and Andrew D. Selbst, ‘Big Data’s Disparate Impact’, California Law Review, Vol. 104, No. 3, June 2016, pp. 671-732.

[xi] Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, New York, St Martin’s Press 2018.

[xii] Sam Sellar and Greg Thompson, ‘The Becoming-Statistic: Information Ontologies and Computerized Adaptive Testing in Education’, Cultural Studies ↔ Critical Methodologies, Vol 16, Issue 5, July 2016, pp. 491–501.

[xiii] Sarah Marsh and Niamh McIntyre, ‘Councils scrapping use of algorithms in benefit and welfare decisions’, The Guardian, 24 August 2020, https://www.theguardian.com/society/2020/aug/24/councils-scrapping-algorithms-benefit-welfare-decisions-concerns-bias

[xiv] Kathryn M. Neckerman, Irwin Garfinkel, Julien O. Teitler, Jane Waldfogel and Christopher Wimer, ‘Beyond Income Poverty: Measuring Disadvantage in Terms of Material Hardship and Health’, Academic Pediatrics, Vol 16, Issue 3, April 2016.