Eleanor Shearer and Matt Davies

Code Dependency: Big Tech, market power and why progressives need an AI strategy

17 May 2025

Volume 33, Issue 1

Labour has promised to make Britain an ‘AI superpower’, but its approach risks reinforcing tech monopolies and sidelining public benefit. AI policy needs to focus on democratised access, independent research, and ownership structures that align new technology with the public good.


During its first months in office, Labour’s rhetoric on artificial intelligence (AI) has been a peculiar mix of the bullish and the deferential. On the one hand, the AI Opportunities Action Plan, launched in January 2025, presented AI as critical to the delivery of the government’s missions, especially on growth, and made explicit the ambition for more ‘homegrown’ AI companies to challenge the dominance of Silicon Valley.1 On the other, Peter Kyle warned early in his tenure as technology secretary that the government must adopt ‘a sense of humility’ in its dealings with tech giants like Meta, Google and Microsoft.2

Humility, of course, can be a virtue. Few would claim certainty when it comes to the future of AI, a technology touted by its advocates as potentially world-changing, and increasingly framed as ‘a critical strategic technology for the geopolitical and economic ambitions of nation-states’.3 In the US, the interests of the largest tech firms seem increasingly intertwined with the agenda of a Trump presidency eager to double down on Biden’s ‘national security synthesis’ – shorn of its ‘green’ and progressive bona fides – and eliminate any threat to American ‘innovation’.4 Against this backdrop, humility risks shading into deference.

But to whom is the government bending the knee – and what is at stake? One challenge here is that social democrats and progressives have not yet formed a coherent account of the political economy of AI – the conditions under which these systems are produced and deployed, and their benefits distributed. In particular, while the AI Opportunities Action Plan speaks to the idea that the state has a critical role to play in shaping new technologies, it remains either blind or resigned to the extraordinary market power wielded by just a handful of firms in the AI sector. 

AI is not the only policy area where Labour are pursuing an uneasy marriage between an activist state and private capital: Starmer and Reeves often speak of the necessity of ‘partnership’ to deliver their industrial policy aims. But a partnership that offers up the levers of state simply to grease the wheels of profit for BlackRock or Microsoft is not an equal or desirable one. What is at stake, in Labour’s AI strategy as much as in its overall governing philosophy, is whether the government can wake up to the reality that there is only so far it can go towards inclusive growth without a deeper reordering of market logics and coordination.

Understanding the political economy of AI

‘AI’ is not one thing. It has no commonly accepted scientific definition and is primarily a marketing term referring to a wide range of computational techniques like machine learning, natural language processing, and deep learning. What unifies these technologies is that they are all considered capable of performing tasks that would traditionally require human intelligence. As such, our public imaginary, when it comes to AI, has been dominated by the frame of ‘automation’: of intelligent machines ushering in either the dystopia of mass unemployment, the utopia of a life free of drudgery, or something in between.

In recent years, frontier AI research – including that behind OpenAI’s GPT-4 family of models, which underpins ChatGPT – has been dominated by a particular machine learning technique, ‘deep learning’. Deep learning uses vast amounts of data and computational power to build models with generalised capabilities and deliver useful outputs from them.5

The resource-intensive nature of deep learning means that the current generation of advanced AI models cannot exist without the infrastructure of the large incumbent tech firms, upon which even supposedly ‘open’ AI projects are closely dependent.6 The result is an extraordinary concentration of power in a small number of firms, whose monopoly over critical infrastructure and access to talent is complemented by their ability to set the agenda for AI research – through, among other tools, venture capital funding that fosters often-predatory relationships with new startups, and the sponsorship of academic partnerships and conferences.7 The combination of these factors gives the Big Tech firms significant control over our technological future.8

Credible commentators have expressed scepticism that this future will ever materialise, arguing that the very prospect of it is in large part an ideological ploy to buy time for the industry to secure opportunities for rent-seeking.9 Generative AI is not – yet – a profitable business, with no less than Goldman Sachs’ Head of Global Equity Research having pointed out that ‘eighteen months after the introduction of generative AI to the world, not one truly transformative – let alone cost-effective – application has been found’.10 The Tony Blair Institute’s recent report purporting to show that rolling out AI across the public sector could produce £40bn in cost savings through productivity gains was ridiculed for using ChatGPT to categorise existing public sector tasks, predict whether they could be performed by AI, and calculate the supposed savings on a like-for-like basis.11 The historic sell-off in tech stocks triggered by the launch of the Chinese company DeepSeek’s R1 model has been cast by some as the moment the generative AI bubble definitively burst – a ‘Minsky moment’ for an overinvested technology with few moats for incumbents.12

Despite all of this, it remains the case that seven of the top ten firms in the world by market capitalisation are, to varying degrees, ‘AI companies’.13 Capital spending on AI is forecast to hit almost a quarter of a trillion dollars this year, and private consortia are planning spending on US AI infrastructure that has been likened in scale to the Manhattan Project.14 In public, Big Tech firms continue to project confidence that their AI systems will – given enough time – generate significant returns.15 And many in the AI policy community remain sceptical of DeepSeek’s achievements and bullish about the prospects of the incumbent firms, arguing that investment in compute will continue to yield competitive advantages that may yet prove unassailable in the long term.16

So let’s entertain, for one moment, the possibility that the Big Tech vision of ubiquitous AI comes to pass. How are these technologies used, and how might they be? At core, they enable the collection and analysis of large amounts of data, allowing more fine-grained analysis of a broad range of natural and social phenomena. In the natural sciences, this has enabled significant breakthroughs such as Google DeepMind’s AlphaFold, although these – commendable – projects remain outliers in the industry, and some of the company’s other findings have been disputed.17 In employment contexts, AI increasingly takes the form of greater employer control over the tasks carried out by employees. And in the provision of services in both the public and private sectors, the rollout of AI has led to the increased automation of decision-making, which can bring benefits in terms of speed, efficiency and user experience, but also create significant problems in the form of bias and reduced accountability.

All of which is to say that while these technologies enable a form of automation, to take ‘automation’ as our primary frame is to miss the bigger picture of what AI is doing under the current paradigm. Job displacement is unlikely to come in one wave; it will instead be characterised by the greater fragmentation and compartmentalisation of work, and the upwards redistribution of control. Humans do, and will continue to, make key decisions, but the management structures introduced by AI will move those decisions around and make them less legible and tractable to the workers subject to greater algorithmic control.

Once we understand this emerging political economy, the stakes become clearer. A small number of Silicon Valley firms monopolise the critical infrastructure behind AI and set the direction of travel, with a vision for profiting from general purpose systems that aligns their power with that of large employers looking to cut costs through automation.18 In this sense, despite Big Tech’s efforts to present the AI revolution as uniquely unprecedented, there are familiar currents at work under the surface: corporate power, and efforts to redistribute control and agency upwards, away from democratic contestation – all of which shape the justification for government intervention.

Build British? Labour’s AI Opportunities Action Plan

Just weeks after their election victory, Labour announced a review into how AI could drive economic growth in the UK. The choice of Matt Clifford – an adviser instrumental in the founding of the AI Safety Institute (now the AI Security Institute) under Rishi Sunak – to lead it suggested more continuity than change in the government’s approach. But when the AI Opportunities Action Plan arrived in January 2025, there were signs of a broadening of scope beyond safety and into a positive vision for how the state could and should shape the development and deployment of AI.

The plan has three aims. The first is to lay the foundations for AI development: principally compute infrastructure, data and access to talent. This makes explicit what Silicon Valley often prefers to keep implicit, namely that the infrastructure and data on which advanced technology has long depended are created or enabled by government. The second aim is to accelerate the diffusion of AI throughout the public and private sectors. The theory here is that efficiency and productivity gains will help support Labour’s five missions, with use cases in health and administration singled out as some of the ways public services might be improved. And thirdly, the plan aims to support ‘homegrown AI’, transitioning in the long term from reliance on foreign companies to supporting new national champions.

The government is right to express scepticism that the ‘invisible hand’ can deliver the innovations the UK needs, and to argue instead for a more activist approach. However, the goal of its approach is not so different from the Silicon Valley vision: expanding the size of the AI sector and accelerating adoption.19 The main influence of the state is envisioned as changing the geography of AI activity, rather than the type of research conducted. AI development is framed as a zero-sum competition for investment and a small pool of talented researchers, with the UK in an ‘arms race’ with other nations to attract people and capital. Furthermore, while the plan acknowledges that ‘who builds’ matters, it occludes a further question: according to whose values? The critique is not one of monopoly power, or of the limits of profit and market share as the motives according to which AI is built. Rather, the argument is that the UK’s influence on AI will increase by having more ‘homegrown champions’, even if those champions end up similar in structure or market power to OpenAI or Meta. Or, indeed, if those champions end up bought out by Meta or Google – as happened to much-vaunted local success DeepMind – in the absence of credible alternative business models beyond scale and monopoly power.

Labour’s theory as to why the public will benefit from greater government involvement (and investment) in AI is currently somewhat tenuous. Starmer’s political focus in announcing the plan has tended to be on the public service use cases, which may feel the most tangible to voters: faster assessments and diagnoses of strokes or cancers, for example, could plausibly make the difference between life and death. The judicious use of AI in public services could well be transformative, but the private sector companies that develop these tools are not charities. Their pursuit of profit and market share can often put them at odds with the sustainable and cost-effective deployment of technologies for public good. The collapse of Babylon Health, a company that provided telemedicine services along with an AI chatbot, is an instructive case here. The company significantly oversold its AI capabilities as a way to secure VC funding, and presented its business case to the NHS as one of significant cost savings, on the premise that many patients would choose to use just the chatbot rather than needing a GP – a promise that failed to materialise.20

In other words, public benefit from AI is contingent on the structure of the market, and the incentives under which such tools are constructed. The risk with Labour’s plan is that the government effectively provides a public subsidy to private companies to trial their technologies without demanding reciprocal rights, stakes or benefits in return. Take, for example, data and compute, two of the main infrastructural components of AI that the plan rightly highlights might be provided by the public sector to shape AI development. Public compute is a promising lever with which to pluralise the AI market.21 However, while the plan does mention the role of broadening compute access in empowering startups and academics to do AI research, it also frames compute and data access as part of an overall package to attract private entrepreneurs and investment, rather than as a means to fundamentally alter the structure or ownership of the sector.

In addition, the assertion that benefits will somehow trickle down to the British public if firms making large-scale AI models choose to base themselves here is no more credible than the Blairite promise of inclusive growth from a booming finance sector. Both industries tend to be highly geographically concentrated and to employ a small number of highly skilled people, amassing significant power at the expense of those at the sharp end of financialisation or automation. Without any convincing account of why even ‘homegrown’ champions would be free from the influence of Big Tech, given the latter’s capture of the AI ecosystem, investment will likely flow toward projects that can be profitably commercialised by large tech firms, rather than those that might better serve public needs. What is more salient than the nationality of owners is the structure of AI companies, specifically their drive towards securing market share or profit at any cost – whether that is the disruption of livelihoods or the degradation of the atmosphere.

The alternative

Labour’s current approach to AI may have its shortcomings, but it should be commended for identifying and seeking to strengthen many of the levers and institutions the state could and should use to shape the AI market. The way forward is therefore not necessarily a radically new policy agenda, but rather a change in how these existing levers and institutions are used. Specifically, the government should be more assertive about the reciprocal rights it obtains in exchange for its support of the AI sector: ownership stakes, the commoning of privately held datasets, and the open sourcing of models could all be made conditions of using government infrastructure, rather than settling for hollow promises of more investment in the UK.

Labour should also remain sceptical of the ‘bigger is better’ AI paradigm that it is in Silicon Valley’s interest to present as the cutting edge of AI research. Instead, it should critically evaluate which AI development paths would best serve the national economy, and pursue these paths with the means at its disposal. It’s true that, without the deep pockets to finance endless capital expenditure, the UK is unlikely to compete at the ‘frontier’ any time soon – but we don’t need to: there is arguably a lot of public value still to be squeezed out of current-generation foundation models, if deployed in the right context.22 The success of DeepSeek demonstrates the potential for the UK of adopting a ‘fast follower’ model: instead of begging for scraps from Big Tech’s table, we wait to see which innovations yield the most value, and look for cost-effective ways of replicating them ourselves.

Achieving this in practice will involve the canny use of public resources to pluralise the AI research environment and build independent capacity outside of Big Tech. This might mean using the newly proposed UK Sovereign AI coordination and funding body to support alternative research paradigms, or implementing access policies for public compute and for data resources in the new National Data Library that favour smaller organisations, independent researchers, public entities, and nonprofits. To ensure that public resources flow to these actors – rather than being hoovered up by Big Tech – measures should also be taken to stem the drain of talent and intellectual property and to curb predatory corporate partnerships, with robust regulatory action where necessary.

Established fewer than two years ago, the UK’s AI Security Institute (AISI) provides an object lesson in what can be achieved when the public sector sets its mind – and its resources – to the rapid acquisition of technical expertise. It is already a globally respected voice on AI governance, represents one of the most significant clusters of AI expertise outside of industry, and will only become more important now that the future of its US sister institute is in doubt.23 All of this said, while AISI ought to be commended for much of its work so far – not least its admirable commitment to growing the field of systemic safety – its focus on evaluating the safety of the foundation models developed by the leading AI developers means that it risks ending up as a provider of free services (and free PR) to Big Tech.24 The takeaway for Labour is that simply throwing public resources at AI won’t be the most cost-effective, or the most strategic, way of developing the UK’s AI capacity: public investments need to be situated within an analysis of how they will affect power dynamics in the sector at large.

Even if undergirded by muscular public capacity and a strong strategy to avoid capture by private interests, the roll-out of AI across the economy is unlikely to produce straightforward – or straightforwardly positive – effects. AI is irreducibly sociotechnical: it influences and is influenced by the social contexts in which it is deployed, often creating unintended and profound ripple effects.25 This in turn means that Labour should also not push the widespread adoption of AI tools without concomitantly strengthening safeguards for those likely to be at the sharp end of these changes, and ensuring that it has a robust plan for bolstering data protection and workers’ rights. While top-down strategies to safeguard jobs are necessary, as advocated by our colleagues at IPPR, attention should also be paid to the role of organisations like trade unions in managing and shaping changes in work.26 The Trades Union Congress’s AI Bill Project represents one proposal for new rights and safeguards for workers to ensure the responsible deployment of AI across the economy.27

Conclusion 

The UK stands at a crucial decision point in its approach to AI development. While the current strategy of courting major tech companies has attracted investment, it risks creating dangerous dependencies and missing opportunities to develop AI in ways that better serve the public interest. We should aspire to more than shifting this dependency from American-based to British-based firms. A more assertive approach, focused on building independent capacity and democratising access to AI resources, could better position the UK to capture the benefits of AI while managing its risks. This transition will require political courage and a willingness to challenge the dominant narrative of AI development. However, the potential benefits – including more broadly shared economic gains, greater technological independence, and better alignment with public needs – make such a shift not just desirable but necessary for the UK’s technological sovereignty and economic future.


Eleanor Shearer is a Senior Research Fellow at Common Wealth. She previously worked for the Institute for Government and the Tony Blair Institute.

Matt Davies is Economic and Social Policy Lead at the Ada Lovelace Institute, and a postgraduate researcher at the London School of Economics. They previously held various roles in the Labour Party, including as a political adviser to members of the Shadow Cabinet, and as a researcher for Chi Onwurah MP.

Notes

  1. AI Opportunities Action Plan, Department for Science, Innovation and Technology, 13 January 2025. 
  2. Oliver Wright and Mark Sellman, ‘Britain must treat tech giants like nation states, minister warns’, The Times, 12 November 2024.
  3. Amba Kak and Sarah Myers West, eds., AI Nationalism(s): Global Industrial Policy Approaches to AI, AI Now Institute, March 2024. 
  4. Andrew Yamakawa Elrod, ‘What Was Bidenomics?’, https://www.phenomenalworld.org, 26 September 2024; ‘Cruz Calls Out Potentially Illegal Foreign Influence on US AI Policy’, https://www.commerce.senate.gov, 2 December 2024.
  5. Sonya Huang and Pat Grady, ‘Generative AI’s Act o1: The Agentic Reasoning Era Begins’, https://www.sequoiacap.com, 9 October 2024. 
  6. David Gray Widder, Meredith Whittaker and Sarah Myers West, ‘Why ‘open’ AI systems are actually closed and why this matters’, Nature Vol 635, 2024, pp827-833.
  7. Krystal Hu and Harshita Mary Varghese, ‘Microsoft pays Inflection $650 mln in licensing deal while poaching top talents, source says’, https://www.reuters.com, 21 March 2024; Cecilia Rikap, ‘Dynamics of Corporate Governance Beyond Ownership in AI’, https://www.common-wealth.org, 15 May 2024.
  8. Meredith Whittaker, ‘The Steep Cost of Capture’, interactions Vol 28 No 6, 2021, pp50-55. 
  9. Edward Zitron, ‘Godot Isn’t Making It’, https://www.wheresyoured.at, 3 December 2023; Brian Merchant, AI Generated Business, AI Now Institute, December 2024. 
  10. ‘Gen AI: Too Much Spend, Too Little Benefit?’, https://www.goldmansachs.com, 25 June 2024.
  11. Alexander Iosad, David Railton and Tom Westgarth, Governing in the Age of AI: A New Model to Transform the State, Tony Blair Institute for Global Change, 20 May 2024. 
  12. ‘Tech stocks slump as China’s DeepSeek stokes fears over AI spending’, Financial Times, 27 January 2025; Dylan Patel and Afzal Ahmad, ‘Leaked Internal Google Document Claims Open Source AI Will Outcompete Google and OpenAI’, https://semianalysis.com, 4 May 2023.
  13. Daniel Liberto, ‘Biggest Companies in the World by Market Cap’, https://www.investopedia.com, 16 October 2024. 
  14. Anurag Rana and Andrew Girard, ‘Big tech 2025 capex may hit $200 billion as gen-AI demand booms’, https://www.bloomberg.com, 4 October 2024; see Noam Brown, https://x.com, 21 January 2025.
  15. See Sam Altman, https://x.com, 28 January 2025.  
  16. See, for example, Lennart Heim and Sihao Huang, https://www.chinatalk.media, 26 January 2025.
  17. See Martin Tisné, https://www.linkedin.com, 21 January 2025; Katyanna Quach, ‘DeepMind AI helps cook up ‘novel’ compounds – with sides of controversy’, https://www.theregister.com, 31 January 2024.
  18. Merchant, op cit.
  19. Matt Davies, A Lost Decade? The UK’s Industrial Approach to AI, AI Now Institute, 12 March 2024. 
  20. David Kampmann, ‘Venture capital, the fetish of artificial intelligence, and the contradictions of making intangible assets’, Economy and Society Vol 53 No 1, 2024, pp39-66. 
  21. Eleanor Shearer, Matt Davies and Mathew Lawrence, The role of public compute, Ada Lovelace Institute, 24 April 2024.
  22. Elliot Jones, Foundation models in the public sector, Ada Lovelace Institute, October 2023.
  23. Kaustuv Basu, ‘Role of AI Safety Institute Uncertain After Trump Repeals EO’, https://news.bloomberglaw.com, 22 January 2025. 
  24. Chris Summerfield and Shahar Avin, Advancing the field of systemic AI safety, AI Security Institute, 15 October 2024.
  25. Brian Chen and Jacob Metcalf, A socio-technical approach to AI policy, Data & Society, 28 May 2024. 
  26. Carsten Jung and Bhargav Srinivasa Desikan, ‘Transformed by AI: How Generative Artificial Intelligence Could Affect Work in the UK - and How to Manage It’, https://www.ippr.org, March 2024. 
  27. Mary Towers, The AI Bill Project, TUC, 18 April 2024.