Does ‘Longtermism’ belong in Labour?

Morgan Jones and David Klemperer

Labour for the Long Term, an organisation whose professed aim is to influence Labour’s policy to consider long term risks, launched at party conference last month. At their packed launch event, invited speakers discussed a wide range of issues: former Welsh Government Environment and Sustainability Minister Jane Davidson discussed the Senedd’s Wellbeing of Future Generations Act; Fleur Anderson MP spoke on the importance of pandemic preparedness; Financial Times journalist Stephen Bush talked about creating institutional incentives towards “resilience”; Luke Kemp, of Cambridge University’s Centre for the Study of Existential Risk, discussed the various threats posed by climate change, nuclear conflict, artificial intelligence, and biological weaponry.

The group hopes to encourage Labour to tackle such issues with ‘Cathedral thinking’. In their first policy briefings, they advocate an eclectic range of measures, such as creating new biosecurity institutions, preventing the use of artificial intelligence in nuclear weapons systems, imposing algorithmic impact assessments, and, strangely, ‘Street Votes’ – a housing policy primarily pushed by the libertarian Adam Smith Institute.

Many of these are admirable initiatives. We should, of course, all care about risks to humanity’s future, and to the wellbeing of the many billions of people who live on this planet. But Labour for the Long Term has not emerged from nowhere. In fact, it has emerged from one specific place: the ‘Effective Altruism’ movement.

***

Inspired by the moral philosopher Peter Singer’s utilitarianism, Effective Altruism’s stated mission is to help individuals maximise the good they can do. Adherents of the movement (usually known simply as EAs) pledge to give at least ten per cent of their income to charitable causes – specifically, to causes which have been assessed as the best and most effective use of resources. ‘Mosquito nets’ is perhaps the phrase most associated with Effective Altruism, thanks to their exceptionally favourable ratio of cost (a simple net) to impact (preventing malaria and other deadly diseases).

Will MacAskill, a moral philosophy professor at Oxford, has long been a leading light of the Effective Altruism movement. With Toby Ord, he co-founded Giving What We Can in 2009, and the Oxford-based Centre for Effective Altruism in 2012. In 2015, he published Doing Good Better, which outlined the principles of Effective Altruism and helped bring the movement to mainstream attention. Despite being only a little over a decade old, Effective Altruism has grown quickly, evolving into a network of powerful institutions. It has a small media ecosystem of its own (the podcast of the organisation 80,000 Hours, co-founded by MacAskill, is essential listening for many EAs), and EAs hash out issues on their extremely active online forums. EA has also come to control large sums of money. Effective Altruism is big in Silicon Valley, a place where capital is famously not in short supply, and the movement has begun to attract donors with very substantial resources, who nominally wish to use those resources to maximise their positive impact on the world.

As Effective Altruism has grown larger and richer, however, it has also become less focused on effective means of individual action – giving away almost all of your salary, for example, as MacAskill famously does – and more oriented towards promoting structural change at the global level. With this shift in focus has come a new passion: Longtermism.

A philosophy growing out of EA, Longtermism has quickly become one of the dominant perspectives within Effective Altruism. It asserts that we should consider the future, including the far future, in the decisions we make today, and give significant moral weight to the lives of hypothetical future people. The EA movement draws inspiration from Peter Singer’s famous thought experiment: if you see a child drowning in front of you, you are obliged to save them; if that child is in another part of the world and you have the resources to save them, the obligation remains the same, because our moral obligations are not diminished by distance. To make the case for Longtermism, Will MacAskill reinterprets this problem, asserting that just as distance does not diminish our moral obligations, neither does time; he contends that we have significant moral obligations to potential future humans which we should be acting on today. He makes this argument in his new book, What We Owe The Future, a popularly-argued (cover endorsement from Stephen Fry) long-form case for Longtermism. Very possibly you have read about this book, or seen MacAskill interviewed; since it came out in August, it has been the subject of much media commentary.

The central implication of the Longtermist view is that combatting ‘Existential Risks’ (i.e. those dangers so great that they threaten to prevent the billions and billions of potential future humans from ever coming to exist) is the overwhelming moral priority of our time. Such risks are generally held by Longtermists to include deadly pandemics, nuclear and biological warfare, and runaway technology – precisely the issues raised by Labour for the Long Term. In his book, MacAskill advocates a particular focus on ‘governing the ascent of artificial intelligence, preventing engineered pandemics, and averting technological stagnation’.

It is certainly essential to take these kinds of issues seriously. Pandemics, nuclear war, and artificial intelligence are all real dangers, and it is crucial that more time and resources be invested in mitigating the risks. But there are reasons to be wary both of Longtermism as a specific philosophy, and of the approach to Existential Risk that derives from it. For a start, MacAskill displays a curious predilection for libertarian solutions, such as the idea of establishing privately run ‘charter cities’ outside of democratic control as a way to promote economic and political experimentation. More importantly, as even Peter Singer – the movement’s original inspiration – has pointed out, ‘Viewing current problems through the lens of existential risk to our species can shrink those problems to almost nothing, while justifying almost anything that increases our odds of surviving long enough to spread beyond Earth.’ Indeed, MacAskill’s book is subtitled ‘A Million-Year View’, and in a 2019 paper he suggested that ‘For the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years, focussing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.’

This attitude becomes especially concerning when the far distant future is viewed through the lens of a specific utopian project. In a recent paper, Carla Cremer and Luke Kemp (who spoke at Labour for the Long Term’s launch) argued that the Longtermist movement and its associated field of Existential Risk studies are currently dominated by what they term a ‘Techno-Utopian Approach’. This approach – which Cremer and Kemp trace primarily to the Swedish philosopher Nick Bostrom, one of the pioneers of Longtermism and director of the Oxford-based Future of Humanity Institute, where Cremer has been a researcher – combines ‘transhumanism’ (the idea that we could create post-human beings capable of living better lives than humans), ‘total utilitarianism’ (the idea that what matters is maximising total happiness), and ‘strong Longtermism’ (the idea that potential future lives are worth no less than current lives).

In this techno-utopian vision, humanity has the potential to eventually create a maximally-technologically-developed cosmos-spanning civilisation in which vast numbers of beings – whether human, post-human, or ‘digital minds’ – will be able to experience hitherto-unimaginable levels of happiness. Humanity’s aim should be to achieve this outcome (which Bostrom calls ‘technological maturity’), and to fail to do so would amount to so much lost potential happiness as to constitute a cosmic catastrophe of unimaginable proportions. Extreme measures are thus potentially justifiable in pursuit of it, with Bostrom positing, for example, a global system of total surveillance with tracking devices affixed around every human’s neck. Moreover, from this perspective, ‘existential risks’ are defined not by their potential to inflict suffering on existing human beings, but by their potential to disrupt the achievement of this technological utopia. In fact, in the techno-utopian view, mere ‘global catastrophic risks’ – which do not threaten the hypothesised utopia, but may nonetheless kill millions of actually existing humans – are of relatively little concern. They are, as Bostrom puts it, ‘mere ripples on the surface of the sea of life’.

Cremer and Kemp show clearly that the work of Longtermism’s most influential figures, all of whom hold substantial institutional power within the world of Effective Altruism, is largely underpinned by some version of these same ideas. They point to Toby Ord (co-founder of the Centre for Effective Altruism), Holden Karnofsky (CEO of EA funding organisation Open Philanthropy), Nick Beckstead (CEO of the FTX Foundation, another funding organisation run on EA principles), as well as to Bostrom and to MacAskill himself. They suggest techno-utopian beliefs have radically skewed Existential Risk research – in favour, for instance, of accelerating certain technological developments, rather than attempting to prevent them, or to subject them to democratic control. The aim, after all, is not to protect existing human civilisation, but to achieve the utopian civilisation of the future. In their embrace of the techno-utopian outlook, MacAskill and Ord would appear to have fundamentally transformed the nature of their movement: by shifting the moral focus of their actions from the present to the distant future, they have shifted Effective Altruism from a pragmatic approach to individual do-gooding, to a transhistorical crusade for a specific vision of the future. They have, in other words, turned it into a political movement.

It is unclear, however, that the EAs understand themselves as such. Speaking to Renewal, Carla Cremer told us that “[EA]s don’t tend to think of their own positions as politically motivated. They often don’t notice the political dimensions of what they do, precisely because they want to act on behalf of all of humanity.” Frequently, she says, they ignore the implicit value judgements baked into what they advocate. Cremer adds, however, that this lack of a political self-understanding is “separate from them now realising that of course they have to go through political processes to influence the world, which they are now starting to do.”

If Longtermism is a political movement, it is one with powerful backers. It has been publicly endorsed by Elon Musk (the world’s richest man), who funds Bostrom’s Future of Humanity Institute, and who recently tweeted that MacAskill’s book is ‘a close match for my philosophy’. The admiration is clearly mutual: Business Insider reports that MacAskill texted Elon Musk to offer to help him buy Twitter. Musk is also not the only billionaire to have aligned himself with Longtermism. Bahamas-based cryptocurrency billionaire Sam Bankman-Fried (whose business practices have recently come under regulatory scrutiny) has pledged almost his entire fortune of around $12 billion to Effective Altruism, and is primarily focused on Longtermist causes; he has also hired MacAskill as an advisor.

It is not hard to see why Longtermism as an ideology is appealing to billionaires like Musk and Bankman-Fried. Not only do its techno-utopianism and libertarian leanings chime with the Silicon Valley mindset (Cremer tells us that Anna Wiener’s far-from-positive memoir of the politics and attitudes of the Valley, Uncanny Valley, reads like a “psychoanalysis of EA”), but its focus on the distant future offers a justification both for grand megalomaniacal projects (like Musk’s desire to colonise Mars), and for disregarding the significance of inequalities that exist in the present. And crucially, these billionaire backers have not only been funding EA, but also shaping it. In a community originally built on the idea of giving money, those with money to give command authority, and those with the most money to give command the most authority. Bankman-Fried’s FTX Foundation is now one of the most powerful institutions within Effective Altruism, and has been one of the driving forces behind the shift to Longtermism.

Bankman-Fried has recently also been pushing EA into electoral politics: he donated over $5 million to Joe Biden’s 2020 presidential campaign, and says he may spend up to $1 billion in 2024. He also spent more than $11 million supporting the primary campaign of Carrick Flynn – an Effective Altruist who ran for a United States Congress seat in Oregon on an explicitly Longtermist platform. (Flynn’s campaign was unsuccessful; indifference to the pedestrian concerns of centre-left voters did not prove a winner with the Democratic selectorate.) These are vast sums, especially when compared to the relatively small amounts of money found in UK politics (the Conservative Party won the 2019 election on a budget of £16 million).

***

In this context, then, the launch of Labour for the Long Term is notable: it appears to mark the organised entry of Effective Altruism – and specifically, its Longtermist wing – into the UK Labour Party. Although Labour for the Long Term denies being a specifically EA organisation, a significant proportion of its committee – along with two-thirds of its advisory board – are part of the EA community. The group’s treasurer was described on the website of Cambridge’s Science and Policy Exchange as active ‘in the EA policy world, where he coordinates an informal group of EAs involved with the Labour Party’. One of its advisors works for an EA-aligned research centre, and has submitted written evidence to parliament jointly with Nick Bostrom. A community organiser for Effective Altruism UK also promoted Labour for the Long Term’s launch on the EA forum in their monthly round-up of Effective Altruist activity.

Of course, this does not mean that Labour for the Long Term necessarily subscribes to the most extreme, techno-utopian Longtermist views held by some of the Effective Altruism community’s most prominent members. Labour for the Long Term’s committee members are keen to stress that it is not an EA organisation, and that a diversity of perspectives is welcome in their group. Other EAs we spoke to were similarly quick to reject the idea that all EAs necessarily adhere to the most radical beliefs of the movement’s leading lights. As Cremer told us, “it is so easy for any one group to escape any particular critique … because EA’s membership is indeed diverse enough it is so easy for them to say ‘but no, look at that part of our community, they are not doing that, and they don’t believe that’ and that is always true.”

So the important question, Cremer suggests, is: “who is holding and distributing the money, and how extreme are their beliefs?” This, however, is a question that Labour for the Long Term appear reluctant to answer. When asked by Renewal at their launch event where their funding comes from, their committee said only that they had not received money from any official Effective Altruism organisation (although “there are lots of things that EA organisations care about that we also care about”), and that their funding comes only from donations by “private individuals”.

These donations from private individuals clearly amount to a lot of money. In September, Labour for the Long Term posted a job advert to hire a ‘Policy and Operations Manager’ on a pro-rata salary of £35,000–£50,000 per annum; a conference event of the kind they ran in Liverpool normally costs around £5,000. In addition, the register of members’ interests reveals that in August 2022 – a month prior to the group’s launch – Labour for the Long Term gave £30,000 to the office of Wes Streeting MP (not, interestingly, one of the MPs listed on their website as a supporter). We spoke to the organisers of a number of internal Labour groups to get a sense of the usual budgets for this kind of operation; none of them turned over anything close to Labour for the Long Term’s publicly declared spend.

***

How should Labour members feel about the presence of such a well-funded group with such close ties to the EA movement operating within the ranks of our party? True to their name, EAs are generally effective operators; Labour for the Long Term’s well-put-together operation is but one piece of evidence for this. For those of us who share their concerns (climate change, pandemic preparedness, preventing nuclear war), their involvement could be no bad thing. The issues they raise around biosecurity, algorithmic injustice, and technological risk are clearly real and important; the policy briefings they have produced so far have been lucid and well researched.

On the other hand, Labour does traditionally proscribe centralised organisations that tithe their adherents and send them into the party in order to influence its policy; members of groups like Socialist Appeal and the Alliance for Workers’ Liberty are automatically expelled. EAs we spoke to were keen to reject the idea that there was any comparison to be made between Effective Altruism and entryist sects (not least in that Labour EAs tend to the centre or right of the party in their factional affiliations; certainly, they are not Trotskyists); then again, when a group of people heavily affiliated with one organisation forms a separate grouping in order to advance that organisation’s causes more effectively in a context that might be hostile to it, we would tend to call this a front group.

Carla Cremer insists that EAs are a “coordinated community”. For those of us in Labour who might be unfamiliar with the world of Effective Altruism, she notes that we can expect Labour for the Long Term to “share ideas, interests and worldviews with the wider EA ecosystem”, and to potentially be coordinating with other members of EA. To be able to work with them in a clear-eyed, useful manner, Labour members need, she says, “an understanding of not only where the money is coming from, but also where the ideology is coming from, and what it is tied to”.

It is clear that those running Labour for the Long Term are sincere in their ethical commitments; clear also that the organisation is composed of people holding more moderate positions within Effective Altruism and Longtermism. Sincerity and comparative moderation should not, however, be taken as a blank cheque for political operation. Labour is a democratic party; Effective Altruism is not a democratic movement, and it is also clearly a community where the size of one’s financial stake increasingly dictates the size of one’s influence. While Labour should welcome new initiatives, it must also be aware of where such initiatives come from and which ideas inform them; both financially and ideologically, Longtermism comes from places that are far removed from social democracy.

Morgan Jones and David Klemperer are contributing editors for Renewal