Effective Altruism, Longtermism, and democracy: an interview with Dr Luke Kemp

Morgan Jones and David Klemperer

Last month, Renewal published an article covering the launch of a group called ‘Labour for the Long Term’, whose stated aim is to lobby the Labour Party to adopt policies designed to counter long-term risks. The piece explored the group’s origins in the ‘Effective Altruism’ movement, and in that movement’s embrace of a new philosophy called ‘Longtermism’, which emphasises the moral importance of trying to shape the distant future.

In its discussion of the pitfalls of Longtermism, the article drew on the work of Carla Zoe Cremer and Luke Kemp, two academic researchers close to the Effective Altruism movement who have spoken publicly about the dangerous ‘techno-utopian’ thinking underpinning Longtermism, and about the movement’s worrying reliance on billionaires. In a 2021 paper entitled ‘Democratising Risk’, Cremer and Kemp had proposed a series of urgent reforms to the Effective Altruism movement, which went unheeded by the movement’s leaders.

In the weeks since Renewal’s article appeared, the relationship between Effective Altruism and its billionaire backers has become front-page news. At the start of this month, the FTX crypto-currency exchange filed for bankruptcy amid revelations that its CEO, Sam Bankman-Fried, had been using customer deposits to pay off the debts of his trading company Alameda Research. Bankman-Fried and his companies had been worth many billions of dollars and, as well as being major players in the nascent crypto-currency industry, were among the leading funders of Longtermism and the Effective Altruism movement.

To discuss Sam Bankman-Fried, the Effective Altruism movement, and the movement’s interactions with politics and democracy, Renewal sat down for a conversation with Luke Kemp. Dr Kemp is a Researcher at the Centre for the Study of Existential Risk and a Research Associate at Darwin College, University of Cambridge, working on climate policy, emerging technologies and catastrophic risk.

Renewal: How did you come to EA, and what first caused you to drift away from it? 

LK: I first got involved with the Effective Altruism community in Australia in 2017. It was initially because they were covering areas I was interested in, in particular existential and catastrophic risk. Before coming to the community, I had been campaigning for international equity in climate negotiations, and I was interested in questions of how we think better, so there was overlap from the very beginning. 

Despite that initial attraction, I found over time that there were far too many concerning aspects. In particular there was a weird hero-worship going on: I would often read certain key texts – from [Future of Humanity Institute Director] Nick Bostrom, or [EA co-founder] Will MacAskill – and I’d have critiques, because they were insufficient in a whole bunch of areas. These critiques were inevitably met with a defensive posture. The community seemed to have an inability, or an unwillingness, to think critically about the key ideas within EA.

I drifted away for some time, but at the end of 2018 I was more or less forced to become re-involved with EA. It is almost impossible to be in the catastrophic risk space without interacting with EA. They are the primary funders within the space, and they also constitute the dominant paradigm in the study of catastrophic risk: what Carla Zoe Cremer and I have called the ‘techno-utopian approach’. Because of this I have spent a lot of time over the past four years talking to EAs, presenting my work to them, and of course reading a copious amount of their research and thinking – often against my own better judgement.

Renewal: What is the role of Will MacAskill in this culture of hero-worship?

LK: MacAskill is the figurehead. He is, as the New Yorker said, the ‘Messiah’. I don’t agree with the New Yorker description of him as the ‘Reluctant Messiah’. I think he is enthusiastic to act as the figurehead and leader of EA. It is natural for movements to adopt some kind of symbolic leader. But it’s problematic when that leader starts to concentrate too many different forms of power under them. In MacAskill’s case, not only is he the ideological and public-facing brand of EA, but he also has a huge amount of both financial and political power. He holds positions in a huge number of EA-aligned institutions, including both the Centre for Effective Altruism and 80,000 Hours (which he co-founded), along with the Future of Humanity Institute, the Global Priorities Institute (GPI), the Forethought Foundation, Longview Philanthropy, and, until recently, the FTX Future Fund.  In those roles he has some degree of political power, as well as a great degree of financial power over who gets funding. That is problematic. 

MacAskill has spoken against hero worship in EA, and about how he wants to minimise cult-like practices, yet he has done precious little to act on it. Last year he was saying “I don’t want the hero worship”, and the obvious thing that was proposed to him was “don’t be the guy who hogs the limelight and presents at EAG” [Effective Altruism Global – the main EA conference]. Let other young and neglected scholars in EA be the centrepiece. Of course, at the next EAG, MacAskill jets in as the main presenter. Following that we have MacAskill’s book, millions of dollars spent on promoting it, and his face on the front of the New Yorker and the New York Times. MacAskill has willingly become the symbolic head of the movement, and also a central authority in making financial and political decisions. 

Renewal: Is MacAskill behind the shift within the EA movement to ‘Longtermism’?

LK: MacAskill has become the public face of Longtermism, but he’s not alone. There is also [EA co-founder] Toby Ord, Nick Bostrom, and, of course, the funders. Indeed, I think one of the key reasons why you see this shift in EA away from its initial priorities of charity evaluation and global development towards Longtermism is not just that people like MacAskill put forward a persuasive case, but that it echoes the interests of key donors – including [billionaire Facebook co-founder] Dustin Moskovitz and, until recently, Sam Bankman-Fried. These donors have an ideology – the ‘Californian Ideology’ – that meshes with Longtermism.

Longtermism is not just the idea that the future could be vast and that we have to protect it. It is the idea that the future is vast because of economic and technological growth, and that while we want to protect it, our actions are limited due to technological determinism. We can do things like research into aligning Artificial Intelligence, but we can’t impose moratoriums on developing it. I think it’s unquestionable that if Will were the same person, with the same background and rhetorical skills, but he were pushing for moratoriums on facial recognition technology and lethal autonomous weapons, and arguing for degrowth rather than avoiding technological stagnation, he would not be the recipient of billionaire backing. He would not be the face of EA and Longtermism.

That’s the key to all of this – it’s not simply about the ideas of the movement, but the fact that they amplify what the donors want. 

Renewal: Why do Longtermists focus so little on the risks to humanity posed by climate change?

LK: I think their neglect of climate change comes down to three things. One is a simple lack of expertise: Will MacAskill and Toby Ord are moral philosophers, not scientists. Second, the EA or ‘techno-utopian’ approach to thinking about risk is ill-suited for thinking about climate change, and indeed for thinking about risk generally. They only think about individual hazards, rather than knock-on effects, cascades, and all the other concepts that have been developed in the study of risk over the last two decades. Third is simply ideology. If you believe that we are going to develop a superintelligent algorithm in the next few decades, and that we are likely to be leaving Earth soon thereafter, then of course climate change does not seem threatening. For climate change to become an extinction-level risk would take centuries, if not millennia. To the Longtermist – that is, the techno-utopian – we will likely have technologies by then which are capable not only of reversing climate change but also of terraforming planets. Hence there is no reason to be worried about climate change. Technology will save us.

Renewal: What does the rise and fall of Sam Bankman-Fried tell us about EA?

LK: His rise and fall certainly tells us about the pitfalls of ‘earn to give’. Before he met Will MacAskill, Sam Bankman-Fried was a vegan who wanted to work directly for an animal welfare charity. MacAskill and others convinced him to pursue ‘earn to give’ instead. Bankman-Fried, from most accounts, already seemed to have some characteristics that made him particularly vulnerable to taking ‘earn to give’ to its logical conclusion. He was a hardcore Benthamite utilitarian, and he had a high risk appetite. I think this was unfortunately brought out in him when he started doing ‘earn to give’, which essentially gives you a useful rationale for taking risky bets if the payoff for the community is big enough. Yes, that could include fraudulent activity. Indeed, in the grand cosmic scheme, how much should one really worry about laws, which are particular, contingent rules based on one sliver of time? Your ‘expected value’ calculation trumps such considerations. EAs do, of course, talk about integrity and having moral safeguards, but it is very unclear where they draw their moral lines and why. I have yet to have a conversation with an EA where they express scepticism about using crypto – which is of very questionable social value – to make money. There is the difficult question of why they are appalled at fraud, but seemingly nonchalant about Bankman-Fried’s other questionable practices.

Renewal: Do you think leading EAs like Will MacAskill mind that Sam Bankman-Fried appears to have committed fraud? Or do they just mind that he got caught?

LK: That’s a genuinely good question; I honestly don’t know. I’d like to think that Will and others didn’t know about this, and that the harm it has caused to so many people is abhorrent to them. But at the same time it is hard to ignore that Will and others were friends with Sam for nine years, and had very close contact with him. Will was doing things like putting him in touch with Elon Musk over the purchase of Twitter, and even personally vouching for his character to Musk. You would hope that they would have done their due diligence on this person and his business activities before taking such actions.

I do think they mind that Sam’s activities led to many people being harmed. I think most EAs are genuine about that. I’m not sure how much they mind Sam doing questionable things in order to get more money for the movement. And I think that’s quite clear from the fact that no one seemed to have any issues previously with Sam lobbying Congress on crypto regulations. No one was concerned with Sam running ads for his business at the Super Bowl. No one was fazed by Sam making his fortune out of crypto, which, once again, is of very questionable social value and has a high base-rate of scams.

Renewal: What impact did Bankman-Fried have on the movement?

LK: I think he accelerated pre-existing trends in the movement. These included having a high risk appetite and an interest in policy influence. When you look at Sam’s interview with [EA career-advice podcast] 80,000 Hours, the title is literally ‘taking a high-risk approach to crypto and doing good’. Something I’ve always found fascinating is that for a community so obsessed with tail risk, their way of thinking about their own donations is almost always framed in venture-capitalist terms. It makes sense; the actual forefathers of their movement are venture capitalists, not risk managers. Hence, they are willing to take high-risk approaches to donating money and having influence. That’s something Sam did himself, but also encouraged within the movement.

I think alongside that is the expansion of policy influence. In that 80,000 Hours episode, Sam directly expressed his belief that politics and policy influence were neglected and underrated in EA. If you want to do good, then lobbying is a very effective leverage point. He is right, of course. The problem is that such an approach encourages a subversion of democracy. Even if you think you are doing good – which everyone does – you should not use your wealth to influence and distort politics. That is one of the deepest, most corrosive problems in modern democracies. Regardless, Sam compelled the movement to increasingly embrace covert channels of influence.

Renewal: You talk about EA moving into politics. Has what started off as individual philanthropy become about achieving structural change through institutions?

LK: Yes, and it makes sense. It’s a logical conclusion from their basic axioms. If you want to ‘do the most good possible’, that ultimately will involve the exercise of power. And the best way to obtain power is capturing the state. EAs are often ruthlessly calculating and obsessed with expected value and interventions with the highest marginal return. Entering into politics is a rational move for them.

Renewal: What are the main methods EAs are using to ‘capture the state’ as you say?

LK: They set up and fund bodies to channel EA expertise into policy-making institutions. A good example is the Centre for Long Term Resilience in the UK, founded by two former public servants, both of whom were converted to EA by Toby Ord and then received EA funding to set up their own think tank, which provides an interface between EA and the UK government. Another example would be the Center for Security and Emerging Technology (CSET) at Georgetown University in Washington DC, which was started with a $55 million grant from [EA funding organisation] Open Philanthropy, and which was previously headed by Jason Matheny, formerly head of IARPA, the research wing of the US intelligence services. It is now headed by Helen Toner, previously of Open Philanthropy. CSET has already had a large number of its people go directly into the Pentagon and the White House. Most of its publications are policy briefs designed to influence US policy. Despite being based at a university, it publishes very little peer-reviewed research.

They also try to get the leaders of EA directly into policy circles. Toby Ord, since the publication of his book, has had privileged access to the office of the UN Secretary-General. I don’t think Toby has got that access himself – it has been facilitated by the EA hierarchy. Last but not least is a longer game, in which they try to convert as many students as possible at elite institutions, and then channel them into public office and positions of power. Indeed, this is why they are targeting Ivy League institutions rather than lower-prestige universities. This is quite clear if you look at the 80,000 Hours podcast, where they have done ‘expected value’ calculations about running for office, and even running for prime minister. If you have graduated from Cambridge or Oxford with the right degree, then the odds are surprisingly favourable. I think this is potentially the most dangerous, and most long-term, strategy that they are taking.

Renewal: What do you think the relationship is between the Effective Altruism movement and democracy?

LK: I think there is a clear clash. My deepest concern with EA is that there is a streak of authoritarianism to it. And this is where the hero worship comes from. Both politically and culturally within the community, there is a very strong deference to authority: you accept what the leaders say, and you trust the hierarchy. And this is why the reforms that Carla Zoe Cremer and I put forward never got taken up. Most of the community trust what happens at CEA [the Centre for Effective Altruism], at Open Philanthropy, or at FTX. They don’t demand democratic control or transparency as they should.

Ultimately most EAs, at least within the leadership, are not democrats but ‘epistocrats’. They believe that the world should be managed and run by those who are most knowledgeable. An enlightened technocracy, like themselves. When you combine that with their streak of rationalism – their belief that there are certain ways of thinking better, and thinking more rationally – you start to get an epistemic supremacy complex. They believe that there is a right way of thinking, and that the people who know how to think right should be in charge. And, lo and behold, those people are EAs.

Renewal: What should Renewal readers and Labour Party members be aware of when dealing with EAs?

LK: Be aware that EA is, I believe, by and large a form of long-range regulatory capture. Billionaires are supporting and funding the ideas of the academics who best amplify their own ideology and interests. And not only that, but they are directly trying to funnel those ideas and policy recommendations into public institutions – both international and domestic. Personally, I have issues with billionaires being the ones to decide who is the public face of certain fields, and what approaches governments take to mitigating some of the biggest challenges we face. That should be decided from the bottom up, not from on high by people like Dustin Moskovitz or Sam Bankman-Fried and their selected academic voices.

Renewal: Is there nonetheless space for people on the Left to work with EAs?

LK: EA in its most basic form – trying to use certain kinds of reasoning, quantification, and evidence to do good, especially on near-term issues – is wonderful. For instance, I personally use [EA-created charity-evaluation tool] GiveWell. EA has had immense positive impacts. I would even say that, to date, it has perhaps been net beneficial. It has channelled funding towards reliable interventions such as direct cash transfers, bed-nets, and de-worming. And it has started some important conversations, including around existential and catastrophic risk.

If EA can be reformed – democratised, diversified, made more transparent, and less reliant on billionaire funding – then I think it could be a wonderful source of change. Not just for Labour, but for the wider world. A community of bright and intelligent young people trying to do the most good they can could be a powerful benevolent force, as long as it is tempered by humility, virtue, critical self-reflection, and respect for democracy. Until then, I think you have a lot of reasons to be cautious. An unreformed EA is a precarious prospect.

Renewal: What would you say was the natural position of EAs on the Left-Right spectrum?

LK: It depends how you define Left and Right. EAs are a politically strange bunch. When it comes to social issues, they are often left leaning. When it comes to economic policy, they are actually far more right leaning. Despite their emphasis on philanthropy, they do not have any issues with billionaires per se, or with how billionaires make their money. 

I think a big issue is that EA is inherently conservative and status-quo biased. And this is because of their ‘Neglectedness, Tractability, Impact’ framework. This was first proposed by Will MacAskill in his first book, Doing Good Better. In particular, ‘tractability’, which is the idea that you should do interventions that are most tractable, most feasible, and the most likely to work on a low-cost basis. I think this tends to push you towards a certain kind of moral cowardice. You are unlikely to challenge power, or do anything too radical, because that’s not going to be expedient. The title of Chapter 8 of MacAskill’s Doing Good Better is literally ‘the moral case for sweatshop goods’. The argument is that sweatshop labour is preferable to the workers being unemployed, so don’t engage in boycotts. We are better off working on the margins to improve conditions than in challenging the industry of sweatshop labour.

It is funny that MacAskill in his latest book brings up examples of moral movements for causes like women’s suffrage or the abolition of slavery, because EAs would not have been advocating for either of those at the time. Feminism, abolitionism, and civil rights were all highly confrontational movements. They were not looking for the most tractable, piecemeal reforms. They were challenging power structures. Even if they do have an intention to redistribute resources, EAs do not want to challenge power. I do not think that is really compatible with being left leaning, because ultimately redistribution and equality are going to require confronting power.