The FTX Future Fund needs to slow its charitable spending way down
Alt title: I’m just a guy, standing in front of a multibillion-dollar, crypto-funded, longtermist organization, asking them to please think about the second-order consequences of philanthropy
I wrote this post in response to a growing trend I’ve seen in the EA community, where getting involved in EA has become a great way to get a large pot of money. This post is about why that’s bad.
On a separate, personal note, my crowdinvesting push for my feline autoimmune drug is just $10k short of its $100k minimum. If you care about feline autoimmune disease (i.e. feline stomatitis), human autoimmune disease, or human neurodegenerative disease, click the link.
In AstralCodexTen’s last Open Thread, Scott posted two updates from the Effective Altruist (EA) community at large.
The first was an opportunity to win up to $500,000 to change the Future Fund’s opinion about AI risk. The Future Fund is spending millions of dollars per year on the assumption that artificial general intelligence is going to be a serious threat to the human race within the next 50 years, and they want people to convince them to change their opinions one way or another.
The second was a prediction market tournament for an EA regranting project through an organization called Clearer Thinking. In case you’re unfamiliar, a prediction market is essentially a stock market for the outcome of a future event. In this case, it’s a prediction market to predict which projects the Clearer Thinking team will fund. There are cash prizes for getting the predictions right, as well as for providing helpful information to sway the judges one way or another. The cash prizes were up to $13,000, while the regranting was for up to $500,000 for projects that would “significantly improve the future”.
Interestingly, both of these were funded by the FTX Future Fund, the crypto-funded EA organization. The FTX Future Fund is insanely well-funded thanks to its crypto billionaire founder, Sam Bankman-Fried, and is rapidly deploying huge amounts of capital towards solving what they consider to be existential risks to humanity.
This is admirable, and I appreciate where FTX’s heart is at. That being said, I’m writing this post to ask them to please slow their roll and drastically reduce their charitable output, at least until they get a better handle on how to spend it wisely.
Now, I know saying this makes me sound like some sort of character from an Ayn Rand novel, and that I’m about to advise FTX Future Fund to just build a vault in the Arctic for the rich as the rest of the world starves. I’m not and I don’t believe that (although I do think that would be a cooler project than yet another megayacht, if Jeff Bezos is reading this).
I’m actually in agreement with the impetus behind the FTX Future Fund’s focus. I do think there are serious problems that can and should be solved with money, and that catastrophic risks are among those problems. The fact that they’re putting so much time and effort into this is, again, admirable!
But, ironically, I don’t think they’re thinking about the long-term effects of the insane amounts of money they’re doling out. Specifically, I don’t just think they risk wasting the money they’re spending; I think they risk it being actively counterproductive. And I think those two updates are perfect examples of that.
First, I think the insane amount of money they’re spending is changing EA from a social movement where you expect to give money into one where you expect to get money. When I introduce people to EA, I always start with Peter Singer’s famous thought experiment: that it’s as immoral to ignore people starving in a foreign country as it would be to ignore someone drowning next to you. I don’t expect it to resonate with everyone, but for those who it does resonate with, it resonates strongly. Those tend to be good people, in my experience.
I don’t start with, “EA is a loose-knit coalition of insanely wealthy nonprofits in which you can earn huge amounts of money by writing convincing essays about ways in which AI might end humanity.” There are only two sorts of people this would attract: people who are already incredibly worried about AI, and people who will pretend to believe anything to earn a lot of money. The latter way outnumber the former, but they are very good at pretending to be the former.
This influx of people who are into EA because they really like being around money and what it can buy (and hope to get some themselves) is already happening. I recently wrote a Reddit post about the Clearer Thinking regrants. These were, for the record, supposed to be “altruistic efforts that could greatly improve the world”, according to the official submission guidelines.
According to Clearer Thinking, they received 630 applications and winnowed them down to 37, 28 of which agreed to be reviewed publicly in the prediction market. So, these 28 should presumably be the best of the best, right? Shining examples of altruistic efforts that, if implemented, would be truly incredible achievements for the world at large?
Well, as I reviewed in the Reddit post, they weren’t. At all. I mean, some of them seemed good (about 8, by my count). And some were just misguided, like a guy who was convinced that nobody had explored the societal impacts of nuclear winter before, or a team that wanted to build malicious AI in order to develop countermeasures against it.
But a fair number just felt like grifts, or people who saw a lot of money up for grabs and thought it’d be cool to have some for themselves. They wanted $40k for their own company to deliver business coaching to EA organizations, $150k to personally go harangue regulators about AI, or $50k to promote their debate app.
Even worse, a number of them were just looking to set up their own regranting. They wanted to take the money from FTX to pay themselves a salary and then just hire people to do something. This came in a wide variety of forms, from $500k to hire behavioral scientists to research behavioral science, to $100k to hire a researcher to find ways to reduce animal suffering, to an unclear amount of money to just give grants to interesting people and then hang out with them.
This last group, the rent-seekers, is who really worries me, even more than the grifters. These aren’t people who are capable of solving a problem themselves or are really interested in it. These are people who see a large sum of money and think they can appropriate some of it to make themselves wealthy and gain some degree of status. If they can convince Clearer Thinking that they’ve identified a problem, they hope they will personally be able to direct and control the people who can actually solve it.
The fact that the FTX pot of money is already attracting these people worries me. The misguided can be educated and the grifters can be ignored, but the rent-seekers will actively try to work their way into an organization in order to gain power. And once they do, they are very hard to extract.
And that brings me to my second big problem. Large spigots of money end up requiring bureaucracy to direct them into the appropriate places. That’s natural, and unavoidable as more money pours into EA. However, things get bad when there’s nowhere for the money to go, and it just accumulates as it’s waiting to be spent.
This is bad because it makes fighting the Iron Law of Bureaucracy much more difficult. The Iron Law of Bureaucracy teaches that the people who form a bureaucracy necessarily come in one of two types: people who care about the mission, and people who care about the bureaucracy itself. People who care about the bureaucracy for its own sake have a natural advantage in any bureaucracy and, if left unchecked, will slowly take over the organization until they’ve completely replaced the mission-focused people. The only way to check this is constant vigilance in keeping the bureaucracy controlled by the mission-focused.
But if there’s no mission for some large part of the bureaucracy to focus on, because they’re just waiting for the money to be directed, you’re already weighting the organization towards bureaucracy-loving bureaucrats. All money needs bureaucrats to organize it. Pots of money with no mission attached will only attract bureaucrats who just like controlling money.
And these people get attracted to accumulated money like flies to rotting meat. It’s difficult for mission-oriented people to understand the attraction, but bureaucrats just love being in control of resources for their own sake. That’s why people will climb over each other to get seats on zoning boards, just so they can stop buildings from being built.
In the same way, these bureaucracy-loving bureaucrats will actively prevent work from being done without their say-so. They have to, in order to justify their position. Every single cause that a good EA person cares about, from AGI to zoonotic diseases, will now have someone sitting on top of a pile of money demanding that everything go through them first.
The grant applicants from above are perfect examples of that. They don’t dream of fixing animal welfare themselves, or becoming behavioral scientists themselves. They dream of sitting on a pile of money that’s supposed to be used for behavioral science. They can make a bunch of behavioral scientists jump through hoops to get to use some small portion of the money. Then, they can attend EA conferences and demand a say in how larger organizations spend their money on behavioral science, because now they’re somehow experts after making people jump through hoops. And then, in their wildest dreams, they can discourage FTX from funding any other behavioral science, because that intrudes on their territory. They will be the sole arbiters of behavioral science, or animal welfare, or nuclear disarmament, or whatever other fiefdom they’ve managed to appropriate.
Of course, because this is EA, all of this will be accompanied by thousands of words of verbiage. And that’s the third big problem with FTX’s push. When you start giving out millions of dollars and ask for it to be used ASAP, there will be very few “shovel-ready” projects that can use the money. Concrete ideas are difficult to implement, take time to plan, and absorb money in stages.
You know what doesn’t take time to plan and can use a lot of money quickly? Hiring a bunch of people to write think pieces. It even scales with the amount of money. If it takes 100 people a year to build a bridge, twice as many people probably won’t build it in 6 months. But if it takes 5 people a month to produce 50,000 words, twice as many people can produce 50,000 words in roughly half the time, ignoring quality issues. If all you’re looking to do is pay for output, paying a lot of people to write a lot of bullshit is an easy way to do so.
Of course, I’m not necessarily saying that it’s bad to pay people to write stuff. However, I think it’s really easy to have an illusion of progress because there are a lot of people publishing stuff, when in fact they’re just writing to each other.
This already seems to be happening with AI alignment. The EA forums are filled with people talking to each other about AGI (soon to be a lot more of them with the new AI essay competition). But those aren’t the people who will influence the eventual direction of AGI. Those people presumably work at OpenAI, or Facebook, or Google, or some Chinese university.
And sure, maybe the goal is to change the minds of the people who are going to develop AGI. But a bunch of people talking inside baseball in endless forum threads isn’t the way to do it. That works about as well as the endless Hacker News threads about “privacy” work at changing Google’s mind about its advertising practices. It’s just a bunch of faceless Internet commenters talking to each other while being ignored by the people who make decisions.
Except, in this case, the faceless Internet commenters are being subsidized by billions, so they write extra-long comments that seem extra official. But they’re still not doing anything. The shape of the future is not changed one bit by whether or not they click post.
I’m concerned that this is drawing talent and attention away from important, hard work that requires a large amount of time. FTX is paying more for these essays than the average person makes in 10 years, and they’re couching it in terms of “this is a meaningful thing for someone who cares about EA to do”. Effective altruists are human, and many EA people in low-paying jobs derive real benefit from the esteem in which their jobs are held within the community. But if a young EA person learns that they can earn $500k writing essays or $40k negotiating prices for mosquito nets, and that each is considered equally important by the authorities they respect, why would anyone ever choose mosquito nets?
If I were FTX, I would change my funding model in a few ways:
1. I’d look into the FRO (focused research organization) model: in a nutshell, specialized, pre-funded institutions with a predetermined lifespan and a definite mission. Once they reach the end of that lifespan or complete their mission, they disband.
2. I’d make sure all regranters have a limited pot of money, specific types of projects that they are qualified to judge and can regrant to, and a mandate not to grant money to other regranters. There should be no financial incentive to give grants for their own sake (e.g., no use-it-or-lose-it arrangement where the pot shrinks if not all the money is given out).
3. Awards for essays of all sorts need to be drastically reduced. This also includes the blogging awards.
At the end of the day, I know I’m just some guy, and I do expect to be ignored as the SS FTX Future Fund goes roaring off across the seas while I stand shouting on the shore. Still, I write this missive in the blind hope that I can reach some captain before FTX, in its reckless enthusiasm, brings pernicious invasive species to the largely unspoiled EA lands. We will see.
But Trevor, you’re only writing this to win one of the $20k grants for criticizing EA /s.
“The EA forums are filled with people talking to each other about AGI (soon to be a lot more of them with the new AI essay competition). But those aren’t the people who will influence the eventual direction of AGI. Those people presumably work at OpenAI, or Facebook, or Google, or some Chinese university.”
This is my main concern about EA: it over-indexes on people who spend a lot of time on the Internet relative to people who have the talent and work ethic to solve important problems, and it will ultimately be outcompeted by the leap-before-you-look crowd in x-risk areas, from AGI to creating artificial viruses.
This was excellent! And I fully agree on the conversational echo chamber point. I also think paying people to think extra hard is insufficient to deal with real-world problems, which require a bit more hands-on work and iteration.