18 Comments

But Trevor, you’re only writing this to win one of the $20k grants for criticizing EA /s.

“The EA forums are filled with people talking to each other about AGI (soon to be a lot more of them with the new AI essay competition). But those aren’t the people who will influence the eventual direction of AGI. Those people presumably work at OpenAI, or Facebook, or Google, or some Chinese university.”

This is my main concern about EA: it over-indexes on people who spend a lot of time on the Internet rather than people with the talent and work ethic to solve important problems, and it will ultimately be outcompeted by the leap-before-you-look crowd in x-risk areas from AGI to creating artificial viruses.

This was excellent! And I fully agree on the conversational echo chamber point. I also think paying people to think extra hard is insufficient to deal with real-world problems, which require a bit more hands-on work and iteration.

So: I'm extremely biased as one of the people running the Clearer Thinking Regrant prediction competition on Manifold, but I think I disagree on a few key points here. Briefly, they are:

1. These prizes are unusually transparent rather than unusually bad/grifty

With Clearer Thinking Regrants, I think Spencer and his team, as well as the applicants who opted in to make their grant applications public, took on an amount of risk and exposure to criticism that is extremely commendable: baring their hopes and dreams to the world at large rather than keeping them in a private application. I suspect that most other calls for grant applications lead to proposals that look like this; you just don't see or hear about it because other grantmaking orgs are not in the habit of being nearly as transparent as the EA community is here. Most other essay contests I've seen also involve private submissions to a panel of judges, rather than putting all the applicants up for review.

2. The amounts of money involved are actually not that outrageous

This is some combination of "as an applicant, you generally ask for more than you need, because there's only room to negotiate downwards" and "how much do you think full-time salaries cost?" It's a bit premature to criticize the projects for asking for a lot, when I would expect very few to none of them to get the full amount they're asking for. (You'll note that one of the key questions Clearer Thinking asks is specifically "What could you achieve with $30k?", and most projects outlined what could be achieved with that much funding.)

Your ability to run Highway Pharm on a shoestring budget is very impressive to me (one of the reasons I invested! And more of you readers should invest too!), but overall, other more expensive or less cost-effective interventions might still be worth funding.

3. The people here don't seem grifty or rent-seeking to me personally

Look, I don't know any of these applicants. But I'm incredibly honored and enthusiastic about the time they seem to be willing to spend debating other random internet traders on Manifold. To pull out two examples from your list of grifts, the creators of "Better Debates" and "Business Skill Coaching" both showed up on Manifold and replied fairly extensively to the objections people had raised. It feels to me like these project ideas are their life's work, and they would be very excited to have the chance to pursue them full-time with the help of charitable funding.

I think assigning bad motivations to them would be a pretty big mistake, and I would suspect that if you took 30 minutes to chat with them you'd be more on board with and impressed by them as people. Whether that translates to the projects being good is a bit of a different question, of course!

4. If you don't fund a lot of bad-looking projects, you're being way too cautious (see: https://www.openphilanthropy.org/research/hits-based-giving/)

This is just kind of a general counterintuitive principle, but you really shouldn't be trying to maximize for "no grifts/rent-seekers" so much as something like "maximize total EV". I agree there are bad second-order effects of turning into a movement that is full of people who are selfishly trying to capture value rather than create value, and it's possible EA is already at that point and on the margin should be funding in a more circumspect manner? But I just wanted to put this reminder out there.

Also, lots of things I agree with from your post!

> I think the insane amounts of money they’re spending is changing EA as a social movement from being one where you expect to give money, to being one where you expect to get money.

I think I'm less worried about "essay-writing" for this one, as it seems quite rare for people to move into "essay-writing" full-time -- and much more worried about community building, where I think a lot of extremely talented people end up by default. You don't see many of these applications in the Clearer Thinking Regrants competition because they're covered by a different branch of EA (mostly through CEA, I believe).

> But if a young EA person learns that they can earn $500k writing essays or $40k negotiating prices for mosquito nets, and that each is considered equally important by the authorities they respect, why would anyone ever choose mosquito nets?

I do think object-level work is extremely important and underfunded/underpaid atm due to bad norms from the nonprofit sector -- but I think some of your earlier points undercut this. How often do you look at one of the, say, 8 good applicants and suggest "hey, we should pay this person a higher salary, eg $200k to a fresh grad, to be competitive with Google"? This basically never happens afaict, and insofar as you think the mosquito-nets employee should be paid very well, I agree wholeheartedly.

> If I were FTX, I would change my funding model in a few ways:

FROs seem interesting to me! I do think FTX is actually very open to trying new and better funding models. It's quite easy to criticize funding models but hard to create good ones -- it's a bit like politics or covid, where you can point to specific problems or inefficiencies and be quite right, but if you were put in charge of the entire ecosystem you would quickly realize how much the decisionmakers have to process and deal with. I'm generally in awe of how competent and well-considered their team is.

Responding to your points:

Agreed that it's commendable to keep things transparent. I'm not sure I can speak to unusual or usual levels of badness beyond just "worse than I would hope".

The amount of money being outrageous depends on how you look at it. $50k for promoting an app would be table stakes if I were a VC investing in the app. It's a lot if I'm an EA person trying to change the world. It's a lot of mosquito nets.

They have to respond to people to get the money, right? They're hoping to sway the prediction market, and the best way to do that is to reply to objections. I don't think that makes me feel one way or another about them, given the clear self-interest.

I think there are two types of "bad-looking projects": there are moonshots, and there are grifts. If Open Philanthropy funds 1,000 incredibly ambitious projects, of course some large percentage will fail. However, if one of the ones that fails was run by the ghost of Bernie Madoff and it fails due to financial mismanagement, Open Philanthropy fucked up.

I wish that you had not misdescribed a number of the projects in our forecasting tournament. And I find it disappointing that you did not invest more time in understanding the projects before criticizing them and implying that some of them are grifters.

Other than that, I do appreciate your feedback and perspective.

-Spencer (of Clearer Thinking)

If there are any you feel I misdescribed in this post, please let me know.

See my updated response from last week to your Reddit post. I think that when critiquing something publicly there is a responsibility to get the basic facts about it right, and ideally to take the time to understand it. You misdescribed a number of the projects, showing that you did not take this time.

"This already seems to be happening with AI alignment. The EA forums are filled with people talking to each other about AGI (soon to be a lot more of them with the new AI essay competition). But those aren’t the people who will influence the eventual direction of AGI. Those people presumably work at OpenAI, or Facebook, or Google, or some Chinese university.

And sure, maybe the goal is to change the minds of the people who are going to develop AGI. But a bunch of people talking inside baseball in endless forum threads isn’t the way to do so. That works about as well as the endless threads on Hacker News about “privacy” do to change Google’s mind on their advertising standards. It’s just a bunch of faceless Internet commenters talking to each other while being ignored by the people who make decisions."

This feels like a pretty inaccurate model. The problem in AI alignment/governance isn't that we have lots of great ideas about how to align AGI but the companies aren't listening to us so we just need to "change the minds of the people who are going to develop AGI". The problem is that we DON'T have great solutions and are in dire need of figuring out how to find better solutions.

If we actually had very good solutions to AI alignment (certainly at the level of "pip install alignment", but probably at much less well-defined levels as well), I'd expect persuading OpenAI or DeepMind to be pretty far from the hard part. It's not like top AGI developers are particularly far away on the social graph from EA folks.

One might argue that FTX Foundation has a particular moral duty to establish a fund to help out all of the people whose lives were ruined by falling for crypto scams: https://ez.substack.com/p/the-consequences-of-silence?sd=pf

This post aged really interestingly... I'd be curious what your thoughts on it are now.

In retrospect, FTX was spending money like they stole it. But yeah, I stand by it even more now.

You write with a bitter tone in both your Reddit post (https://www.reddit.com/r/slatestarcodex/comments/xprjul/is_it_me_or_are_the_proposed_clearer_thinking/) and in parts of this post, and you misrepresent information about the project finalists (as other commenters have pointed out). Other, previous criticisms of FTX's charitable spending have been more thoughtful and nuanced, have not misrepresented information about how money is being spent, and don't sound bitter in the way that this post does. If you don't mind me asking, did you ever apply to the Future Fund and/or to Clearer Thinking Regrants (or did you feel bad for missing out on applying, perhaps)?

Trevor, I am an AI guy, a once-upon-a-time researcher, and also an ex-program manager from DARPA who funded AI research. So I have thought for decades about this threat.

While at DARPA (ten years ago) I spoke with multiple field leaders about this threat. It was very much like talking with a gene researcher about genetically modified food risks in the 1990s. They acknowledged the risks in principle, but they viewed them as risks in the unknowable distant future, so not worthy of present concern. And of course they have a vested interest in this not being a concern, since it is also their life's work!

But I personally think it is of very large concern. My guess is that this has a much larger chance of being an existential threat to humanity than all other threats combined: nuke war + climate + bioterrorism, etc.

Further, because present AI is not configured in a way to bootstrap general intelligence, its incremental outputs give the impression that achieving human intelligence and beyond is very far away. I have seen a real shift in sentiment based on this last decade of deep learning results, but most people and researchers still see this as a distant threat.

But I notice progress in AI is very stepwise. Certain capabilities remain out of reach, and then they are dominated within a handful of years. I suspect the bootstrapping of human reasoning and knowledge through interaction with other humans is a skill that will be solved in such a stepwise fashion. But the outcomes from that step will be quite profound. It will still take years, but the genie will be out of the bottle. The Googles and governments of the world will spend billions at a furious rate to be the first to scale such a technology. In practice, raising our first generation of AI babies will happen quickly. And there will be no going back... and I think there will ultimately be no real ability to manage this tech either.

So from where I sit, I see humanity spending FAR, FAR, FAR too little on this threat. When I look at the amounts we spend on the climate threat, our spending in this area (which I think is a much greater existential threat) is ridiculously low.

I suspect you might retort: OK Dan, that might be true, but having folks talk inside baseball in some chat groups is not going to fix the problem.

Well, I agree it won't fix the problem. Indeed, I liked your image of the little kid tapping the busy professional on the shoulder... there is truth in that image. Still, the issue right now is one of credibility. It just does not seem CREDIBLE that all of these fantastic abilities are close at hand when AI planning systems can sometimes still have trouble stacking a few blocks on each other in some toy world!

But I disagree that such inside-baseball discussions are not worth having. You have likely heard of the "Overton window": right now the world is funding climate remediation 1,000x or 100,000x more than AI safety simply because the Overton window is shifted so far away from the side of concern.

There are only two things that will move that window: people talking at the far edges of the window and pulling it over, or shocking new AI capabilities convincing people this idea is credible. Alas, the latter will arrive far too late.

Honestly, however, I suspect it is ALREADY too late. Not in a theoretical sense. Sure, in theory we could stop advancing Moore's law, limit the density of coordinated compute power, and stop all AI research. But in practice we are not going to do any of those things. Further, in a multipolar world, we have seen that both corporate and especially state players are very willing to take existential risks if it means securing a decisive advantage over their competitors... and the ability to bootstrap human reasoning will afford a colossal advantage. I see no way it could be indefinitely resisted.

Still, I obviously cannot see everything. So it is still prudent to investigate... maybe there will be better ways to approach this inevitable future. I don't know.

But what I do know is that if one assigns any non-zero probability to the future I imagine, then there is no way even poorly spent money in that direction will be assessed as "not worth it".

We should be ridiculously scared... but we are not... because "common sense" tells us it just cannot be that way.

And I agree with another poster here... I have spoken to the guys at MIRI, one of the places that is DEFINITELY talking inside baseball... they are very well known to the OpenAI guys, the Google guys, etc.

You seem to be equating the magnitude of the problem with the need to spend money on it. That's where we differ. As an executive, haven't you ever seen spending money be counterproductive? For example, have you ever seen a scenario where a small team makes a lot of progress on their own, gets the attention of the higher-ups, who shower it with money and people, and then the whole thing slows down in a mire of bureaucracy?

Your response tells me that you don't really, viscerally believe in this threat. (As most don't. Indeed, even I have trouble believing emotionally what I see intellectually.)

Consider the man trapped in a room in which he will soon die. He stops asking questions about whether trying this door or flipping that lever might work. He is violently doing ALL of those things... even the ones he is quite certain will not work. He does those too. And that is the correct response. After all, what is the point of conserving energy to expend on other objectives when soon no other objectives will matter?

Unlike climate, or nukes, or bioterror... this puppy really CAN end us all. Every last one of us.

Intellectually I see no way to stop our progress.

Intellectually I see no way to put the genie back once we are close.

Intellectually I see no way humanity remains in control of what happens afterwards.

You know... kinda like the guy in the room...

If all of what I just wrote is even plausibly true, then math probabilities, opportunity costs, chances of productive action, etc. should all go out the window, and one should be grasping at any lever in sight...

Instead we are slumbering... the Overton window has not shifted far enough for us to emotionally believe it could really be true.

Skynet meets Geeks/Mops/Sociopaths vibes

Great piece. I also wonder to what extent what you identified reflects a bias from so many FTX staff, and the EA world in general, having philosophy backgrounds; especially if they haven't done much outside of academia, they may have a hard time imagining what else can be impactful.

I think you overstate the risk. If you have short timelines, you *need* to get money out the door, and yes, some will be wasted. But some will be spent really well. Also, I think we'll look back and think most of this money was well spent by all involved.

I think it's too cheap to say "EA is spending money too quickly" and too expensive to say "EA is not spending money fast enough".
