8 Comments

Trevor - excellent essay. I've had similar concerns that AI alignment researchers are being quite naive about possible social/cultural backlash against AI. I'm working on a related essay.

You might be interested in the responses to this question I posed on Twitter Oct 10:

https://twitter.com/primalpoly/status/1579552298262736896

I think there are many possible applications of narrow AI over the next 20 years that would be so disgusting, alarming, distressing, and politically polarizing, that anti-AI activists could easily turn the public against AI....

author

Interesting post! I'd be curious to hear similar answers if you asked, say, shoppers in a Brooklyn Whole Foods. I bet they'd be quicker to turn against AI.


Yes. I think different groups will probably be upset, offended, or outraged by different applications of narrow AI (long before AGI is developed), and it's worth doing some research to understand these likely reaction-points. Brooklyn hipsters will have different reaction-points than Iranian mullahs, who will have different reaction-points than British anti-sex-bot campaigners.


Can I summarize the key points of the argument to make sure I understand?

1) Nick Beckstead's subjective probability estimate of AGI before 2043 should go from 20% to below 2%, specifically because a TMI-style (Three Mile Island) incident or a "becomes a military matter" incident is likely to happen before the technology comes to fruition (if in fact it does).

2) At least one of these two events is very likely to happen.

3) If it does, it is very likely to mean that AGI is not developed before 2043.

Since the punchline argues for a 90% reduction in the subjective probability, presumably you'd claim that #2 and #3 are really, really likely -- something like 95% likely that a relevant incident happens before AGI (if AGI does happen), and 95% likely that such an incident then prevents AGI before 2043 (or 90% in one slot and 100% in the other, or...). Otherwise, the argument shouldn't convince someone who believes 20% to revise below 2% -- and that's assuming they previously put 0% on those events; you'd have to argue for even higher numbers if the 20%-believer already thought them possible. Do you believe / argue for numbers like 95%; 95%? Higher?
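To make the arithmetic explicit, here's a quick sketch of the implied update (the 95% inputs are just the hypothetical figures from my question above, not anyone's actual estimates):

```python
# Sketch of the implied probability update; all inputs are hypothetical.
p_agi_prior = 0.20    # Beckstead's stated subjective probability of AGI by 2043
p_incident = 0.95     # assumed chance a TMI-style/military incident happens first
p_prevents = 0.95     # assumed chance such an incident then blocks AGI before 2043

p_agi_posterior = p_agi_prior * (1 - p_incident * p_prevents)
print(f"{p_agi_posterior:.4f}")  # 0.0195 -- just under the 2% punchline
```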

(I work at the FTX Foundation, but not for the Future Fund, and my question-asking here is as a blog reader, not in any connection to the prizes.)

author

Yup, that's accurate, although implicit in 1) is the assumption that Nick Beckstead should be entirely confident in every part of his probability estimate except the issues raised in this blog post. I accept that for the sake of this post, but I wouldn't agree with it overall.

I also have no idea whether Nick Beckstead previously thought these things were 0% likely, but he doesn't discuss them in the contest rules, so I'm operating on the assumption that he did.

Thanks for your comment! I think there could definitely be a fruitful discussion around those 95% figures, and I'd be interested to hear the other side of it.


Hi Trevor! Thanks for contributing to our contest!

I wanted to drop in to see if I can explain myself better on the subject of "how you can assign an exact numeric probability to AGI being developed by 2043." I definitely agree that reporting my subjective probabilities about a roughly defined event is inexact and less scientific than what the IPCC is doing, and I don't consider what I'm encouraging people to do here science; I consider it something more like "thoughtfully explaining your bets."

It seems like there's a gulf of understanding between us: I think of assigning a subjective probability to AGI timelines as a natural thing to do when trying to make sense of the world, while you (and, I think, many others) seem to consider it a crazy and hubristic thing to do, something that couldn't possibly be done exactly and is perhaps an affront to the diligence, objectivity, and seriousness of the scientific enterprise. From my point of view, you're implicitly setting too high a bar for people to consider themselves qualified to have any opinion at all on the question.

One thing that may be helpful here is to consider start-up valuations. Take a pre-revenue start-up in a speculative industry, such as Helion Energy, and suppose we wanted to discuss what it's worth. Suppose I write up a few pages of thoughts and come up with a valuation of $X based on guesses about the size of the total addressable market, the odds they succeed in getting cost/kWh below some threshold, the odds they get regulatory approval, and their expected market share conditional on getting there first.
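Concretely, the napkin math might look something like this toy calculation, where every number is invented purely for illustration and none of it is a real estimate of Helion:

```python
# Toy expected-value napkin math for a hypothetical pre-revenue fusion start-up.
# Every input below is a made-up illustration, not a real estimate.
tam = 500e9           # total addressable market, $/year of electricity revenue
p_tech = 0.10         # subjective odds the tech gets cost/kWh below the threshold
p_regulatory = 0.50   # odds of regulatory approval, conditional on the tech working
market_share = 0.20   # expected share conditional on getting there first
margin = 0.15         # assumed net margin on revenue
multiple = 10         # value as a multiple of annual earnings

valuation = p_tech * p_regulatory * (tam * market_share * margin * multiple)
print(f"${valuation / 1e9:.1f}B")  # ~$7.5B under these made-up inputs
```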

You could come back to me and say, "I don't understand how you can assign an exact numeric valuation to Helion. It's a pre-revenue company with unproven technology. You know that when the IPCC uses subjective probabilities, it bases them on mountains of evidence and detailed climate models. Our models of the trajectory of fusion energy are much worse than our models of climate change. If you want to create scientific discussion of start-up valuations, then you should at least come in with clearly stated assumptions about the company!"

And if you did, I would say: I don't know how to fully explain my thinking or make it fully reproducible by others, but there are some valuations at which I'm a buyer, some at which I'm a seller, and a band of intermediate valuations in between. I think it's possible to make meaningful statements about this question using napkin math, without IPCC-level standards of rigor.

I believe something similar about forecasting AGI by 2043. Ultimately, we have to decide what bets we're going to take on this subject. We won't be able to do it as rigorously as the IPCC (success probably looks more like "about as rigorous as a good VC making long-term bets"), but maybe we'll learn something if we try to explain our thinking and subject it to external scrutiny!

author

Hi Nick, thanks for replying!

I understand where you're coming from. But startup valuations have to rest on inexact predictions because fundraising is time-sensitive and only a few people can be privy to the information.

You guys have at least $1.5 million to spend on this and, even on the most aggressive/optimistic timelines, 20 years. Why not hire 20 postdocs to form a working group for 6 months and get this stuff on a solid footing, similar to the IPCC?

At the very least, an achievable goal would be to make the various lines of thinking fully reproducible. The group wouldn't even have to make predictions itself; it could just gather the assumptions people are already making and build a model that incorporates them. No proprietary information would need to be involved.

What also seems clear to me is that everyone's already happy to write long blog and forum posts about AI timelines. It's fun, at least for a certain variety of Internet nerd; I would have written this for free, or at least for a much smaller prize. But the hard work of hammering out exact assumptions and building a model is something people probably need to get paid for, so that would be a good use of the money, IMO.


Quick thoughts on important guesswork numbers:

* I don't think time-sensitivity and non-public info fully explain the difference with start-up investing. Even if people had a full year and access to all of the info available to Helion's investors, I believe the analysis would still be far less rigorous than the IPCC's, and probably not much more convincing than what you'd find in the sources cited in our worldview prize post.

* I do not believe that 20 postdocs and 6 months of work could get this on a solid footing comparable to the IPCC's. If I did, we would absolutely organize that (and we might fund it anyway if someone made a serious proposal!).

* If we don't learn much more about this issue over the next 3 months than we would in a typical 3-month period, that'll be evidence that the prize wasn't needed to motivate people! I'm looking forward to finding out what happens.
