Discussion about this post

Geoffrey Miller:

Trevor - excellent essay. I've had similar concerns that AI alignment researchers are being quite naive about possible social/cultural backlash against AI. I'm working on a related essay.

You might be interested in the responses to this question I posed on Twitter Oct 10:

https://twitter.com/primalpoly/status/1579552298262736896

I think there are many possible applications of narrow AI over the next 20 years that would be so disgusting, alarming, distressing, and politically polarizing, that anti-AI activists could easily turn the public against AI....

Ross Rheingans-Yoo:

Can I summarize the key points of the argument to make sure I understand?

1) Nick Beckstead's subjective probability estimate of AGI before 2043 should go from 20% to below 2% specifically because a TMI (Three Mile Island-style) or "becomes a military matter" incident is likely to happen before the technology comes to fruition (if in fact it does).

2) At least one of these two events is very likely to happen.

3) If it does, it is very likely to mean that AGI is not developed before 2043.

Since the punchline argues for a 90% reduction in the subjective probability, presumably you'd claim that #2 and #3 are actually really, really likely -- something like 95% likely that a relevant incident happens before AGI (if AGI does happen), and also that such an incident reduces the chance that AGI then happens before 2043 by 95% (or 90% in one slot and 100% in the other, or...). Otherwise, they shouldn't convince someone who believes in 20% to change their belief to below 2% (assuming they previously thought those things were 0% to happen; you have to argue higher if the 20%-believer previously thought them possible). Do you believe / argue for numbers like 95%; 95%? Higher?
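For concreteness, here is a minimal sketch of that arithmetic in Python, using the illustrative 95%/95% figures above (it simply multiplies the two probabilities, which assumes the two claims combine independently; the actual numbers are whatever you'd argue for, not these):

```python
# Illustrative sketch of the update being asked about (example numbers only).
p_agi_prior = 0.20   # Beckstead's stated 20% for AGI before 2043
p_incident = 0.95    # chance a relevant incident happens before AGI (claim #2)
p_blocked = 0.95     # chance such an incident prevents AGI before 2043 (claim #3)

reduction = p_incident * p_blocked            # ~0.9025, i.e. roughly a 90% reduction
p_agi_posterior = p_agi_prior * (1 - reduction)

print(f"reduction factor: {reduction:.4f}")               # 0.9025
print(f"posterior P(AGI before 2043): {p_agi_posterior:.2%}")  # ~1.95%, just under 2%
```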

(I work at the FTX Foundation, but not for the Future Fund, and my question-asking here is as a blog reader, not in any connection to the prizes.)
