5 Comments
Jos T:

Yeah, the things I learned and put effort into constructing last month to give AI agents greater utility and intelligence are all obsolete. Smarter, better, more available tech emerged. This is happening very quickly.

Performative Bafflement:

100% agree - but I think that in the interim, there's going to be a pretty fun / productive time with human+AI "centaurs," because obviously everyone is going to be walking around with a superintelligent AI assistant in their ear 24/7 in just a few years, and it will be a huge boost to quality of life and capabilities for a while.

These AI minds will know everything about you, they’ll know your thinking style, they’ll know what rhetorical techniques you prefer, they’ll be talking to you in the ways that most resonate with you and making connections, arguments, and analogies accordingly. Super persuasion, but at the personal level, and for your benefit - a super-ego that works, in other words.

And I’m not pretending the AI is going to win all the time here, either. Your super-ego doesn’t win all the time today, does it? All it really needs to do is win more often on the margin. Think of it winning only 10% more - 10% better decisions compounded over days, weeks, years, and decades is a CRAZY big effect size. It’s like getting a 10% financial return that compounds weekly!
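A toy calculation (my own numbers, purely illustrative of the commenter's analogy) shows why a small per-period edge matters: a 10% return compounded weekly really does snowball over a year.

```python
# Illustrative sketch of the compounding analogy above.
# Assumption (mine, not the commenter's): a 10% gain per week, held for 52 weeks.
weekly_rate = 0.10
weeks = 52

growth = (1 + weekly_rate) ** weeks  # total multiplier after one year
print(f"{growth:.0f}x after one year")  # roughly 142x
```

The point survives even with much smaller rates: any consistent marginal edge, compounded over enough periods, dominates the starting position.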

So imagine being able to level up on your career, health, and hobbies pretty significantly, and your friends and family being able to do the same. It'll be pretty fun, and even when there are no more careers, there are still hobbies, and a reversion to the low-work hunter-gatherer social life that we adapted to over ~2M years of our history.

I actually wrote a post about this that fleshes it out a bit more here. Being dogs in a post-scarcity future isn't that bad, because we'll still have intellectual and capability confreres in the other humans around.

https://performativebafflement.substack.com/p/the-spastic-yuppie-zombie-hoods-in

Auros:

Since long before AI started to really take off, I have often thought that the long-termist approach to policy should be about steering towards a Star Trek future -- fully automated luxury space communism -- and away from a Blade Runner future, or any of the wide variety of candidate Fermi Filters.

Jos T:

Building tech with these tools for tech people feels pointless. But there are plenty of others who would benefit from last month's technology for the foreseeable future, like my dog's groomer, who still uses paper calendars and human calls for reminders.

Microbia:

I think there are 2 things:

(1) Don't you think our world is already incomprehensible? I was once at a disaster preparedness conference where some people pushed for the importance of scientists in rebuilding society. But over the course of the conference it became clear that professions today are so specialized that almost none of us knows anything that would help us survive — especially scientists, who are hyper-specialized.

(2) Wouldn't making the world even more incomprehensible be a choice? AI is still so heavily reliant on human knowledge and thinking through its training process that, at least with current methods, there would still be some humans with an understanding of how it works. Is it not so?
