

Eliezer is still ridiculously optimistic about AI risk - LessWrong

#artificialintelligence

They actually take his arguments seriously. If I wanted to blow my life savings on some wretched crypto scam, I'd certainly listen to these guys about which was the best scam to fall for. This is what it looks like when the great hero of humanity, who has always been remarkably genre-savvy, realises that the movie he's in is 'Lovecraft-style Existential Cosmic Horror' rather than 'Rationalist Harry Potter Fanfic'. All power to Eliezer for having had a go. What sort of fool gives up before he's actually lost?


Are You an AI Doomer?. We're all gonna die and other AI…

#artificialintelligence

I was recently recommended an interview with Eliezer Yudkowsky conducted by YouTubers and all-round crypto smart guys David and Ryan from "Bankless", a crypto and blockchain education company. Here's the link if you have a spare two hours; it's an equally scary and fascinating watch. I used to obsessively watch Bankless videos back in 2021 during the last crypto/NFT boom, but since the bear market of 2022 set in, I kind of lost some of my mojo for crypto. Anyhow, this video was a dramatic departure from the Bankless crew's regular weekly roundup of the crypto markets, where they get deep into the weeds of the latest developments in the space.


OpenAI!

#artificialintelligence

I have some exciting news (for me, anyway). Starting next week, I'll be going on leave from UT Austin for one year, to work at OpenAI. They're the creators of the astonishing GPT-3 and DALL-E 2, which have not only endlessly entertained me and my kids, but recalibrated my understanding of what, for better and worse, the world is going to look like for the rest of our lives. Working with an amazing team at OpenAI, including Jan Leike, John Schulman, and Ilya Sutskever, my job will be to think about the theoretical foundations of AI safety and alignment. What, if anything, can computational complexity contribute to a principled understanding of how to get an AI to do what we want and not do what we don't want?


Discussion with Eliezer Yudkowsky on AGI interventions - Machine Intelligence Research Institute

#artificialintelligence

The following is a partially redacted and lightly edited transcript of a chat conversation about AGI between Eliezer Yudkowsky and a set of invitees in early September 2021. By default, all other participants are anonymized as "Anonymous". I think this Nate Soares quote (excerpted from Nate's response to a report by Joe Carlsmith) is useful context-setting regarding timelines, which weren't discussed as much in the transcript: The gap between AI systems then and AI systems now seems pretty plausibly greater than the remaining gap, even before accounting for the recent dramatic increase in the rate of progress, and potential future increases in rate-of-progress as it starts to feel within grasp. But basically all of that has fallen. The gap between us and AGI is made mostly of intangibles. Sure, but on my model, "good" versions of those are a hair's breadth away from full AGI already. And the fact that I need to clarify that "bad" versions don't count speaks to my point that the only barriers people can name right now are intangibles. That's a very uncomfortable place to be! But I'm in the second-to-last epistemic state, where I wouldn't feel all that shocked to learn that some group has reached the brink. Maybe I won't get that call for 10 years! But it could also be 2, and I wouldn't get to be indignant with reality. I wouldn't get to say "but all the following things should have happened first, before I made that observation" — I have made those observations. For one thing, I don't expect to need human-level compute to get human-level intelligence, and for another, I think there's a decent chance that insight and innovation have a big role to play, especially on 50-year timescales. There has been a lot of AI progress recently.