Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the argument that there may be no "fire alarm" for AI, no clear warning sign that artificial general intelligence is imminent. He is a co-founder and research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.
A collection of all the podcast interviews and talks we could find online, with new episodes added often. Think of it as "follow on Twitter", but for audio.
Curated by Alejandro Ortega.
A selection of his writing from LessWrong, along with his papers.
Playlists featuring Eliezer Yudkowsky, among other writers.