
Eliezer Yudkowsky

Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea of a "fire alarm" for AI. He is a co-founder and research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.

Links:

Twitter

All the podcast interviews and talks we can find on the internet. It’s “follow on Twitter”, but for audio. New episodes added often.

Add to my feed

Who is Eliezer Yudkowsky? Decoding the Mind Behind AI Extinction

30 November 2023 · THE AI REVOLUTION

Malcolm Got in a Heated Argument with Eliezer Yudkowsky at a Party (Recounting an AI Safety Debate)

29 September 2023 · Based Camp | Simone & Malcolm Collins

George Hotz vs Eliezer Yudkowsky AI Safety Debate

17 August 2023 · Dwarkesh Podcast (formerly The Lunar Society)

Can We Stop the AI Apocalypse? | Eliezer Yudkowsky

13 July 2023 · Hold These Truths with Dan Crenshaw

Curated by Alejandro Ortega.

A selection of Yudkowsky's writing from LessWrong and his papers.

Add to my feed

"AGI Ruin: A List of Lethalities" by Eliezer Yudkowsky

11 May 2023 · AI Safety Fundamentals: Alignment 201

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down by Eliezer Yudkowsky

29 March 2023 · jacquesthibs (forum.effectivealtruism.org)

"Introduction to Logical Decision Theory for Computer Scientists" by Eliezer Yudkowsky

13 May 2023 · AI Safety Fundamentals: Alignment 201

"GPTs are Predictors, not Imitators" by Eliezer Yudkowsky

LessWrong


Playlists featuring Eliezer Yudkowsky, among other writers.

Alignment blog posts

Selected posts from LessWrong and the Alignment Forum.

Could AI cause human extinction? An introduction

Arguments for and against.

The debate around bioanchors

Can we predict AI progress by looking at biological intelligence?

AI takeoff speeds

Should we expect a sharp discontinuity in AI capabilities once they reach human level?

The debate around AGI ruin: a list of lethalities

Eliezer Yudkowsky's "list of reasons why AGI will kill you".


All the narrations of his writing that we made or found on the internet.

Add to my feed

"Introduction to Logical Decision Theory for Computer Scientists"

13 May 2023 · AI Safety Fundamentals: Alignment 201

"AGI Ruin: A List of Lethalities"

11 May 2023 · AI Safety Fundamentals: Alignment 201

Click “Add to feed” on episodes, playlists, and people.

Listen online or via your podcast app.
