Welcome to the alpha release of TYPE III AUDIO.
Expect very rough edges and very broken stuff—and regular improvements. Please share your thoughts.
Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea of a "fire alarm" for AI. He is a co-founder and research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.
George Hotz and Eliezer Yudkowsky hashed out their positions on AI safety.
It was a really fun debate. No promises, but there might be a round 2 where we better home in on the cruxes we began to identify here.
Watch the livestreamed YouTube version (a high-quality video will be up next week).
Catch the Twitter stream.
Listen on Apple Podcasts, Spotify, or any other podcast platform.
Check back here in about 24 hours for the full transcript.