Welcome to the alpha release of TYPE III AUDIO.
Expect very rough edges and very broken stuff—and regular improvements. Please share your thoughts.
Paul Christiano runs the Alignment Research Center (ARC) and is a member of the UK Frontier AI Taskforce. He previously ran the language model alignment team at OpenAI.
Why would advanced AI systems pose an existential risk, and what would it look like to develop safer systems? In this episode, I interview Paul Christiano about his views on how AI could be so dangerous, what bad AI scenarios could look like, and what he thinks about various techniques to reduce this risk.
Topics we discuss, and timestamps (due to mp3 compression, the timestamps may be tens of seconds off):

Material that we mention:
Paul's blog posts on AI alignment