
Paul Christiano

Paul Christiano runs the Alignment Research Center (ARC) and is a member of the UK Frontier AI Taskforce. He previously ran the LLM alignment team at OpenAI.

Links:

Website

All the podcast interviews and talks we can find on the internet. It’s “follow on Twitter”, but for audio. New episodes added often.


Preventing an AI Takeover

31 October 2023 · Dwarkesh Podcast

Paul Christiano's views on "doom" (ft. Robert Miles)

29 September 2023 · The Inside View

168 - How to Solve AI Alignment

Bankless

12 - AI Existential Risk

1 December 2021 · AXRP - the AI X-risk Research Podcast

Curated by Alejandro Ortega.

Narrations of Paul Christiano's papers and posts on the Alignment Forum, LessWrong, and his blogs.


“Where I agree and disagree with Eliezer” by Paul Christiano

12 May 2023 · AI Safety Fundamentals: Alignment

"What failure looks like" by Paul Christiano

LessWrong

"Three impacts of machine intelligence" by Paul Christiano

paulfchristiano.medium.com

"The easy goal inference problem is still hard" by Paul Christiano

11 May 2023 · AI Safety Fundamentals: Alignment 201

Playlists featuring Paul Christiano, among other writers.

Could AI cause human extinction? An introduction

Arguments for and against.

AI takeoff speeds

Should we expect a sharp discontinuity in AI capabilities once they reach human level?

The debate around AGI ruin: a list of lethalities

Eliezer Yudkowsky's "list of reasons why AGI will kill you".


All the narrations of Paul Christiano's writing that we made or found on the internet.


“AI safety via debate” by Geoffrey Irving, Paul Christiano and Dario Amodei

12 May 2023 · AI Safety Fundamentals: Alignment

“Where I agree and disagree with Eliezer”

12 May 2023 · AI Safety Fundamentals: Alignment

“Learning from human preferences” (blog post) by Dario Amodei, Paul Christiano and Alex Ray

Blue Dot Impact

"The easy goal inference problem is still hard"

Blue Dot Impact

Click “Add to feed” on episodes, playlists, and people.

Listen online or via your podcast app.
