Type III Audio


Richard Ngo

Richard Ngo works on the Governance team at OpenAI. He was previously a research engineer on the AGI safety team at DeepMind.

Links:

Website · Twitter

All the podcast interviews and talks we can find on the internet. It’s “follow on Twitter”, but for audio. New episodes added often.


EP6: Is A.I. Going To Kill Us All? (On Richard Ngo’s A.I. Safety First Principles)

23 August 2023 · Academic Edgelords

Clarifying and predicting AGI

9 May 2023 · The Inside View

Richard Ngo on large language models, OpenAI, and striving to make the future go well

80,000 Hours

13 - First Principles of AGI Safety

30 March 2022 · AXRP - the AI X-risk Research Podcast

Curated by Alejandro Ortega.

A selection of writing from papers and the Alignment Forum


"The alignment problem from a deep learning perspective" (Sections 2, 3 and 4) by Richard Ngo, Lawrence Chan & Sören Mindermann

arxiv.org

"AGI safety career advice" by Richard Ngo

EA Forum Team

"A short introduction to machine learning" by Richard Ngo

alignmentforum.org

"Visualizing the deep learning revolution" by Richard Ngo

Blue Dot Impact


Playlists featuring Richard Ngo, among other writers.

Could AI cause human extinction? An introduction

Arguments for and against.


All the narrations of his writing that we made or found on the internet.


"Gradient hacking: definitions and examples"

21 June 2023 · AI Safety Fundamentals: Alignment 201

"The ants and the grasshopper"

6 June 2023 · LessWrong Curated Podcast

"The alignment problem from a deep learning perspective" (Sections 2, 3 and 4) by Richard Ngo, Lawrence Chan & Sören Mindermann

arxiv.org

"Careers in alignment"

13 May 2023 · AI Safety Fundamentals: Alignment

Click “Add to feed” on episodes, playlists, and people.

Listen online or via your podcast app.
