

Carl Shulman

Carl Shulman is a Research Associate at the Future of Humanity Institute, Oxford University, where his work focuses on the long-run impacts of artificial intelligence and biotechnology. He is also an Advisor to the Open Philanthropy Project.

Links:

Website

All the podcast interviews and talks we can find on the internet. It’s “follow on Twitter”, but for audio. New episodes added often.


Paul Christiano - Preventing an AI Takeover

31 October 2023 · Dwarkesh Podcast

AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far Future

26 June 2023 · Dwarkesh Podcast

Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment

14 June 2023 · Dwarkesh Podcast (formerly Lunar Society)

Carl Shulman on the common-sense case for existential risk work and its practical implications

5 October 2021 · 80,000 Hours Podcast

Curated by Alejandro Ortega.

A selection of Carl Shulman's writing, from academic papers and the EA Forum.


"How much should governments pay to prevent catastrophes? Longtermism’s limited role" by Elliott Thornley and Carl Shulman

22 March 2023 · EA Forum Podcast (Curated & popular)

"AGI and lock-in" by Lukas Finnveden, Jess Riedel, & Carl Shulman

27 November 2022 · EA Forum Podcast (Curated & popular)

Propositions Concerning Digital Minds and Society (2022)

25 August 2022 · Introduction to Nick Bostrom

Playlists featuring Carl Shulman, among other writers.

AI Timelines

When should we expect to see transformative AI?

AI takeoff speeds

Should we expect a sharp discontinuity in AI capabilities once they reach human level?

Digital Minds as Moral Patients

Will digital minds be capable of pleasure and pain? How can we know?


Click “Add to feed” on episodes, playlists, and people.

Listen online or via your podcast app.
