TYPE III AUDIO



Ajeya Cotra

Ajeya Cotra is a Senior Research Analyst at Open Philanthropy.

Links: Website · Twitter

All the podcast interviews and talks we can find on the internet. It’s like “follow on Twitter”, but for audio. New episodes added often.


  • Highlights: #151 – Ajeya Cotra on accidentally teaching AI models to deceive us · 2 August 2023 · 80k After Hours
  • Ajeya Cotra on accidentally teaching AI models to deceive us · 12 May 2023 · 80,000 Hours Podcast
  • Thinking Clearly in a Rapidly Changing World · 10 November 2022 · Future of Life Institute Podcast
  • How Artificial Intelligence Could Cause Catastrophe · 3 November 2022 · Future of Life Institute Podcast

Curated by Alejandro Ortega.

A selection of writing from LessWrong and her blog.


  • “Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover” by Ajeya Cotra · LessWrong
  • “Why AI alignment could be hard with modern deep learning” by Ajeya Cotra · 11 May 2023 · AI Safety Fundamentals: Governance
  • “Two-year update on my personal AI timelines” by Ajeya Cotra · LessWrong
  • AIs accelerating AI research · 4 April 2023 · Audio for Planned Obsolescence

Podcast covers: Audio for Planned Obsolescence · Effective Altruism Forum (Curated & Popular) · LessWrong (Curated)

Playlists featuring Ajeya Cotra, among other writers.

  • Could AI cause human extinction? An introduction · Arguments for and against.
  • AI Timelines · When should we expect to see transformative AI?
  • The debate around bioanchors · Can we predict AI progress by looking at biological intelligence?


All the narrations of her writing that we made or found on the internet.


  • “Why AI alignment could be hard with modern deep learning” · 12 May 2023 · AI Safety Fundamentals: Alignment 201
  • “Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover” · LessWrong
  • “Two-year update on my personal AI timelines” · LessWrong

Click “Add to feed” on episodes, playlists, and people.

Listen online or via your podcast app.
