

Ajeya Cotra on how Artificial Intelligence Could Cause Catastrophe

AGI Safety Fundamentals: Governance

Readings from the AI Safety Fundamentals: Governance course.

https://agisafetyfundamentals.com


Podcast: Future of Life Institute Podcast
Episode: Ajeya Cotra on how Artificial Intelligence Could Cause Catastrophe
Release date: 2022-11-03


Ajeya Cotra joins us to discuss how artificial intelligence could cause catastrophe. Follow the work of Ajeya and her colleagues: https://www.openphilanthropy.org

Timestamps:
00:00 Introduction
00:53 AI safety research in general
02:04 Realistic scenarios for AI catastrophes
06:51 A dangerous AI model developed in the near future
09:10 Assumptions behind dangerous AI development
14:45 Can AIs learn long-term planning?
18:09 Can AIs understand human psychology?
22:32 Training an AI model with naive safety features
24:06 Can AIs be deceptive?
31:07 What happens after deploying an unsafe AI system?
44:03 What can we do to prevent an AI catastrophe?
53:58 The next episode