

[Week 7] “The longtermist AI governance landscape: a basic overview” by Sam Clarke

AGI Safety Fundamentals: Governance

Readings from the AI Safety Fundamentals: Governance course.




Aim: to give a basic overview of what is going on in longtermist AI governance.

Audience: people who have limited familiarity with longtermist AI governance and want to understand it better. I don’t expect this to be helpful for those who already have familiarity with the field. ETA: Some people who were already quite familiar with the field have found this helpful.

This post outlines the different kinds of work happening in longtermist AI governance. For each kind of work, I’ll explain it, give examples, sketch some stories for how it could have a positive impact, and list the actors I’m aware of who are currently working on it.

Firstly, some definitions: AI governance means bringing about local and global norms, policies, laws, processes, politics, and institutions (not just governments) that will affect social outcomes from the development and deployment of AI systems.[2] Longtermist AI governance, in particular, is the subset of this work that is motivated by a concern for the very long-term impacts of AI. This overlaps significantly with work aiming to govern transformative AI (TAI).

It’s worth noting that the field of longtermist AI governance is very small: I’d guess that there are around 60 people working in AI governance who are motivated by a concern for very long-term impacts.



Narrated for AGI Safety Fundamentals by TYPE III AUDIO.
