
“Counterarguments to the basic AI risk case” by Katja_Grace

AI Safety: Governance
This is going to be a list of holes I see in the basic argument for existential risk from superhuman AI systems.

To start, here’s an outline of what I take to be the basic case:

I. If superhuman AI systems are built, any given system is likely to be ‘goal-directed’.
II. If goal-directed superhuman AI systems are built, their desired outcomes will probably be about as bad as an empty universe by human lights.
III. If most goal-directed superhuman AI systems have bad goals, the future will very likely be bad.

Original article:

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
