
“AGI and lock-in” by Lukas Finnveden, Jess Riedel, & Carl Shulman


The long-term future of intelligent life is currently unpredictable and undetermined. In the linked document, we argue that the invention of artificial general intelligence (AGI) could change this by making extreme types of lock-in technologically feasible. In particular, we argue that AGI would make it technologically feasible to (i) perfectly preserve nuanced specifications of a wide variety of values or goals far into the future, and (ii) develop AGI-based institutions that would (with high probability) competently pursue any such values for at least millions, and plausibly trillions, of years.

The rest of this post contains the summary (6 pages), with links to relevant sections of the main document (40 pages) for readers who want more details.

Original article:
https://forum.effectivealtruism.org/posts/KqCybin8rtfP3qztq/agi-and-lock-in

Narrated for the Effective Altruism Forum by TYPE III AUDIO.
