June 2023: Welcome to the alpha release of TYPE III AUDIO.
Expect very rough edges and very broken stuff—and daily improvements. Please share your thoughts.
Readings from the AGI Safety Fundamentals: Alignment course.
https://agisafetyfundamentals.com
Within the coming decades, artificial general intelligence (AGI) may surpass human capabilities at a wide range of important tasks. We outline a case for expecting that, without substantial effort to prevent it, AGIs could learn to pursue goals which are undesirable (i.e. misaligned) from a human perspective. We argue that if AGIs are trained in ways similar to today's most capable models, they could learn to act deceptively to receive higher reward, learn internally-represented goals which generalize beyond their training distributions, and pursue those goals using power-seeking strategies. We outline how the deployment of misaligned AGIs might irreversibly undermine human control over the world, and briefly review research directions aimed at preventing this outcome.
Original article:
https://arxiv.org/abs/2209.00626
Authors:
Richard Ngo, Lawrence Chan, Sören Mindermann
---
This article is featured on the AGI Safety Fundamentals: Alignment course curriculum.
Narrated by TYPE III AUDIO on behalf of BlueDot Impact.