
[Week 2] “Deceptively Aligned Mesa-Optimizers: It’s Not Funny If I Have To Explain It” by Scott Alexander

AGI Safety Fundamentals: Alignment

Readings from the AI Safety Fundamentals: Alignment course.



Our goal here is to popularize obscure and hard-to-understand areas of AI alignment.

So let’s try to understand the incomprehensible meme!

Our main source will be Hubinger et al. 2019, Risks From Learned Optimization In Advanced Machine Learning Systems.

Mesa- is a Greek prefix which means the opposite of meta-. To “go meta” is to go one level up; to “go mesa” is to go one level down (nobody has ever actually used this expression, sorry). So a mesa-optimizer is an optimizer one level down from you.

Consider evolution, optimizing the fitness of animals. For a long time, it did so very mechanically, inserting behaviors like “use this cell to detect light, then grow toward the light” or “if something has a red dot on its back, it might be a female of your species, you should mate with it”. As animals became more complicated, they started to do some of the work themselves. Evolution gave them drives, like hunger and lust, and the animals figured out ways to achieve those drives in their current situation. Evolution didn’t mechanically instill the behavior of opening my fridge and eating a Swiss Cheese slice. It instilled the hunger drive, and I figured out that the best way to satisfy it was to open my fridge and eat cheese.
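The two-level structure described above can be sketched as a toy program (all names and numbers here are hypothetical illustrations, not anything from the paper): an outer "base optimizer" selects agents by how well they score on the base objective (fitness), while each agent is itself a small optimizer that maximizes its own inner drive, not fitness directly.

```python
import random

# Hypothetical toy world: actions and the calories they yield.
FOODS = {"cheese": 5, "grass": 1, "rock": 0}

def mesa_optimizer(hunger_weight):
    """The agent: an inner optimizer that picks whichever action
    best satisfies its drive (a proxy), not fitness itself."""
    return max(FOODS, key=lambda food: hunger_weight * FOODS[food])

def fitness(hunger_weight):
    """Base objective: the calories the agent's choice actually yields."""
    return FOODS[mesa_optimizer(hunger_weight)]

# Base optimizer: a crude random search over drive strengths, standing
# in for evolution selecting animals on fitness. It never specifies the
# behavior "eat cheese"; it only tunes the drive, and the agent's own
# inner optimization produces that behavior.
random.seed(0)
best_drive = max((random.uniform(0.1, 1.0) for _ in range(100)), key=fitness)
print(mesa_optimizer(best_drive))  # the selected agent's chosen behavior
```

The point of the sketch is the division of labor: the base optimizer only shapes the drive, and the concrete behavior falls out of the agent's own inner optimization, just as evolution instilled hunger rather than the specific act of opening a fridge.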



Crossposted from the Astral Codex Ten podcast.
