Welcome to the alpha release of TYPE III AUDIO.
Expect very rough edges and very broken stuff—and regular improvements. Please share your thoughts.
Playlist
Audio narrations of articles and blog posts.
This is a linkpost for https://www.forourposterity.com/want-to-win-the-agi-race-solve-alignment/
Society really cares about safety. Practically speaking, the binding constraint on deploying your AGI could well be your ability to align your AGI. Solving (scalable) alignment might be worth lots of $$$ and key to beating China.
Look, I really don't want Xi Jinping Thought to rule the world. If China gets AGI first, the ensuing rapid AI-powered scientific and technological progress could well give it a decisive advantage (cf. the potential for >30%/year economic growth with AGI). I think there's a very real specter of global authoritarianism here.
Or hey, maybe you just think AGI is cool. You want to go build amazing products and enable breakthrough science and solve the world’s problems.
So, race to AGI with reckless abandon then? At this point, people get into agonizing discussions about safety tradeoffs. And many people just mood affiliate their way to an answer: "accelerate, progress go brrrr," or "AI scary, slow it down."
I see this much more practically. And, practically, society cares about safety, a lot. Do you actually think that you’ll be able to and allowed to deploy an AI system that has, say, a 10% chance of destroying all of humanity?
Original article:
https://forum.effectivealtruism.org/posts/Ackzs8Wbk7isDzs2n/want-to-win-the-agi-race-solve-alignment
Narrated for the Effective Altruism Forum by TYPE III AUDIO.