

LW - New blog: Planned Obsolescence by Ajeya Cotra

AGI Safety Fundamentals: Alignment

Readings from the AI Safety Fundamentals: Alignment course.




Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New blog: Planned Obsolescence, published by Ajeya Cotra on March 27, 2023 on LessWrong.

Kelsey Piper and I just launched a new blog about AI futurism and AI alignment called Planned Obsolescence. If you're interested, you can check it out here. Both of us have thought a fair bit about what we see as the biggest challenges in technical work and in policy to make AI go well, but a lot of our thinking isn't written up, or is embedded in long technical reports. This is an effort to make our thinking more accessible. That means it's mostly aiming at a broader audience than LessWrong and the EA Forum, although some of you might still find some of the posts interesting.

So far we have seven posts:

- What we're doing here
- "Aligned" shouldn't be a synonym for "good"
- Situational awareness
- Playing the training game
- Training AIs to help us align AIs
- Alignment researchers disagree a lot
- The ethics of AI red-teaming

Thanks to ilzolende for formatting these posts for publication. Each post has an accompanying audio version generated by a voice synthesis model trained on the author's voice using Descript Overdub. You can submit questions or comments to

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit