Readings from the AI Safety Fundamentals: Governance course.
To prevent AIs from going rogue, we’ll have to align them. In this article, Adam Jones of BlueDot Impact introduces the concept of aligning AIs, defining alignment as “making AI systems try to do what their creators intend them to do.”
Original text:
https://aisafetyfundamentals.com/blog/what-is-ai-alignment/
Author:
Adam Jones
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.