Welcome to the alpha release of TYPE III AUDIO.
Expect rough edges and occasional breakage, along with regular improvements. Please share your thoughts.
Readings from the AI Safety Fundamentals: Alignment course.
Why would we program AI that wants to harm us? Because we might not know how to do otherwise.
Source: https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/
Crossposted from the Cold Takes Audio podcast.