Welcome to the alpha release of TYPE III AUDIO.
Expect rough edges and broken features, along with regular improvements. Please share your thoughts.
Readings from the AI Safety Fundamentals: Governance course.
Developments in AI could exacerbate long-running catastrophic risks, including bioterrorism, disinformation and resulting institutional dysfunction, misuse of concentrated power, nuclear and conventional war, other coordination failures, and unknown risks. This document compiles research on how AI might raise these risks. (Other material in this course discusses more novel risks from AI.) We draw heavily from previous overviews by academics, particularly Dafoe (2020) and Hendrycks et al. (2023).
Source:
https://aisafetyfundamentals.com/governance-blog/overview-of-ai-risk-exacerbation
Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.
---