Welcome to the alpha release of TYPE III AUDIO.
Expect very rough edges and very broken stuff—and regular improvements. Please share your thoughts.
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
We recently published a summary of our paper on catastrophic risks from AI on our website, and we are cross-posting it here. We hope this summary makes our research more accessible and shares our policy recommendations in a more convenient format. (Previously, a shorter summary appeared as part of this post; we found it insufficient, so we have written this post and removed that section to avoid duplication.)
Executive summary
Catastrophic AI risks can be grouped into four key categories, which we explore below and in greater depth in CAIS's linked paper:
https://www.lesswrong.com/posts/9dNxz2kjNvPtiZjxj/an-overview-of-catastrophic-ai-risks-summary
18 August 2023