Welcome to the alpha release of TYPE III AUDIO.
Expect very rough edges and very broken stuff—and regular improvements. Please share your thoughts.
Explainer podcast for Richard Ngo's "Clarifying and predicting AGI" post on LessWrong, which introduces the t-AGI framework for evaluating AI progress. A system counts as a t-AGI if, on most cognitive tasks, it outperforms most human experts who are given time t to complete the task. This is a new format, quite different from the interviews and podcasts I have recorded in the past. If you enjoyed it, let me know in the YouTube comments or on Twitter, @MichaelTrazzi.
YouTube: https://youtu.be/JXYcLQItZsk
Clarifying and predicting AGI: https://www.alignmentforum.org/posts/BoA3agdkAzL6HQtQP/clarifying-and-predicting-agi