September 2023: Welcome to the alpha release of TYPE III AUDIO.
Expect very rough edges and very broken stuff—and daily improvements. Please share your thoughts.
Listen to the best discussions of AI safety, AI governance, and the long-term future.
Holden Karnofsky is Director of AI Strategy at Open Philanthropy. He co-founded GiveWell and Open Philanthropy.
Paul Christiano runs the Alignment Research Center (ARC). He previously ran the language model alignment team at OpenAI.
Ajeya Cotra is a Senior Research Analyst at Open Philanthropy. She’s currently thinking about the likelihood that powerful AI systems may be misaligned, and what technical work may help to reduce that risk.
Katja Grace is the lead researcher at AI Impacts, an AI-safety project which aims to improve our understanding of the likely impacts of human-level artificial intelligence.
Carl Shulman is a Research Associate at the Future of Humanity Institute, Oxford University, where his work focuses on the long-run impacts of artificial intelligence and biotechnology. He is also an advisor to Open Philanthropy.
Lennart Heim is an AI Governance researcher at the Centre for the Governance of AI (GovAI), focusing on Compute Governance.
Richard Ngo works on the Governance team at OpenAI. He was previously a research engineer on the AGI safety team at DeepMind.
Robin Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University.
Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea that there might not be a "fire alarm" for AI. He is a co-founder and research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.
Robin Hanson hashes out AI disagreements with guests like Scott Aaronson, Katja Grace, and Zvi Mowshowitz.
How should we think about economic growth in the very long term? And how do these considerations affect our actions now?
Will AI systems converge on power-seeking behaviours? Or perhaps on moral virtues, or deontic constraints?
A weekly podcast where hosts Erik Torenberg and Nathan Labenz interview the builders on the edge of AI.