Rebroadcast of episode #19:
Isaac Asimov put it forward as early as 1942 with his three laws of robotics: artificial intelligence must be constrained at the very core of its foundations so that it can never turn against humankind. But how can we make sure that a "superintelligence" will not prove hostile to humanity's survival?
In this unique book, an international bestseller translated into 19 languages, Nick Bostrom reveals the difficulties that the pursuit of a superior intelligence will pose for us, and how to resolve them. It is arguably the greatest challenge humanity will ever have to face. We must prepare for it.
According to the orthogonality thesis, intelligent agents may have an enormous range of possible final goals. Nevertheless, according to what we may term the “instrumental convergence” thesis, there are some instrumental goals likely to be pursued by almost any intelligent agent, because there are some objectives that are useful intermediaries to the achievement of almost any final goal. We can formulate this thesis as follows:
The instrumental convergence thesis:
"Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent’s goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents."
Original article:
https://drive.google.com/file/d/1KewDov1taegTzrqJ4uurmJ2CJ0Y72EU3/view
Author:
Nick Bostrom
---
This article is featured on the AGI Safety Fundamentals: Alignment course curriculum.
Narrated by TYPE III AUDIO on behalf of BlueDot Impact.
In this episode, we explore philosopher Nick Bostrom's proposal for creating AI that understands and pursues human values. We discuss how such AI could be motivated to act based on predicted human approval, ultimately harnessing its intelligence to address the complex problem of value-loading.
This podcast was originally produced on Airchat: https://www.getairchat.com/bretthall/bretthall. It discusses an interview that can be found here: https://www.youtube.com/watch?v=4fv4Uz_i1iQ
Learn on your own terms. Get the PDF, infographic, full ad-free audiobook and animated version of this summary and a lot more on the top-rated StoryShots app: https://www.getstoryshots.com
Help us grow and create more amazing content for you! ⭐️⭐️⭐️⭐️⭐️ Don't forget to subscribe, rate and review the StoryShots podcast now.
What should our next book be? Suggest and vote it up on the StoryShots app.
StoryShots Book Summary and Review of Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
Life gets busy. Has Superintelligence been on your reading list? Learn the key insights now. We're scratching the surface here.
If you don't already have Nick Bostrom's popular book on artificial intelligence and technology, order it here or get the audiobook for free to learn the juicy details.
Introduction
What happens when artificial intelligence surpasses human intelligence, when machines can think, learn, and solve complex problems faster and more accurately than we can? This is the world that Nick Bostrom explores in his book, Superintelligence. Advances in artificial intelligence are bringing us closer to creating superintelligent beings.
Big tech companies like Microsoft, Google, and Facebook are all racing to create a super powerful AI. They're pouring a lot of resources into research and development to make it happen. But here's the catch: without the right safety measures and rules in place, things might go haywire. That's why it's important to step in and make sure AI stays under control.
Imagine a world where machines are not only cheaper but also way better at doing jobs than humans. In that world, machines might take over human labor, leaving people wondering, "What now?" So it's important to come up with creative solutions to make sure everyone's taken care of.
The book shows what happens after superintelligence emerges. It examines the growth of intelligence, the forms and powers of superintelligence, and its strategic choices. We have to prepare now to avoid disasters later. Bostrom offers strategies to navigate the dangers and challenges it presents.
Superintelligence examines the history of artificial intelligence and the trajectory of technological growth. The book describes how AI is advancing faster than earlier technologies did. It also reviews surveys of expert opinion on its future progress.
Sam Altman, the co-founder of OpenAI, calls Superintelligence a must-read for anyone who cares about the future of humanity. He even included it on his list of the nine books he thinks everyone should read.
This summary delves into the fascinating and sometimes frightening world of superintelligence, providing an engaging overview of Bostrom's key ideas.
Nick Bostrom is a Swedish philosopher and futurist, known for his groundbreaking work on artificial intelligence and its impact on humanity. Bostrom is a professor at the University of Oxford, where he founded the Future of Humanity Institute. In particular, he researches how advanced technologies and AI can benefit and harm society.
In addition to Superintelligence, Bostrom has authored other influential works, including Anthropic Bias: Observation Selection Effects in Science and Philosophy and Global Catastrophic Risks. His work has contributed to the ongoing discussion of humanity's future.
In this episode we talk at length about AIs, Artificial General Intelligence (AGI), and their dangers and opportunities, given the dizzying pace of developments in recent weeks with ChatGPT4, Midjourney 5 and other models. Are we close to an AGI? What is the intelligence explosion? Will we be able to control the AGIs we create? Could consciousness emerge in an AI without us seeking it?
Notes and links:
- Highlights and a summary of the GPT 4 paper we discuss in the episode: https://twitter.com/DeMasPodcast/status/1639093317781827585
- Nick Bostrom on AGI: https://twitter.com/DeMasPodcast/status/1639092776016166912
- GPT4 deceived a human during safety testing: https://www.vice.com/en/article/jg5ew4/gpt4-hired-unwitting-taskrabbit-worker
- Midjourney 5 generates strikingly photorealistic images: https://arstechnica.com/information-technology/2023/03/ai-imager-midjourney-v5-stuns-with-photorealistic-images-and-5-fingered-hands/
- Open letter from the Future of Life Institute, signed by Steve Wozniak, Elon Musk, Yuval Noah Harari and many others, asking companies and research institutes working on AI models to PAUSE the development of models more powerful than GPT4 until adequate safety measures are in place to avoid existential risks to the human race: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Nick Bostrom should step down as Director of FHI. He should move into a role as a Senior Research Fellow at FHI, and remain a Professor of Philosophy at Oxford University.
I don't seek to minimize his intellectual contribution. His seminal 2002 paper on existential risk launched a new sub-field of existential risk research (building on many others). The 2008 book on Global Catastrophic Risks he co-edited was an important part of bringing together this early field. 2014’s Superintelligence put AI risk squarely onto the agenda. And he has made other contributions across philosophy from human enhancement to the simulation hypothesis. I'm not denying that. I'm not seeking to cancel him and prevent him from writing further papers and books. In fact, I want him to spend more time on that.
But I don’t think he’s been a particularly good Director of FHI. These difficulties are demonstrated by and reinforced by his Apology. I think he should step down for the good of FHI and the field. This post has some hard truths and may be uncomfortable reading, but FHI and the field are more important than that discomfort.
Pre-existing issues
Bostrom was already struggling as Director. In the past decade, he’s churned through 5-10 administrators, due to his persistent micromanagement. He discouraged investment in the relationship with the University and sought to get around/streamline/reduce the bureaucracy involved with being part of the University.
All of this contributed to the breakdown of the relationship with the Philosophy Faculty (which FHI is a part of). This led the Faculty to impose a hiring freeze a few years ago, preventing FHI from hiring more people until they had resolved administrative problems. Until then, FHI could rely on a constant churn of new people to replace the people burnt out and/or moving on. The hiring freeze stopped the churn. The hiring freeze also contributed in part to the end of the Research Scholars Program and Cotton-Barratt’s resignation from FHI. It also contributed in part to the switch of almost all of the AI Governance Research Group to the Center for the Governance of AI.
Apology
Then in January 2023, Bostrom posted an Apology for an Old Email.
In my personal opinion, this statement demonstrated his lack of aptitude and lack of concern for his important role. These are sensitive topics that need to be handled with care. But the Apology had a glib tone, reused the original racial slur, seemed to indicate he was still open to discredited ‘race science’ hypotheses, and had an irrelevant digression on eugenics. I personally think these are disqualifying views for someone in his position as Director. But also, any of these issues would presumably have been flagged by colleagues or a communications professional. It appears he didn't check this major statement with anyone or seek feedback. Being Director of a major research center in an important but controversial field requires care, tact, leadership and attention to downside risks. The Apology failed to demonstrate that.
The Apology has had the effect of complicating many important relationships for FHI: with the University, with staff, with funders and with collaborators. Bostrom will now struggle even more to lead the center.
First, University. The Faculty was already concerned, and Oxford University is now investigating. Oxford University released a statement to The Daily Beast:
“The University and Faculty of Philosophy is currently investigating the matter but condemns in the strongest terms possible the views this particular academic expressed in his communications. Neither the content nor language are in line with our strong commitment to diversity and equality.”
B...
In this episode, you receive a letter. The sender?
One of your possible future selves, heralding a time so wonderful and desirable that words can hardly do it justice. A motivating call to strive for our best possible future.
ABSTRACT. With very advanced technology, a very large population of people living happy lives could be sustained in the accessible region of the universe. This implies corresponding opportunity costs for every year by which the development of such technologies, and the resulting colonization of the universe, is delayed: a potential good, namely lives worth living, goes unrealized. Under plausible assumptions, these costs are extremely large. However, the lesson for standard utilitarians is not that we should maximize the pace of technological progress, but rather that we should maximize its safety, i.e. the probability that the colonization of space will actually take place. This goal has such high utility that standard utilitarians should devote all their energy to it. Utilitarians of the "person-affecting" variety should accept a modified version of this conclusion. Some other ethical views that combine utilitarian considerations with other criteria will reach a similar conclusion.
Full text at:
https://effektiveraltruismus.audio/episode/astronomische-verschwendung---die-opportunitatskosten-verzogerten-technologischen-fortschritts-von-nick-bostrom
This is an audio narration of the German translation of The Fable of the Dragon-Tyrant by Nick Bostrom. The translation was done by Franz Fuchs, edited by Stephan Dalügge and narrated by Uta Reichardt. You can find the original paper at nickbostrom.com. Links and related reading suggestions are in the episode description.
Once upon a time, long, long ago, our planet was tyrannized by a giant dragon. The dragon towered over even the tallest cathedral and was covered in a thick armor of black scales. Its red eyes glowed with hatred, and a foul-smelling, yellowish-green slime flowed constantly from its terrible maw. It demanded a fearsome tribute from humanity: to satisfy its gigantic appetite, ten thousand men and women had to be brought, each day at nightfall, to the foot of the mountain where the tyrant dragon lived. Sometimes the dragon devoured these unfortunates at once; sometimes it imprisoned them inside the mountain, where they languished for months or years until they were finally eaten.