May 2023: Welcome to the alpha release of TYPE III AUDIO.
Expect very rough edges and very broken stuff—and daily improvements.
Please share your thoughts, but don't share this link on social media, for now.
Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.
Source:
https://forum.effectivealtruism.org/posts/8Z2uFCkrg2dCnadA4/kfc-supplier-sued-for-cruelty
Did you find this narration helpful? How could it be improved?
As an experiment, we’re releasing AI narrations of all new posts with >125 karma on the “EA Forum (All audio)” podcast.
Please share your thoughts and/or report bugs in the narration.
Narrated by TYPE III AUDIO.
Charity Entrepreneurship is frequently contacted by individuals and donors who like our model. Several have expressed interest in seeing the model expanded, or in seeing what a twist on the model would look like (e.g., a different cause area or region). Although we are excited about maximizing CE’s impact, we are less convinced by the idea of growing the effective charity pool via franchising or other independent nonprofit incubators. This is because new incubators often do not address the actual bottlenecks faced by the nonprofit landscape, as we see them. There are lots of factors that prevent great new charities from being launched, and from eventually having a large impact. We have scaled CE to about 10 charities a year, and from our perspective, these are the three major bottlenecks to growing the new charity ecosystem further: Mid-stage funding, Founders, and Multiplying effects.
Source:
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Source:
https://forum.effectivealtruism.org/posts/jpsugrAbjsgfm9gZM/eag-talks-are-underrated-imo
Did you find this narration helpful? How could it be improved?
As an experiment, we’re releasing AI narrations of all new posts with >125 karma on the “EA Forum (All audio)” podcast.
Please share your thoughts and/or report bugs in the narration.
Narrated by TYPE III AUDIO.
We've just passed the half-year mark for this project! If you're reading this, please consider taking this 5-minute survey — all questions optional. If you listen to the podcast version, we have a separate survey for that here. Thanks to everyone who has responded to this already!
Original text:
https://forum.effectivealtruism.org/posts/9QcmyGAjERHRFfrr7/summaries-of-top-forum-posts-1st-to-7th-may-2023
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
How worried about AI risk will we feel in the future, when we can see advanced machine intelligence up close? We should worry accordingly now.
Original article:
https://joecarlsmith.com/2023/05/08/predictable-updating-about-ai-risk
Narrated by Joe Carlsmith and included on the Effective Altruism Forum by TYPE III AUDIO.
We've just passed the half-year mark for this project! If you're reading this, please consider taking this 5-minute survey — all questions optional. If you listen to the podcast version, we have a separate survey for that here. Thanks to everyone who has responded to this already!
Original text:
https://forum.effectivealtruism.org/posts/wzn7hEj3BSz7us7ge/summaries-of-top-forum-posts-24th-30th-april-2023
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
People often ask me for career advice related to AGI safety. This post summarizes the advice I most commonly give. I’ve split it into three sections: general mindset, alignment research, and governance work. For each of the latter two, I start with high-level advice aimed primarily at students and those early in their careers, then dig into more details of the field. See also this post I wrote two years ago, containing a bunch of fairly general career advice.
General mindset
In order to have a big impact on the world you need to find a big lever. This document assumes that you think, as I do, that AGI safety is the biggest such lever. There are many ways to pull on that lever, though—from research and engineering to operations and field-building to politics and communications. I encourage you to choose between these based primarily on your personal fit—a combination of what you're really good at and what you really enjoy. In my opinion the difference between being a great versus a mediocre fit swamps other differences in the impactfulness of most pairs of AGI-safety-related jobs.
Original article:
https://forum.effectivealtruism.org/posts/xg7gxsYaMa6F3uH8h/agi-safety-career-advice
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Original article:
https://forum.effectivealtruism.org/posts/m2Y6HheC2Q2GLQ3oS/summaries-of-top-forum-posts-17th-23rd-april-2023
This podcast has just passed the 6-month mark! Please give us your feedback and suggestions so we can continue to improve — the survey should take no more than 10 minutes, and we really appreciate your input!
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
The excellent report from Rethink Priorities was my main source for this. Many of the substantial points I make are taken from it, though errors are my own. It’s worth reading! The authors are Gavriel Kleinwaks, Alastair Fraser-Urquhart, Jam Kraprayoon, and Josh Morrison.
Clean water
In the mid 19th century, London had a sewage problem. It relied on a patchwork of a few hundred sewers, of brick and wood, and hundreds of thousands of cesspits. The Thames — Londoners’ main source of drinking water — was near-opaque with waste. Here is Michael Faraday in an 1855 letter to The Times:
"Near the bridges the feculence rolled up in clouds so dense that they were visible at the surface even in water of this kind […] The smell was very bad, and common to the whole of the water. It was the same as that which now comes up from the gully holes in the streets. The whole river was for the time a real sewer […] If we neglect this subject, we cannot expect to do so with impunity; nor ought we to be surprised if, ere many years are over, a season give us sad proof of the folly of our carelessness."
Original article:
https://forum.effectivealtruism.org/posts/WLok4YuJ4kfFpDRTi/first-clean-water-now-clean-air
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
This is a linkpost for a new 80,000 Hours episode focused on how to engage on climate from an effective altruist perspective.
Rob and I have a pretty wide-ranging conversation; here are the things we cover that I find most interesting for different audiences:
Original article:
https://forum.effectivealtruism.org/posts/A3ZLLanDZZt9sgGQ9/new-80-000-hours-podcast-on-high-impact-climate-philanthropy
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Original article:
https://forum.effectivealtruism.org/posts/o3Gaoizs2So6SpgLH/summaries-of-top-forum-posts-27th-march-to-16th-april
This podcast has just passed the 6-month mark! Please give us your feedback and suggestions so we can continue to improve — the survey should take no more than 10 minutes, and we really appreciate your input!
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
I recently spent some time reflecting on my career and my life, for a few reasons:
I wanted to have a better answer to these questions:
Original article:
https://forum.effectivealtruism.org/posts/2DzLY6YP2z5zRDAGA/a-freshman-year-during-the-ai-midgame-my-approach-to-the
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
This is a linkpost for https://www.forourposterity.com/want-to-win-the-agi-race-solve-alignment/
Society really cares about safety. Practically speaking, the binding constraint on deploying your AGI could well be your ability to align your AGI. Solving (scalable) alignment might be worth lots of $$$ and key to beating China.
Look, I really don't want Xi Jinping Thought to rule the world. If China gets AGI first, the ensuing rapid AI-powered scientific and technological progress could well give it a decisive advantage (cf potential for >30%/year economic growth with AGI). I think there's a very real specter of global authoritarianism here.
Or hey, maybe you just think AGI is cool. You want to go build amazing products and enable breakthrough science and solve the world’s problems.
So, race to AGI with reckless abandon then? At this point, people get into agonizing discussions about safety tradeoffs. And many people just mood affiliate their way to an answer: "accelerate, progress go brrrr," or "AI scary, slow it down."
I see this much more practically. And, practically, society cares about safety, a lot. Do you actually think that you’ll be able to and allowed to deploy an AI system that has, say, a 10% chance of destroying all of humanity?
Original article:
https://forum.effectivealtruism.org/posts/Ackzs8Wbk7isDzs2n/want-to-win-the-agi-race-solve-alignment
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Far fewer people are working on it than you might think, and even the alignment research that is happening is very much not on track. (But it’s a solvable problem, if we get our act together.)
This is a linkpost for https://www.forourposterity.com/nobodys-on-the-ball-on-agi-alignment/
Original article:
https://forum.effectivealtruism.org/posts/5LNxeWFdoynvgZeik/nobody-s-on-the-ball-on-agi-alignment
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
In my last essay, I looked at two stories (brute preference for systematic-ness, and money-pumps) about why ethical anti-realists should still be interested in ethics – two stories about why the “philosophy game” is worth playing, even if there are no objective normative truths, and you’re free to do whatever you want. I think some versions of these stories might well have a role to play; but I find that on their own, they don’t fully capture what feels alive to me about ethics. Here I try to say something that gets closer.
Original article:
https://joecarlsmith.com/2023/02/17/seeing-more-whole
Narrated by Joe Carlsmith and included on the Effective Altruism Forum by TYPE III AUDIO.
Original article:
https://forum.effectivealtruism.org/posts/idpbfmPjHFCvzj46L/ea-and-lw-forum-weekly-summary-13th-19th-march-2023
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on longtermism. Standard cost-benefit analysis implies that governments should spend much more on reducing catastrophic risk. We argue that a government catastrophe policy guided by cost-benefit analysis should be the goal of longtermists in the political sphere. This policy would be democratically acceptable, and it would reduce existential risk by almost as much as a strong longtermist policy.
Original article:
https://forum.effectivealtruism.org/posts/DiGL5FuLgWActPBsf/how-much-should-governments-pay-to-prevent-catastrophes
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Oral rehydration therapy is now the standard treatment for dehydration. It’s saved millions of lives, and can be prepared at home in minutes. So why did it take so long to discover?
Written by Matt Reynolds for Asterisk Magazine.
Original article:
https://asteriskmag.com/issues/2/salt-sugar-water-zinc-how-scientists-learned-to-treat-the-20th-century-s-biggest-killer-of-children
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Original article:
https://forum.effectivealtruism.org/posts/fWGdsWbS6vtC9E7ii/ea-and-lw-forum-weekly-summary-6th-12th-march-2023
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
Original article:
https://forum.effectivealtruism.org/posts/yCxsz9jk5iau2uvYH/ea-and-lw-forum-weekly-summary-27th-feb-5th-mar-2023
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
I don’t think the existing evidence justifies HLI's estimate of 50% household spillovers.
My main disagreements are:
Original article:
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Original article:
https://forum.effectivealtruism.org/posts/bEJ6SyrkSF45B2LWZ/ea-and-lw-forum-weekly-summary-20th-26th-feb-2023
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
Ethical philosophy often tries to systematize. That is, it seeks general principles that will explain, unify, and revise our more particular intuitions. And sometimes, this can lead to strange and uncomfortable places.
Original article:
https://forum.effectivealtruism.org/posts/fAWotZTEnyycJnuxz/ea-and-lw-forum-weekly-summary-6th-19th-feb-2023
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
This is the story of how I came to see Wild Animal Welfare (WAW) as a less promising cause than I did initially. I summarise three articles I wrote on WAW: ‘Why it’s difficult to find cost-effective WAW interventions we could do now’, ‘Lobbying governments to improve WAW’, and ‘WAW in the far future’. I then draw some more general conclusions. The articles assume some familiarity with WAW ideas. See here or here for an intro to WAW ideas.
Original article:
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Apply now to start a nonprofit in Biosecurity or Large-Scale Global Health
In this post we introduce our top five charity ideas for launch in 2023, in the areas of Biosecurity and Large-Scale Global Health. These are the result of five months’ work from our research team, and a six-stage iterative process that includes collaboration with partners and ideas from within and outside of the EA community.
We’re looking for people to launch these ideas through our July–August 2023 Incubation Program. The deadline for applications is March 12, 2023.
We provide cost-covered two-month training, stipends, ongoing mentorship, and grants up to $200,000 per project. You can learn more on our website. We also invite you to join our event on February 20, 6PM UK Time. Sam Hilton, our Director of Research, will introduce the ideas and answer your questions. Sign up here.
Original article:
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Hi everyone,
I've been reading up on H5N1 this weekend, and I'm pretty concerned. Right now my hunch is that there is a non-zero chance that it will cost more than 10,000 people their lives.
To be clear, I think it is unlikely that H5N1 will become a pandemic anywhere close to the size of covid.
Nevertheless, I think our community should be actively following the news and start thinking about ways to be helpful if the probability increases. I am creating this thread as a place where people can discuss and share information about H5N1. We have a lot of pandemic experts in this community, so do chime in!
Original article:
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
This is a linkpost for https://epochai.org/blog/literature-review-of-transformative-artificial-intelligence-timelines
We summarize and compare several models and forecasts predicting when transformative AI will be developed.
Highlights
Original article:
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Original article:
https://forum.effectivealtruism.org/posts/Qzfew7EBPgdCzsxED/ea-and-lw-forum-weekly-summary-30th-jan-5th-feb-2023
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
This post outlines the capability approach to thinking about human welfare. I think that this approach, while very popular in international development, is neglected in EA. While the capability approach has problems, I think that it provides a better approach to thinking about improving human welfare than approaches based on measuring happiness or subjective wellbeing (SWB) or approaches based on preference satisfaction. Finally, even if you disagree that the capability approach is best, I think this post will be useful to you because it may clarify why many people and organizations in the international development or global health space take the positions that they do. I will be drawing heavily on the work of Amartya Sen, but I will often not be citing specific texts because I’m an academic and getting to write without careful citations is thrilling.
Original article:
https://forum.effectivealtruism.org/posts/zy6jGPeFKHaoxKEfT/the-capability-approach
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Original article:
https://forum.effectivealtruism.org/posts/hzc26vGa4RLns7TvK/ea-and-lw-forum-weekly-summary-23rd-29th-jan-23
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
Original article:
https://forum.effectivealtruism.org/posts/6Ezg8HgHib9bpWCFr/ea-and-lw-forum-weekly-summary-16th-22nd-jan-23
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
Original article:
https://forum.effectivealtruism.org/posts/Qk3hd6PrFManj8K6o/rethink-priorities-welfare-range-estimates
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
For many years, I've actively lived in avoidance of idolizing behavior and in pursuit of a nuanced view of even those I respect most deeply. I think this has helped me in numerous ways and has been of particular help in weathering the past few months within the EA community. Below, I discuss how I think about the act of idolizing behavior, some of my personal experiences, and how this mentality can be of use to others.
Note: I want more people to post on the EA Forum and have their ideas taken seriously regardless of whether they conform to Forum stylistic norms. I'm perfectly capable of writing a version of this post in the style typical to the Forum, but this post is written the way I actually like to write. If this style doesn’t work for you, you might want to read the first section “Anarchists have no idols” and then skip ahead to the section “Living without idols, Pt. 1” toward the end. You’ll lose some of the insights contained in my anecdotes, but still get most of the core ideas I want to convey here.
Original article:
https://forum.effectivealtruism.org/posts/jgspXC8GKA7RtxMRE/on-living-without-idols
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Original article:
https://forum.effectivealtruism.org/posts/DNWpFLrtrJXe4mted/ea-and-lw-forum-summaries-9th-jan-to-15th-jan-23
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
I previously wrote an entry for the Open Philanthropy Cause Exploration Prize on why preventing violence against women and girls is a global priority. For an introduction to the area, I have written a brief summary below. In this post, I will extend that work, diving deeper into the literature and the landscape of organisations in the field, as well as creating a cost-effectiveness model for some of the most promising preventative interventions. Based on this, I will offer some concrete recommendations for different stakeholders - from individuals looking to donate, to funders, to charity evaluators and incubators.
Original article:
https://forum.effectivealtruism.org/posts/uH9akQzJkzpBD5Duw/what-you-can-do-to-help-stop-violence-against-women-and
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
In this post, we point out that short AI timelines would cause real interest rates to be high, and would do so under expectations of either unaligned or aligned AI. However, 30- to 50-year real interest rates are low. We argue that this suggests one of two possibilities:
In the rest of this post we flesh out this argument.
Original article:
https://forum.effectivealtruism.org/posts/8c7LycgtkypkgYjZx/agi-and-the-emh-markets-are-not-expecting-aligned-or
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Original article:
https://forum.effectivealtruism.org/posts/JZuCg7TtfzzaX9bBY/ea-and-lw-forum-summaries-holiday-edition-19th-dec-8th-jan
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
At Anima International, we recently decided to suspend our campaign against live fish sales in Poland indefinitely. After a few years of running the campaign, we are now concerned about the effects of our efforts, specifically the possibility of a net negative result for the lives of animals. We believe that by writing about it openly we can help foster a culture of intellectual honesty, information sharing and accountability. Ideally, our case can serve as a good example on reflecting on potential unintended consequences of advocacy interventions.
Original article:
https://forum.effectivealtruism.org/posts/snnfmepzrwpAsAoDT/why-anima-international-suspended-the-campaign-to-end-live
This is a linkpost for https://animainternational.org/blog/why-anima-international-suspended-the-campaign-to-end-live-fish-sales-in-poland
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
This is a linkpost for https://simonm.substack.com/p/strongminds-should-not-be-a-top-rated
GWWC lists StrongMinds as a “top-rated” charity. They do so because Founders Pledge has determined StrongMinds to be cost-effective in its report on mental health.
I could say here, “and that report was written in 2019 - either they should update the report or remove the top rating” and we could all go home. In fact, most of what I’m about to say does consist of “the data really isn’t that clear yet”.
I think the strongest statement I can make (which I doubt StrongMinds would disagree with) is:
“StrongMinds have made limited effort to be quantitative in their self-evaluation, haven’t continued monitoring impact after intervention, haven’t done the research they once claimed they would. They have not been vetted sufficiently to be considered a top charity, and only one independent group has done the work to look into them.”
Original article:
https://forum.effectivealtruism.org/posts/ffmbLCzJctLac3rDu/strongminds-should-not-be-a-top-rated-charity-yet
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
If you fear that someone will build a machine that will seize control of the world and annihilate humanity, then one kind of response is to try to build further machines that will seize control of the world even earlier without destroying it, forestalling the ruinous machine’s conquest. An alternative or complementary kind of response is to try to avert such machines being built at all, at least while the degree of their apocalyptic tendencies is ambiguous.
The latter approach seems to me like the kind of basic and obvious thing worthy of at least consideration, and also in its favor, fits nicely in the genre ‘stuff that it isn’t that hard to imagine happening in the real world’. Yet my impression is that for people worried about extinction risk from artificial intelligence, strategies under the heading ‘actively slow down AI progress’ have historically been dismissed and ignored (though ‘don’t actively speed up AI progress’ is popular).
Original article:
https://forum.effectivealtruism.org/posts/vwK3v3Mekf6Jjpeep/let-s-think-about-slowing-down-ai-1
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
In previous pieces, I argued that there's a real and large risk of AI systems' aiming to defeat all of humanity combined - and succeeding.
I first argued that this sort of catastrophe would be likely without specific countermeasures to prevent it. I then argued that countermeasures could be challenging, due to some key difficulties of AI safety research.
But while I think misalignment risk is serious and presents major challenges, I don’t agree with sentiments along the lines of “We haven’t figured out how to align an AI, so if transformative AI comes soon, we’re doomed.” Here I’m going to talk about some of my high-level hopes for how we might end up avoiding this risk.
Original article:
https://forum.effectivealtruism.org/posts/rJRw78oihoT5paFGd/high-level-hopes-for-ai-alignment
Narrated by Holden Karnofsky for the Cold Takes blog.
Original article:
https://forum.effectivealtruism.org/posts/8bcPkqdLYG78YbnTh/ea-and-lw-forums-weekly-summary-5th-dec-11th-dec-22
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
Key Takeaways
Original article:
https://forum.effectivealtruism.org/posts/tnSg6o7crcHFLc395/the-welfare-range-table
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Hiya folks! I'm Patrick McKenzie, better known on the Internets as patio11. (Proof.) Long-time-listener, first-time-caller; I don't think I would consider myself an EA but I've been reading y'all, and adjacent intellectual spaces, for some time now.
Epistemic status: Arbitrarily high confidence with regards to facts of the VaccinateCA experience (though speaking only for myself), moderately high confidence with respect to inferences made about vaccine policy and mechanisms for impact last year, one geek's opinion with respect to implicit advice to you all going forward.
A Thing That Happened Last Year
As some of the California-based EAs may remember, the rollout of the COVID-19 vaccines in California and across the U.S. was... not optimal. I accidentally ended up founding a charity, VaccinateCA, which ran the national shadow vaccine location information infrastructure for 6 months.
The core product at the start of the sprint, which some of you may be familiar with, was a site which listed places to get the vaccine in California, sourced by a volunteer-driven operation to conduct an ongoing census of medical providers by calling them. Importantly, that was not our primary vector for impact, though it was very important to our trajectory.
Original article:
https://forum.effectivealtruism.org/posts/NkPghabDd54nkG3kX/some-observations-from-an-ea-adjacent-charitable-effort
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Original article:
https://forum.effectivealtruism.org/posts/LdEPDqyZvucQkxhWH/ea-and-lw-forums-weekly-summary-28th-nov-4th-dec-22
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
Key Takeaways:
Original article:
https://forum.effectivealtruism.org/posts/Mfq7KxQRvkeLnJvoB/why-neuron-counts-shouldn-t-be-used-as-proxies-for-moral
This is a linkpost for https://docs.google.com/document/d/1p50vw84-ry2taYmyOIl4B91j7wkCurlB/edit?rtpof=true&sd=true
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Long and rough notes on Effective Altruism (EA). Written to help me get to the bottom of several questions: what do I like and think is important about EA? Why do I find the mindset so foreign? Why am I not an EA? And to start me thinking about: what do alternatives to EA look like? The notes are not aimed at effective altruists, though they may perhaps be of interest to EA-adjacent people. Thoughtful, informed comments and corrections welcome (especially detailed, specific corrections!) - see the comment area at the bottom.
"Using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis": that's the idea at the foundation of the Effective Altruism (EA) ideology and movement. Over the past two decades it has gone from being an idea batted about by a few moral philosophers to being a core part of the life philosophy of thousands or tens of thousands of people, including several of the world's most powerful and wealthy individuals. These are my rough working notes on EA. The notes are long and quickly written: disorganized rough thinking, not a polished essay.
Original article:
https://michaelnotebook.com/eanotes/
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
This post introduces a framework for thinking about population ethics: “population ethics without axiology.” In its last section, I sketch the implications of adopting my framework for evaluating the thesis of longtermism. Before explaining what’s different about my proposal, I’ll describe what I understand to be the standard approach it seeks to replace, which I call “axiology-focused.”
Original article:
https://forum.effectivealtruism.org/posts/dQvDxDMyueLyydHw4/population-ethics-without-axiology-a-framework
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
In “How Likely is World War III?”, Stephen suggested the chance of an extinction-level war occurring sometime this century is just under 1%. This was a simple, rough estimate, made in the following steps:
In both the 1940s and 1950s, well-meaning and good people – the brightest of their generation – were convinced they were in an existential race with an expansionary, totalitarian regime. Because of this belief, they advocated for and participated in a ‘sprint’ race: the Manhattan Project to develop a US atomic bomb (1939-1945); and the ‘missile gap’ project to build up a US ICBM capability (1957-1962). These were both based on a mistake, however - the Nazis decided against a Manhattan Project in 1942, and the Soviets decided against an ICBM build-up in 1958. The main consequence of both was to unilaterally speed up dangerous developments and increase existential risk. Key participants, such as Albert Einstein and Daniel Ellsberg, described their involvement as the greatest mistake of their life.
Our current situation with AGI shares certain striking similarities and certain lessons suggest themselves: make sure you’re actually in a race (information on whether you are is very valuable), be careful when secrecy is emphasised, and don’t give up your power as an expert too easily.
Original article:
https://forum.effectivealtruism.org/posts/cXBznkfoPJAjacFoT/are-you-really-in-a-race-the-cautionary-tales-of-szilard-and
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
What We Owe The Future (WWOTF) by Will MacAskill has recently been released with much fanfare. While I strongly agree that future people matter morally and we should act based on this, I think the book isn’t clear enough about MacAskill’s views on longtermist priorities, and to the extent it is, it presents a mistaken view of the most promising longtermist interventions.
I argue that MacAskill:
This essay is a reconciliation of moral commitment and the good life. Here is its essence in two paragraphs:
Totalized by an ought, I sought its source outside myself. I found nothing. The ought came from me, an internal whip toward a thing which, confusingly, I already wanted – to see others flourish. I dropped the whip. My want now rested, commensurate, amidst others of its kind – terminal wants for ends-in-themselves: loving, dancing, and the other spiritual requirements of my particular life. To say that these were lesser seemed to say, “It is more vital and urgent to eat well than to drink or sleep well.” No – I will eat, sleep, and drink well to feel alive; so too will I love and dance as well as help.
Once, the material requirements of life were in competition: If we spent time building shelter it might jeopardize daylight that could have been spent hunting. We built communities to take the material requirements of life out of competition. For many of us, the task remains to do the same for our spirits. Particularly so for those working outside of organized religion on huge, consuming causes. I suggest such a community might practice something like “fractal altruism,” taking the good life at the scale of its individuals out of competition with impact at the scale of the world.
Original article:
https://forum.effectivealtruism.org/posts/AjxqsDmhGiW9g8ju6/effective-altruism-in-the-garden-of-ends
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
The long-term future of intelligent life is currently unpredictable and undetermined. In the linked document, we argue that the invention of artificial general intelligence (AGI) could change this by making extreme types of lock-in technologically feasible. In particular, we argue that AGI would make it technologically feasible to (i) perfectly preserve nuanced specifications of a wide variety of values or goals far into the future, and (ii) develop AGI-based institutions that would (with high probability) competently pursue any such values for at least millions, and plausibly trillions, of years.
The rest of this post contains the summary (6 pages), with links to relevant sections of the main document (40 pages) for readers who want more details.
Original article:
https://forum.effectivealtruism.org/posts/KqCybin8rtfP3qztq/agi-and-lock-in
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
This is going to be a list of holes I see in the basic argument for existential risk from superhuman AI systems.
To start, here’s an outline of what I take to be the basic case:
I. If superhuman AI systems are built, any given system is likely to be ‘goal-directed’
II. If goal-directed superhuman AI systems are built, their desired outcomes will probably be about as bad as an empty universe by human lights
III. If most goal-directed superhuman AI systems have bad goals, the future will very likely be bad
Original article:
https://forum.effectivealtruism.org/posts/zoWypGfXLmYsDFivk/counterarguments-to-the-basic-ai-risk-case
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Understanding the relationship between wellbeing and economic growth is a topic of key importance to Effective Altruism (e.g. see Hillebrandt and Hallstead, Clare and Goth). In particular, a key disagreement regards the Easterlin Paradox: the finding that happiness varies with income across countries and between individuals, but does not seem to vary significantly with a country’s income as it changes over time. Michael Plant recently wrote an excellent post summarizing this research. He ends up mostly agreeing with Richard Easterlin’s latest paper arguing that the Easterlin Paradox still holds, suggesting that we should look to approaches other than economic growth to boost happiness. I agree with Michael Plant that life satisfaction is a valid and reliable measure, that it should be a key goal of policy and philanthropy, and that boosting income does not increase it as much as we might naively expect. In fact, we at Founders Pledge highly value and regularly use Michael Plant’s and the Happier Lives Institute’s (HLI) research, and we believe income is only a small part of what interventions should aim at. However, my interpretation of the practical implications of Easterlin’s research differs from Easterlin’s in three ways, which I argue in this post.
Original article:
https://forum.effectivealtruism.org/posts/coryFCkmcMKdJb7Pz/does-economic-growth-meaningfully-improve-well-being-an
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
I want to know what’s going on in the world. I’m a human; I’m interested in what other humans are up to; I value them, care about their triumphs and mourn their deaths.
But:
I would really like there to be a scope-sensitive news provider making a good-faith attempt to report on the things that matter most in the world. But as far as I know, this doesn’t exist.
In the absence of such a provider, I’ve spent a small amount of time trying to find out some basic context on what happens in the world on the average day. I think of this as a bit like a cheat sheet: some information to have in the back of my mind when reading whatever regular news stories are coming at me, to ground me in something that feels a bit closer to what’s actually going on.
Original article:
https://forum.effectivealtruism.org/posts/rXYW9GPsmwZYu3doX/what-happens-on-the-average-day
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
We will never know their names. The first victim could not have been recorded, for there was no written language to record it. They were someone’s daughter, or son, and someone’s friend, and they were loved by those around them. And they were in pain, covered in rashes, confused, scared, not knowing why this was happening to them or what they could do about it — victims of a mad, inhuman god. There was nothing to be done — humanity was not strong enough, not aware enough, not knowledgeable enough, to fight back against a monster that could not be seen.
Original article:
https://forum.effectivealtruism.org/posts/jk7A3NMdbxp65kcJJ/500-million-but-not-a-single-one-more
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
In this note I’ll summarize the bio-anchors report, describe my initial reactions to it, and take a closer look at two disagreements that I have with background assumptions used by (readers of) the report.
The report attempts to forecast the year in which the amount of compute required to train a transformative AI (TAI) model will first become available. It identifies this as the year when a forecast of the compute required to train TAI intersects a forecast of the compute available for a single project's training run.
Original article:
https://docs.google.com/document/d/1_GqOrCo29qKly1z48-mR86IV7TUDfzaEXxD3lGFQ8Wk/edit#
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
So far, long-termist efforts to change the trajectory of the world focus on far-off events. This is on the assumption that we foresee some important problem and influence its outcome by working on the problem for longer. We thus start working on it sooner than others, we lay the groundwork for future research, we raise awareness, and so on.
Many longtermists propose that we now live at the “hinge of history”, usually understood on the timescale of critical centuries, or critical decades. But “hinginess” is likely not constant: some short periods will be significantly more eventful than others. It is also possible that these periods will present even more leveraged opportunities for changing the world’s trajectory.
These “maximally hingey” moments might be best influenced by sustained efforts long before them (as described above). But it seems plausible that in many cases, the best realistic chance to influence them is “while they are happening”, via a concentrated effort at that moment.
Original article:
https://forum.effectivealtruism.org/posts/sgcxDwyD2KL6BHH2C/case-for-emergency-response-teams
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Shrimp Welfare Project produced this report to guide our decision-making on funding further research into shrimp welfare and on which interventions to allocate our resources to. We are cross-posting this on the forum because we think it may be useful to share the complexity of understanding the needs of beneficiaries who cannot communicate with us. We also hope it will be useful for other organisations working on shrimp welfare, and it’s hopefully an interesting read!
Original article:
https://forum.effectivealtruism.org/posts/nGrmemHzQvBpnXkNX/what-matters-to-shrimps-factors-affecting-shrimp-welfare-in
Narrated for the Effective Altruism Forum by TYPE III AUDIO.
Original article:
https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/cwa5m5pJQh857GE7C
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
Original article:
https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/tm3RMfxetLsmcwftQ
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
Original article:
https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/YxiXZcddn4kEqGdr9
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
Original article:
https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/Hi5z6tm9d2keHALgv
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
Original article:
https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/pmJRXG3cTgrt779Ep
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
Original article:
https://forum.effectivealtruism.org/s/W4fhpuN26naxGCBbN/p/qFqNaLAkMdmwKNBbs
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
Original article:
https://forum.effectivealtruism.org/posts/tokGikSg3fSJun4Lw/ea-and-lw-forums-weekly-summary-19-25-sep-22
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.
Note from Coleman Snell:
Thanks for listening to the very first episode of EA Forum Summaries Weekly! Please note that this podcast will only contain summaries of EA Forum posts, and not LessWrong posts. This is to keep the episodes short & sweet for a weekly series. Other options would have included covering both forums but raising the karma threshold.
Original article:
https://forum.effectivealtruism.org/posts/5wzhWsHrZSLwXxc5q/ea-and-lw-forums-weekly-summary-12-18-sep-22
This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.
Narrated by Coleman Jackson Snell. Summaries written by Zoe Williams (Rethink Priorities).
Published by TYPE III AUDIO on behalf of the Effective Altruism Forum.