
LW - [Linkpost] Sam Altman's 2015 Blog Posts Machine Intelligence Parts 1 & 2 by Olivia Jimenez

LessWrong (Curated)

Audio version of the posts shared in the LessWrong Curated newsletter.

https://www.lesswrong.com/posts/kDjKF2yFhFEWe4hgC/announcing-the-lesswrong-curated-podcast


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Sam Altman's 2015 Blog Posts Machine Intelligence Parts 1 & 2, published by Olivia Jimenez on April 28, 2023 on LessWrong.
I'm often surprised more people haven't read OpenAI CEO Sam Altman's 2015 blog posts Machine Intelligence Part 1 & Part 2. In my opinion, they contain some of the strongest, most direct, and clearest articulations of why AGI is dangerous from a person at an AGI company.
(Note that the posts were published before OpenAI was founded. There's a helpful wiki of OpenAI history here.)
Hence: a linkpost. I've copied both posts directly below for convenience. I've also bolded a few of the lines I found especially noteworthy.
Machine intelligence, part 1
This is going to be a two-part post—one on why machine intelligence is something we should be afraid of, and one on what we should do about it. If you’re already afraid of machine intelligence, you can skip this one and read the second post tomorrow—I was planning to only write part 2, but when I asked a few people to read drafts it became clear I needed part 1.
WHY YOU SHOULD FEAR MACHINE INTELLIGENCE
Development of superhuman machine intelligence (SMI) [1] is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could. Also, most of these other big threats are already widely feared.
It is extremely hard to put a timeframe on when this will happen (more on this later), and it certainly feels to most people working in the field that it’s still many, many years away. But it’s also extremely hard to believe that it isn’t very likely that it will happen at some point.
SMI does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal (most goals, if you think about them long enough, could make use of resources currently being used by humans) wipes us out. Certain goals, like self-preservation, could clearly benefit from no humans. We wash our hands not because we actively wish ill towards the bacteria and viruses on them, but because we don’t want them to get in the way of our plans.
(Incidentally, Nick Bostrom’s excellent book “Superintelligence” is the best thing I’ve seen on this topic. It is well worth a read.)
Most machine intelligence development involves a “fitness function”—something the program tries to optimize. At some point, someone will probably try to give a program the fitness function of “survive and reproduce”. Even if not, it will likely be a useful subgoal of many other fitness functions. It worked well for biological life. Unfortunately for us, one thing I learned when I was a student in the Stanford AI lab is that programs often achieve their fitness function in unpredicted ways.
Evolution will continue forward, and if humans are no longer the most-fit species, we may go away. In some sense, this is the system working as designed. But as a human programmed to survive and reproduce, I feel we should fight it.
How can we survive the development of SMI? It may not be possible. One of my top 4 favorite explanations for the Fermi paradox is that biological intelligence always eventually creates machine intelligence, which wipes out biological life and then for some reason decides to make itself undetectable.
It’s very hard to know how close we are to machine intelligence surpassing human intelligence. Progression of machine intelligence is a double exponential function; human-written programs and computing power are getting better at an exponential rate, and self-learning/self-improving software will improve i...