
[Week 1] “Intelligence Explosion: Evidence and Import” (Sections 3 to 4.1) by Luke Muehlhauser & Anna Salamon

AGI Safety Fundamentals: Alignment

Readings from the AI Safety Fundamentals: Alignment course.




It seems unlikely that humans are near the ceiling of possible intelligences; more plausibly, we are simply the first such intelligence that happened to evolve. Computers far outperform humans in many narrow niches (e.g. arithmetic, chess, memory size), and there is reason to believe that similarly large improvements over human performance are possible for general reasoning, technology design, and other tasks of interest. As occasional AI critic Jack Schwartz (1987) wrote:

"If artificial intelligences can be created at all, there is little reason to believe that initial successes could not lead swiftly to the construction of artificial superintelligences able to explore significant mathematical, scientific, or engineering alternatives at a rate far exceeding human ability, or to generate plans and take action on them with equally overwhelming speed. Since man’s near-monopoly of all higher forms of intelligence has been one of the most basic facts of human existence throughout the past history of this planet, such developments would clearly create a new economics, a new sociology, and a new history."

Why might AI “lead swiftly” to machine superintelligence? Below we consider some reasons.

Original article by Luke Muehlhauser and Anna Salamon.

This article is featured on the AGI Safety Fundamentals: Alignment course curriculum.

Narrated by TYPE III AUDIO on behalf of BlueDot Impact.

Share feedback on this narration.