[Warning: the following dialogue contains an incidental spoiler for "Music in Human Evolution" by Kevin Simler. That post is short, good, and worth reading without spoilers, and this post will still be here if you come back later. It's also possible to get the point of this post by skipping the dialogue and reading the other sections.]
Pretty often, when talking to someone newly arriving at the existential risk / AGI risk / longtermism cluster, I'll have a conversation like the following:
Tsvi: "So, what's been catching your eye about this stuff?"
Arrival: "I think I want to work on machine learning, and see if I can contribute to alignment that way."
T: "What's something that got your interest in ML?"
A: "It seems like people think that deep learning might be on the final ramp up to AGI, so I should probably know how that stuff works, and I think I have a good chance of learning ML at least well enough to maybe contribute to a research project."