LW - Communicating effectively under Knightian norms by Richard Ngo

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Communicating effectively under Knightian norms, published by Richard Ngo on April 3, 2023 on LessWrong.
tl;dr: rationalists concerned about AI risk often make claims that others consider not just unjustified, but unjustifiable using their current methodology, because of high-level disagreements about epistemology. If you actually want to productively discuss AI risk, make claims that can be engaged with by others who have a wide range of opinions about the appropriate level of Knightian uncertainty.
I think that many miscommunications about AI risk are caused by a difference between two types of norms for how to talk about the likelihoods of unprecedented events. I'll call these "inside view norms" versus "Knightian norms", and describe them as follows:
Inside view norms: when talking to others, you report your beliefs directly, without adjusting for "Knightian uncertainty" (i.e. possible flaws or gaps in your model of the world that you can't account for directly).
Knightian norms: you report beliefs adjusted for your best estimate of the Knightian uncertainty. For example, if you can't imagine any plausible future in which humanity and aliens end up cooperating with each other, but you think this is a domain which faces heavy Knightian uncertainty, then you might report your credence that we'll ever cooperate with aliens as 20%, or 30%, or 10%, but definitely nowhere near 0.
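(To make that adjustment concrete with a toy calculation the post itself doesn't spell out: suppose your inside-view credence in eventual cooperation is around 1%, but you assign roughly 30% to the possibility that your model of the domain is missing something essential, and under that ignorance you'd fall back to something like a 50/50 prior. Then you might report approximately (1 − 0.3) × 0.01 + 0.3 × 0.5 ≈ 16%, rather than 1%. The specific mixture formula and numbers here are illustrative, not a claim about how the adjustment must be computed.)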
I'll give a brief justification of why Knightian norms seem reasonable to me, since I expect they're counterintuitive for most people on LW. On a principled level: when reasoning about complex domains like the future, the hardest part is often "knowing the right questions to ask", or narrowing down on useful categories at all. Some different ways in which a question might be the wrong one to ask:
The question might have important ambiguities. For example, consider someone from 100 years ago asking "will humans be extinct in 1000 years?" Even for a concept like extinction that seems very black-and-white, there are many possible futures which are very non-central examples of either "extinct" or "not extinct" in the questioner's mind (e.g. all humans are digital; all humans are dramatically genetically engineered; all humans are merged with AIs; etc). And so it'd be appropriate to give an answer like "X% yes, Y% no, Z% this is the wrong question to ask".
The question might be confused or ill-posed. For example, "how heavy is phlogiston?"
You might be unable to conceptualize the actual answer. For example, suppose someone from 200 years ago asks "will physics be the fastest-moving science in the year 2023?" They think about all the sciences they know of, and all the possible future sciences they can imagine, and try to assign credences to them being the fastest-moving. But they'd very likely just totally fail to conceptualize the science that has turned out to be the fastest-moving: computer science (and machine learning more specifically). Even if they reason at a meta level "there are probably a bunch of future sciences I basically can't predict at all, so I should add credence to 'no'", the resulting uncertainty is Knightian in the sense that it's generated by reasoning about your ignorance rather than your actual models of the world.
I therefore consider Knightian norms to be appropriate when you're reasoning about a domain in which these considerations seem particularly salient. I give some more clarifications at the end of the post (in particular on why I think Knightian norms are importantly different from modesty norms). However, I'm less interested in debating the value of Knightian norms directly, and more interested in their implications for how to communicate. If one person is following inside view norms and another is following Knightian norms, that can cause serious miscommunication...