March 17th, 2016
In his post on the Multiple Stage Fallacy, Yudkowsky describes a way to screw up predictive modeling:
On a probability-theoretic level, the three problems at work in the usual Multiple-Stage Fallacy are as follows:
1. First and foremost, you need to multiply *conditional* probabilities rather than the absolute probabilities. When you're considering a later stage, you need to assume that the world was such that every prior stage went through. Nate Silver was probably - though I here critique a man of some statistical sophistication - Nate Silver was probably trying to simulate his prior model of Trump accumulating enough delegates in March through June, not imagining his *updated* beliefs about Trump and the world after seeing Trump be victorious up to March.
1a. Even if you're aware in principle that you need to use conjunctive probabilities, it's hard to update far *enough* when you imagine the pure hypothetical possibility that Trump wins stages 1-4 for some reason - compared to how much you actually update when you actually see Trump winning! (Some sort of reverse hindsight bias or something? We don't realize how much we'd need to update our current model if we were already that surprised?)
2. Often, people neglect to consider disjunctive alternatives - there may be more than one way to reach a stage, so that not *all* the listed things need to happen. This doesn't appear to have played a critical role in Nate's prediction here, but I've often seen it in other cases of the Multiple-Stage Fallacy.
3. People have tendencies to assign middle-tending probabilities. So if you list enough stages, you can drive the apparent probability of anything down to zero, even if you solicit probabilities from the reader.
3a. If you're a motivated skeptic, you will be tempted to list more 'stages'.
He then gives a post of mine as an example of how badly things can go with this approach:
For an absolutely ridiculous and egregious example of the Multiple-Stage Fallacy, see e.g. this page which commits both of the first two fallacies at great length and invites the third fallacy as much as possible: jefftk.com/p/breaking-down-cryonics-probabilities.
So, first, I respect Yudkowsky a lot. He's good at explaining things, and I'm glad I took the time to read through his sequences. Additionally, I agree that by misusing this multiple-stage approach you can confuse yourself and others, and end up thinking something is much less likely than it actually is. But (a) I don't think my post does this and (b) I think this approach is probably a good one to use in general.
First, a little bit of point-by-point responding, since Yudkowsky gives my post as an example of getting all of these wrong.
1. First and foremost, you need to multiply *conditional* probabilities rather than the absolute probabilities.

I agree, and that's what I did. In the linked post I wrote:
In order to deal with independence issues, all my probability guesses are conditional on everything above them not happening. Each of these things must go right, so this works. For example, society collapsing and my cryonics organization going out of business are very much not independent. So the probability assigned to the latter is the chance that society won't collapse, but my organization goes out of business anyway.

Maybe he didn't see this because it was in a footnote?
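The conjunctive structure described here is easy to sketch. This is a minimal illustration, with made-up stage names and probabilities rather than the post's actual figures, where each estimate is conditional on every earlier stage having gone right:

```python
# Conjunctive model: each probability is conditional on all earlier
# stages succeeding, so the overall estimate is simply the product.
# Stage names and numbers are hypothetical placeholders.
stages = {
    "society doesn't collapse": 0.9,
    "organization stays in business (given no collapse)": 0.8,
    "preservation is good enough (given the above)": 0.5,
}

overall = 1.0
for name, p in stages.items():
    overall *= p

print(round(overall, 2))  # 0.36
```

Because the probabilities are conditional, multiplying them is exactly the chain rule; the mistake Yudkowsky describes is multiplying unconditional estimates instead.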
1a. Even if you're aware in principle that you need to use conjunctive probabilities, it's hard to update far *enough* when you imagine the pure hypothetical possibility that ... stages 1-4 [happened] for some reason

Yes. When you're trying to build a model in your head saying "if A happened, and then B happened, and then C happened, how likely would it be for D to happen" it's really hard to put yourself in a position where it's like you saw A, B, and C happen and you've fully internalized this surprising new information. I tried to handle this as well as I could in my post, for example considering that "it is too expensive to run me in simulation" wasn't that likely because if we got that far we would be very likely to have very cheap powerful computers, so I don't think my model is "ridiculous and egregious," but this is a real downside to this kind of modeling and a reason to take predictions you make this way less seriously.
2. Often, people neglect to consider disjunctive alternatives - there may be more than one way to reach a stage, so that not *all* the listed things need to happen.

What sort of disjunction would be reasonable here? I mean, you could be revived physically instead of by scanning and emulating, but the cryoprotectants are toxic enough that my prediction for that succeeding would be under 0.1% all on its own. Other disjunctive paths I've seen (examples) are things like "we could be in a multiverse such that being cryopreserved, even if poorly, would increase the probability of other universes copying you into them" that seem even more unlikely to me.
All in all I used a straight-up conjunctive model because I see one main-line path from "sign up to get frozen" to "successful revival" with no significant alternative paths.
3. People have tendencies to assign middle-tending probabilities. So if you list enough stages, you can drive the apparent probability of anything down to zero, even if you solicit probabilities from the reader.

Yes, this is something you have to intentionally handle. If you would give 50% for "we go to war" and also give 50% each for "we go to war with Russia" and "we go to war with China" then your inconsistency means that, unless you think we would only ever go to war against both of them at once, someone can show you think war is more than 50% likely. This was more of a problem in my first round of this, where (as Jim points out) I had enough categories of "other" that my probability that it would fail for "other reasons" was 75%. On the other hand, by the time I wrote this followup I had fixed this, and in the comments Eliezer still considered my model to be a bias-manipulating "unpacking" trick.
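The war example is just inclusion-exclusion arithmetic; here's a quick check, where the overlap value is an assumption for illustration:

```python
# If "war with Russia" and "war with China" each get 0.5, then
# P(any war) = P(R) + P(C) - P(R and C), which exceeds 0.5 unless
# the two events overlap completely. Overlap here is made up.
p_russia = 0.5
p_china = 0.5
p_both = 0.1  # assumed overlap, for illustration

p_any_war = p_russia + p_china - p_both
print(p_any_war)  # 0.9
```

So anyone giving those three 50% answers is committed, perhaps without noticing, to war being much more likely than 50%.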
3a. If you're a motivated skeptic, you will be tempted to list more 'stages'.

I was definitely not a motivated skeptic. I wanted cryonics to work, I set a threshold probability before I built my model, and if it had come out the other way I would have signed up.
Overall my model isn't perfect, and the first version I wrote still needed work, but it's still how I would think about the question and "absolutely ridiculous and egregious" seems completely uncalled for.
Beyond just cryonics, though, there's a question of whether this approach is a good tool for making predictions. I think it is: when someone estimates a large uncertain thing, they don't have much to go on. Building this sort of model makes you think through the steps, and helps avoid the planning fallacy.
Yudkowsky gives two examples of this approach failing: Nate Silver writing about Trump's Six Stages of Doom and my post. To figure out whether it tends to help or hurt in general, it would be better to get a lot more examples. It turns out, though, that this is a very common tool for people to use when estimating the efficacy of a conversion funnel. You figure out what the steps are, get estimates for each step, and that gives you an overall conversion rate. These aren't perfect, but they do pretty well, and they do a lot better than trying to estimate a conversion rate without breaking it down.
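A conversion funnel of the kind described above might be sketched like this; the step names and rates are invented for illustration, with each rate conditional on a user reaching that step:

```python
# Funnel model: each step's rate is an estimate conditional on
# reaching that step, so the product is the end-to-end conversion
# rate. Steps and rates here are hypothetical.
funnel = [
    ("visits landing page", 1.00),
    ("clicks sign-up", 0.20),
    ("completes form", 0.50),
    ("confirms email", 0.80),
]

rate = 1.0
for step, p in funnel:
    rate *= p

visitors = 10_000
signups = round(visitors * rate)
print(signups)  # 800
```

This is the same conditional-product structure as the cryonics model; the funnel version just happens to be easy to validate against observed data.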
Are there other examples of people using this method, yielding success or failure?