
    Three doors problem

    December 20th, 2011
    math, probability
    Humans are notoriously bad at the three doors problem. [1] To most people it is intuitively obvious that once a door is eliminated their odds are 50-50 because there are two remaining doors. This is, unfortunately, wrong: you do twice as well to switch.

    The person who first showed me this problem failed to convince me that I should switch; that took testing it with pennies and cups. Seeing that the strategy of "always switch" got twice as many pennies as the strategy of "stay with your initial guess", however, I knew I must have been thinking about it wrong. [2]

    Since then, in various conversations with people who were sure switching didn't help, I've tried many times to describe how the odds for switching can be 2/3. I've come up with many explanations that I think would have convinced me, but they don't seem to convince others. [3]
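
    One way to make the 2/3 concrete is to enumerate the three equally likely places the prize could be and count what each strategy wins. Here is that count as a short Python sketch (the door numbering and variable names are my own illustration, not anything standard):

        # Count wins for "stay" and "switch" over the three equally likely
        # prize locations. By symmetry it doesn't matter which door you pick
        # first, so fix the initial pick at door 0.
        doors = [0, 1, 2]
        pick = 0

        stay_wins = switch_wins = 0
        for prize in doors:
            # The revealed door is always empty and never your pick. (When your
            # pick is the prize, either other door could open; the choice
            # doesn't affect the counts.)
            opened = next(d for d in doors if d != pick and d != prize)
            switched_to = next(d for d in doors if d != pick and d != opened)
            stay_wins += (pick == prize)
            switch_wins += (switched_to == prize)

        print("stay wins", stay_wins, "of 3 cases")      # 1 of 3
        print("switch wins", switch_wins, "of 3 cases")  # 2 of 3

    Your first pick is right in exactly one of the three cases, so switching wins in the other two.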

    Not only are we bad at this, we're worse than pigeons: given very similar experimental setups, pigeons dramatically outperform humans:

    birds adjusted their probability of switching and staying to approximate the optimal strategy. Replication of the procedure with human participants showed that humans failed to adopt optimal strategies, even with extensive training.

    My best guess as to why we're bad at this is that some higher-level reasoning is getting in the way of simple reward counting. Which makes me wonder: if we gave this test to chimps, would their doing well or poorly be evidence of intelligence?


    [1] With boxes, because I like boxes: There are three boxes. Something you want is in one of them; the other two are empty. You pick a box, but can't open it yet. The way the game works is that after you have made your choice, one of the two boxes you didn't choose opens to reveal that it is empty. You are then given the option to keep your current box or switch to the other still-closed box. People call this the "Monty Hall problem" after a game show that didn't work this way.

    [2] If you are trying this, you might want to use a program instead of physical items, because playing it out by hand is slow enough that a lot of people get bored and leave before you have enough data to tell either way. Quickly looking online, I can't find a good one.
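
    A minimal sketch of what such a program could look like, in Python (the function name and trial count are my choices, and it assumes the rules described in [1]):

        import random

        def play(switch):
            # Play one round; return True if the player ends up with the prize.
            prize = random.randrange(3)
            pick = random.randrange(3)
            # An empty box the player didn't pick is opened.
            opened = random.choice([b for b in range(3) if b != pick and b != prize])
            if switch:
                pick = next(b for b in range(3) if b != pick and b != opened)
            return pick == prize

        trials = 100000
        for switch in (False, True):
            wins = sum(play(switch) for _ in range(trials))
            print("switch" if switch else "stay", wins / trials)
        # Typical output: stay near 0.333, switch near 0.667.

    A few thousand trials is already enough to see the gap clearly.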

    [3] I actually think I've never successfully explained this to anyone without an experimental component. Even with testing it, they still tend to need a lot of prompting.

