Relative and Absolute Benefit

June 17th, 2014
Someone comes to you claiming to have an intervention that dramatically improves life outcomes. They tell you that all people have some level of X, determined by a mixture of genetics and biology, and they show you evidence that their intervention is cheap and effective at increasing X, and separately that higher levels of X are correlated with greater life success. You're skeptical, so they show you there's a strong dose-response effect, but you're still not happy about the correlational nature of their evidence. So they go off and run a randomized controlled trial, applying their intervention to randomly chosen individuals and comparing their outcomes with those of people who don't receive it. The improvement still shows up, and with a large effect size!

What's missing is evidence that the intervention helps people in an absolute sense, instead of simply by improving their relative social position. For example, say X is height, we're just looking at men, and we're getting them to wear lifts in their shoes. While taller men do earn more, and are generally more successful along various metrics, we don't think this is because being taller makes you smarter, healthier, or more conscientious. If all people became 1" taller it would be very inconvenient but we wouldn't expect this to affect people's life outcomes very much.
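To see how this plays out, here's a minimal simulation, my own sketch rather than anything from the original post: everyone's outcome is just their percentile rank in height, so the benefit is purely positional and total welfare is fixed by construction. An RCT on shoe lifts still shows a clear treatment effect. (The population size, height distribution, and lift size are all made-up illustrative numbers.)

```python
import random

random.seed(0)

N = 100_000
heights = [random.gauss(175, 7) for _ in range(N)]  # heights in cm (made up)

# Randomly assign half the population to treatment: lifts worth +2.5cm.
treated = [i < N // 2 for i in range(N)]
random.shuffle(treated)
effective = [h + (2.5 if t else 0.0) for h, t in zip(heights, treated)]

# Outcome is purely positional: your percentile rank in effective height.
order = sorted(range(N), key=lambda i: effective[i])
outcome = [0.0] * N
for rank, i in enumerate(order):
    outcome[i] = rank / N

mean_t = sum(o for o, t in zip(outcome, treated) if t) / treated.count(True)
mean_c = sum(o for o, t in zip(outcome, treated) if not t) / treated.count(False)
print(f"treatment mean outcome: {mean_t:.3f}")  # ~0.55: the RCT "works"
print(f"control mean outcome:   {mean_c:.3f}")  # ~0.45: they were pushed down
print(f"total outcome:          {sum(outcome):.1f}")  # (N-1)/2 either way
```

The trial reports a real, replicable gain for the treatment group, even though by construction nothing the trial can deliver would make the population as a whole better off.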

Attributes like X are also weird because they put parents in a strange position. If you're mostly but not completely altruistic, you might want more X for your own child while thinking that campaigns to give X to other people's children are useless: if X is just about relative position, then for every person you "bring up" that way, other people are slightly brought down, balancing the overall outcome to basically no effect.

College degrees, especially in fields that don't directly teach skills in demand by employers, may belong in this category. Employers hire college graduates over high-school graduates, and this hiring advantage does persist as college enrollment rises, but if another 10% of people get English degrees, is everyone better off in aggregate?

Some interventions are pretty clearly not in this category. If an operation saves someone's life or cures them of something painful, they're pretty clearly better off. The difference here is that we have an absolute measurement of well-being, in this case "how healthy are you?", and we can see it remaining constant in the control group. Unfortunately, this isn't always enough: if our intervention were "take $1 each from 10,000 randomly selected people and give that $10,000 to one randomly selected person," we would see that the person gaining $10,000 was better off but wouldn't be able to see any harm to the other people, because the change in their situation is too small to measure with our tests. Because each additional dollar is less valuable, however, we would expect this transfer to make the group as a whole worse off. So "absolute measures of wellbeing apparently remaining constant in the control group" isn't enough.
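A quick back-of-the-envelope check of that claim, using logarithmic utility as a standard stand-in for the diminishing value of money (the starting-wealth figure is an arbitrary assumption of mine):

```python
import math

wealth = 30_000     # assumed starting wealth per person (made up)
n_losers = 10_000

before = (n_losers + 1) * math.log(wealth)
after = n_losers * math.log(wealth - 1) + math.log(wealth + 10_000)

print(f"change in total utility: {after - before:+.4f}")
# Prints a negative number (about -0.046): each $1 loss is individually
# far below what any survey could detect, but summed across 10,000
# people they outweigh the one person's $10,000 gain.
```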

How do we get around this? While we can't run an experiment with half the world's people as "treatment" and the other half as "control," one thing we can do is look at isolated groups where we really can apply the intervention to a large fraction of the people. Take the height example. If we were to randomly make half the people in a treatment population 1/2" taller, and this treatment population was embedded in a much larger society, the positional losses in the non-treatment group would be too diffuse to measure. But if we limit ourselves to one small community with little churn and apply the treatment to half the people, then if (as I expect) the benefit is entirely relative, we should see the control group do worse on absolute measurements of wellbeing.
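Here's a rough sketch of why concentrating the treatment helps, reusing the rank-based toy model from above (again, all sizes and magnitudes are invented): the expected harm to each untreated person scales with the fraction of their comparison group that was treated, so it only rises above sampling noise when that fraction is large.

```python
import random

random.seed(1)

def control_mean(frac_treated, n=200_000, boost=1.27):
    """Mean percentile rank of untreated people when `frac_treated`
    of the population gets a height boost (1.27cm is roughly 1/2")."""
    heights = [random.gauss(175, 7) for _ in range(n)]
    k = int(n * frac_treated)
    effective = [h + (boost if i < k else 0.0) for i, h in enumerate(heights)]
    order = sorted(range(n), key=lambda i: effective[i])
    pct = [0.0] * n
    for rank, i in enumerate(order):
        pct[i] = rank / n
    return sum(pct[k:]) / (n - k)

# n is held fixed so only the treated fraction varies.
# Treat 1% (a treatment group embedded in a much larger society):
# controls sit at ~0.4995, within sampling noise of 0.5.
print(f"1% treated:  {control_mean(0.01):.4f}")
# Treat 50% (half of a small, closed community): controls drop to
# ~0.475, a shift large enough to actually detect.
print(f"50% treated: {control_mean(0.50):.4f}")
```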

Another way to avoid interventions that mostly give positional benefit is to keep mechanisms in mind. Height increase has no plausible mechanism for improving absolute wellbeing, while focused skills training does. This isn't ideal, because you can have non-intuitive mechanisms or miss the main way an intervention leads to your measured outcome, but it can still catch some of these.

What else can we do?
