July 11th, 2014
Toby Ord is working on understanding "moral trade," cases where at least one of the participants wants a trade for its moral benefit. For example, say I'm vegetarian while you give a percentage of your salary to GiveDirectly. While I might not think we have an obligation to help poor people in the developing world and you might not think being vegetarian is useful, we could agree, say, that I will donate to your charity while you will become vegetarian. For purely selfish reasons this would be a bad trade—I have to give up some money, you have to give up eating meat—but once we include the altruistic aspects we both consider it superior to the current situation, so we make the trade.
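The payoff logic here can be made concrete with toy numbers. Everything below is an illustrative assumption—the utilities are made up to show the shape of the trade, not anything from Ord's work:

```python
# Toy sketch of the veg-for-antipoverty trade. All numbers are
# illustrative assumptions, chosen only to show why the trade works.

def total_utility(selfish, moral):
    """Each party's overall view of an outcome: selfish plus moral value."""
    return selfish + moral

# Without the trade, take both parties' utility as the zero baseline.
baseline_a = baseline_b = 0

# With the trade:
# I (A) give up some money (-10 selfishly) but morally value your
# going vegetarian (+15). You (B) give up meat (-10 selfishly) but
# morally value my donation (+15).
a_with_trade = total_utility(selfish=-10, moral=15)
b_with_trade = total_utility(selfish=-10, moral=15)

# Selfishly each of us is down 10, but overall both of us come out ahead.
print(a_with_trade, b_with_trade)  # 5 5
```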
One issue with this kind of trading is "factual trust": will you actually go vegetarian as promised? If we're friends we may trust each other enough not to worry about this, but otherwise we might need some system where we each prove to the other that we're still following through. While maintaining factual trust might take some work, this is the sort of problem that also comes up in standard trade and is amenable to various solutions: transparency, reporting, audits, etc.
A trickier issue, however, is "counterfactual trust": would you have gone vegetarian anyway?  This can also come up in regular trade, but it's much rarer. Like, if you're buying some milk you don't worry that the store would actually have just given you the milk instead. In fact, almost all cases of regular trade where counterfactual trust is a big consideration are cases of extortion. For example, say we live by the ocean and you're considering adding a second story to your house, blocking my view of the water. (What a nice view... Shame if something were to happen to it...) If you offer not to expand your house in exchange for a payment from me, that's extortion but it also raises issues of counterfactual trust. Were you really thinking of expanding your house, or just proposing it to get me to pay you off? I want to know what will happen if I do or don't pay you, which is a counterfactual question.
In the case where you really are planning to expand your house, but you would get less value out of that expansion than I currently get from my ocean view, we're both better off if I pay you instead. So why do we prohibit this trade as 'extortion' when it would make both of us better off? I think it has to do with incentives and the difficulty of counterfactual trust. It's very hard for me to tell whether you're planning to expand your house because you actually want to, or as a threat to get me to pay you off.
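The surplus argument in the genuine-expansion case is simple arithmetic: if expanding is worth E to you and my view is worth V to me, any payment P with E &lt; P &lt; V leaves us both ahead. A quick check with made-up numbers (the specific values are my own illustrative assumptions):

```python
# Illustrative assumptions: expanding is worth 20 to you,
# keeping my ocean view is worth 50 to me.
E = 20  # your value from expanding your house
V = 50  # my value from keeping the view

# Any payment strictly between E and V benefits both parties.
P = 30
assert E < P < V

your_gain = P - E  # you take the payment instead of expanding
my_gain = V - P    # I keep the view at a price below what it's worth to me

print(your_gain, my_gain)  # 10 20
```

The catch, as the paragraph says, is that this arithmetic only goes through if E is real—if you were never going to expand, E is effectively zero and the "trade" is a pure transfer extracted by threat.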
This is also an issue in the veg-for-antipoverty trade proposed above. If there were a well-functioning market in moral goods like vegetarianism, with lots of people buying and selling vegetarian-years, then it might well be that people were eating meat only so that they could then trade the promise not to eat it. The incentives get weird, and it looks like they lead people to do worse and worse (legal) things as leverage.
This counterfactual trust issue doesn't just apply to novel moral trades like veg-for-antipoverty but also to common moral trades like giving to charity. When a charity says I should give to them because they save a life for $X, that's a counterfactual claim: donate $X and roughly N lives will be saved; donate $0 and roughly N-1 will. But their incentive is to quote the lowest $X they can, so they look effective and get lots of donations, not to give the most accurate figure. If they have lots of programs and are only advertising the one that looks best, that's a question of simple factual trust—are they spending the money on what I thought they would—but if they have limited room for more funding then that's a problem of counterfactual trust. How do I know someone else won't come along and fund this if I don't?
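One way to make the room-for-more-funding worry concrete: if there's some probability that another donor fills the gap when I don't, my counterfactual impact shrinks by that factor. A back-of-the-envelope sketch, where both the cost figure and the probability are illustrative assumptions, not real charity data:

```python
# Illustrative assumptions only, not real figures.
cost_per_life_claimed = 5000  # the charity's advertised $X per life saved
p_funded_anyway = 0.6         # chance someone else funds this if I don't

donation = 5000

# Naive impact: take the advertised cost-per-life at face value.
naive_lives = donation / cost_per_life_claimed

# Counterfactual impact: discount by the chance the program
# would have been funded without me.
counterfactual_lives = round(naive_lives * (1 - p_funded_anyway), 2)

print(naive_lives, counterfactual_lives)  # 1.0 0.4
```

Under these made-up numbers the counterfactual impact is well under half the advertised one, which is why evaluation that tracks room for more funding matters.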
This isn't entirely resolved, but because the incentives aren't too bad, it seems like with charities we can mostly deal with it by promoting good evaluation that tracks both program effectiveness and room for more funding.
(This is a weak argument for giving to charities that are likely to also be supported by people with similar values. If I give to Charity X, this slightly brings forward the time when they will hit their limit of room for more funding, and if this is being tracked by other funders then it could free up a bit of their money. If we have similar values and ways of deciding where to give then this doesn't bother me, but if X was some kind of compromise charity that people of widely differing views could agree on, then freeing up their money for other things might be pretty bad.)
This is why when I wrote about paying with donations I said "it's money going to the organization of my choice that they wouldn't otherwise get". Without this you have the problem that if I agree to do X in exchange for a donation of $Y to GiveWell's top charity, this could be $Y you were already planning to donate.
- Local Action and Remote Donation
- Personal Consumption Changes As Charity
- Charities and Waste
- Against Singular Ye
- Optimizing Looks Weird