|August 12th, 2015|
I try to be an effective altruist: to do as much good as possible with my altruistic choices. So say I'm donating to charity X, which works on one cause, and you come along and point out that charity Y, which also works on this cause, gets more done for the money. I'd be glad you told me! I'd want to look into your reasoning and make sure I agree, but if it all checks out I'd much rather donate to charity Y.
The same thing holds if your charity Y is in a different "cause". For example, if I were donating to a charity that builds wells and you proposed I give to one that distributes bednets, then unless I think wells are valuable in and of themselves I should still consider charity Y. Within a narrowly defined cause it's easier to make comparisons (charity A digs equivalent wells for half the cost charity B needs), but it's worth the effort to get things on the same footing across causes, because different ways of trying to help people can vary massively in their effectiveness. (Eradicating smallpox is one of the best things humanity has ever done, but if I'd been looking only at fighting malaria I wouldn't have considered switching.)
This means that when you get a group of people together who are trying to figure out how they can most help, they tend to spend a lot of time talking to each other about how different causes compare. A lot of the cross-cause judgments depend on questions where there's no consensus: how do future people matter compared to current people? How much do animals matter? How capable are people of making progress when they don't have good feedback loops? How do we compare saving lives to reducing suffering? Are interventions that increase the population good or bad in themselves? Should we work on things with a small chance of success but where success would be very valuable?
Situations where people have thought about the same information and come to different conclusions are really helpful for learning what matters to us and finding reasoning mistakes we may have made. We'd all like everyone to end up on the same page, so you get conversations where people try to convince each other and update their own understanding. When done thoughtfully this looks like "from what I know about you and your values I'd expect you to support X because Y, but you said you supported Z; what am I missing?" When done poorly it looks like "why are you wasting your time on Z when X is so much more important!"
Effective altruism is cause neutral in the sense that we're looking for the best ways to make the world better, whatever they happen to be. If you came to your current altruistic approach by considering the various ways you could help and taking the one that seemed best, please don't be put off EA by overconfident people dismissing your plan in favor of theirs. EA is and must be a broad tent because our knowledge about how to make things better is still early and incomplete.
People can have other valid reasons to donate their time or money. Perhaps I find it fulfilling to help people who have suffered things similar to what I've suffered, even though I know there are much worse things out there. Or perhaps I volunteer because it gives me something interesting to do where I meet people.