|August 11th, 2015|
I gave a talk at Effective Altruism Global on why effective altruists should prioritize global poverty work. Here's a text version of that talk, following the same outline. It's probably closer to what I think than what I said out loud, but is still not as well grounded as I'd like.
Before we begin, I should warn you that there was a bit of a miscommunication over this talk. You see, I was expecting to be on a panel (my fault, not the organizers'). So when I found out about two hours ago that this was actually supposed to be a full talk, I went upstairs, skipped lunch, and spent the last two hours writing. I haven't done a run-through, and this might be a little short. Please be kind?
So: why global poverty? The main reason is that you can do a huge amount to help other people through the best global poverty organizations. We're here in a rich part of a rich country, on the extreme-lucky end of a hugely imbalanced wealth distribution. We're in a position where our money can go a really long way to help people at the other end of this distribution.
As an illustration, let's look at some marginal cost figures. In the UK, when the National Health Service is deciding whether to fund a new medical procedure, they use a threshold: treatments that give someone a year of healthy life for less than $X are worth funding, and ones that cost more are not. That threshold, in USD, is about $50K. Similarly, the US EPA might be deciding whether to require power plants to install a new kind of pollution-reducing device that will cost a lot of money but also make people healthier. For one fewer life lost, they think it's worth spending up to $9.1M. Other US departments have similar limits: $7.9M for the FDA, $9.4M for the Department of Transportation.
What this means is that the marginal life saved by these governments, the point at which they say "no, that's just too expensive," costs something like $9M, or $50K per healthy year. Poor countries, however, have much less money, which means many important public health interventions there aren't funded yet. In the US we've beaten malaria, but in Malawi or the DRC it's really valuable to have bednets to sleep under. In the US, children in even the poorest states rarely have intestinal parasites, but in Ethiopia or Mozambique parasites are so widespread that it makes sense to give deworming medication to all children through their schools. Bednets and deworming tablets are cheap, but of course not everyone who receives one would otherwise have gotten malaria or a severe parasitic infection. GiveWell calculates very rough cost-effectiveness figures of ~$2.3k per life saved for bednets and ~$75 per healthy year of life for deworming.
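To see just how large the gap is, here's a rough back-of-the-envelope comparison using the figures above. (These are all estimates; the ratios are only meant to convey order of magnitude.)

```python
# Rich-country willingness-to-pay thresholds vs. GiveWell's rough
# cost-effectiveness estimates, all in USD (figures from the talk).
nhs_per_healthy_year = 50_000        # UK NHS funding threshold
deworming_per_healthy_year = 75      # GiveWell estimate for deworming

epa_per_life = 9_100_000             # US EPA value of a statistical life
bednets_per_life = 2_300             # GiveWell estimate for bednets

# How many times further a dollar goes at the margin in poor countries:
print(round(nhs_per_healthy_year / deworming_per_healthy_year))  # ~667x
print(round(epa_per_life / bednets_per_life))                    # ~3957x
```

In other words, by these (very rough) numbers, money spent on the best global poverty interventions buys hundreds to thousands of times more health than money at the margin of rich-country budgets.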
These are shockingly, embarrassingly low numbers. Giving someone a year of healthy life for just $75? Surely we've already funded everything this cheap, covering the most cost-effective interventions until only much more expensive ones remained? It hurts to say it, but we haven't. You can do a lot to help here.
It is important, though, to choose the best global poverty charities, as opposed to typical ones, since the best ones are likely much, much better than average. Eva and Michael [the next two speakers] will talk more about charity evaluation and picking the best charities.
But: this is effective altruism. It's not enough to know that you can do a lot of good in global poverty. The question is, does this do the most good?
That's a really high bar: not just better than average, but better than everything else you could be doing. So what are the contenders? Talk to people here, and the main candidates for something more pressing than global poverty are existential risk, animal suffering, and movement growth. Let's compare global poverty to each of these.
So, existential risk. Everyone dies, the end. Clearly really bad. What are the risks? There's a bit of a continuum.
At one end we have risks like an asteroid hitting the earth. People are already cataloging the asteroids and comets that might hit us at some point, and the problem is actually reasonably well understood. Because we understand it well, and governments have plenty of sensible people, risks like this are reasonably well funded. So this end of the continuum is probably not high impact.
At the other end we have risks like the development of an artificial intelligence that destroys us through its indifference. Very few people are working on this, there's low funding, and we don't have much understanding of the problem. Neglectedness is a strong heuristic for finding causes where your contribution can go far, and this does seem relatively neglected. The main question for me, though, is how do you know if you're making progress?
First, a brief digression into feedback loops. People succeed when they have good feedback loops; otherwise they tend to wander in random directions. This is a problem for charity in general, because we're buying things for others instead of for ourselves. If I buy something and it's no good, I can complain to the shop, buy from a different shop, or leave a bad review. If I buy you something and it's no good, your options are much more limited. Perhaps it failed to arrive, but you never even knew you were supposed to get it? Or it arrived and was much smaller than I intended, but how would you know? Even if you do know that what you got is wrong, chances are you're not in a position to have your concerns taken seriously.
This is a big problem, and there are a few ways around this. We can include the people we're trying to help much more in the process instead of just showing up with things we expect them to want. We can give people money instead of stuff so they can choose the things they most need. We can run experiments to see which ways of helping people work best. Since we care about actually helping people instead of just feeling good about ourselves, we not only can do these things, we need to do them. We need to set up feedback loops where we only think we're helping if we're actually helping.
Back to AI risk. The problem is that we really, really don't know how to build good feedback loops here. We can theorize that an AI needs certain properties to avoid killing us all, that proving certain theorems would help secure those properties, and then go work on those theorems. Maybe we even have some success, and the mathematical community thinks highly of our work instead of dismissing it. But if our reasoning about which math would be useful is off, there's no way for us to find out: everything will still seem like it's going well.
With existential risk we have a continuum from well understood risks that don't need our marginal contribution to poorly understood risks where we don't have a way to find out if our contribution is reducing the risk. Maybe there's a sweet spot in between, where we can make progress and existing funding bodies are blind to the need? Future generations don't get to vote, so it wouldn't surprise me if governments systematically discount their interest. I'm not aware of any good candidates here, but if you'd like to find me after the talk I'd be interested in hearing about any.
Ok, so what about animals? In many ways animal suffering is actually similar to global poverty: in both you're trying to help others now, there's the potential for good feedback loops, and because you're much more privileged you're in a position to help a lot. Here it really comes down to how much you value humans versus animals. I think humans matter much more, so global poverty charities make more sense for me; but if you think the gap is smaller than that, animal charities make more sense for you. The research on the best ways to help animals isn't as far along, but you could fund Animal Charity Evaluators.
Additionally, there is a question of flow-through effects in deciding between helping humans and animals. If you keep a child from having intestinal parasites, they suffer less. Similarly, if you keep some equivalent number of pigs from being raised in factory farm conditions, they suffer less. But the child will earn more, contribute more, help others, and so on, while the pigs are not going to have the same kind of flow-through effects. This is hard to measure and very speculative, so I wouldn't put too much weight on it, but if you're on the fence between animal charities and global poverty charities, it should push you towards helping the people.
The last contender we should look at is movement building. More EAs means more money for good projects, more people working on super important things, and more thought on all these questions. The feedback loops are somewhat limited, since it's hard to tell what made the difference in someone's decision to become an EA, but there's enough signal that we can make progress.
So yes, we should do this, we should put substantial effort into growing the movement. But this isn't the only thing we should do. We can't have an entirely meta movement that goes grow, grow, grow, build growth capacity, bring in people to bring in people, bigger and bigger, and then shift focus? Turn your giant optimized-for-growth movement into an optimized-for-helping one? Not going to work.
We need to do things that help people alongside growing the movement, and personally I try to divide my efforts 50-50. As I argued above, for the doing-good-now portion I think global poverty is our best shot. This isn't settled—EA is all about being open to the best options for helping others, whatever those causes happen to be—but today I think the best you can do to help people now is donate to GiveWell's top charities.
I was asked to give this talk somewhat last minute, filling in for someone else, and I misunderstood what they were asking me to do. I didn't get the full speaker packet, and reading back over the emails they're all kind of terse, with no mention of a panel. I just wasn't careful enough, and assumed I was being asked to do the same sort of thing Julia had been asked to do.
You'll notice that if you divide $9M/life by $50K/year you get an implied lifespan of 180 years. No, I don't know where this discrepancy comes from.
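For concreteness, here's the trivial division behind that implied-lifespan figure, using the round numbers from the talk:

```python
# The implied-lifespan arithmetic from the footnote above.
value_per_life = 9_000_000  # USD, roughly the regulatory value of a statistical life
value_per_year = 50_000     # USD, the NHS healthy-year threshold
print(value_per_life / value_per_year)  # 180.0 years
```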
In fact, the pig probably won't exist at all. There are only as many pigs in the world as there are because we eat them, so if the world went vegan there would be many fewer pigs. If you think the life of a pig on a factory farm is so bad it's not worth living, which makes some sense since they're really not treated well, then fewer pigs existing is probably still an improvement.
It also probably pushes you toward developed-world charities over developing-world ones, but not by nearly enough to matter.