Flavors of Utilitarianism

September 2nd, 2011
morality, utilitarianism
In utilitarianism you do what you think does the best job of making everyone happy [1]. There are many questions all utilitarians agree on: is it good to kick strangers for no reason? (no) Should I help people who are suffering? (yes) Are there theoretical cases where major suffering for one person is better than minor suffering for a large enough number of people? (yes) But what about: should we try to avert threats to the existence of humanity as a whole? Should we try to establish colonies? Or even, should we limit our population?

I see these as depending on two details of utilitarianism that people disagree on. The first is whether we're trying to maximize total or average happiness [2]. Consider a large population with medium average happiness against a small population with high average happiness, where the large population is large enough that even though individual happiness isn't that high, its total happiness is still higher than the small population's. Which would be the better population for humanity to be? Someone maximizing total happiness would choose the large one, while someone maximizing average happiness would choose the small one.
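To make the disagreement concrete, here's a minimal sketch in Python. The population sizes and happiness values are made-up numbers of my own, chosen only to illustrate how the two criteria rank the same pair of populations differently:

    # Toy numbers, invented for illustration; "avg_happiness" is a
    # per-person average on an arbitrary scale.
    large = {"people": 10_000_000, "avg_happiness": 3.0}  # medium average
    small = {"people": 1_000_000, "avg_happiness": 8.0}   # high average

    def total(pop):
        return pop["people"] * pop["avg_happiness"]

    def average(pop):
        return pop["avg_happiness"]

    print(total(large), total(small))      # 30,000,000 vs 8,000,000
    print(average(large), average(small))  # 3.0 vs 8.0

A total utilitarian looks at the first line and picks the large population; an average utilitarian looks at the second and picks the small one.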

I believe total happiness is what we should be maximizing. Average happiness leads to unreasonable claims, like that it's better to have one really happy person than a billion people who are almost as happy. Some people claim that total happiness leads to the "repugnant conclusion": that we should keep increasing the population until there are huge numbers of people with very low but still positive happiness. I would argue that this isn't really that bad. When we say 'positive happiness' we're counting both suffering and joy, so someone with low positive happiness would believe that, on balance, their life was worth living and be glad they got the chance to live it.

Average utilitarianism would say that we should limit our population so as to have more resources per person and greater average happiness. Total utilitarianism says we should seek the population size that leads to the greatest total happiness. If we think we're at the point where additional children decrease global happiness by increasing competition for scarce resources, then we should work to limit overpopulation, but not otherwise.

The other question is whether most people are happy or unhappy: is total happiness positive or negative? If we believe total happiness is positive and likely to remain so, then it would make sense to fund asteroid tracking research [3] to try to prevent an asteroid collision that kills everyone and removes all future potential for happiness. If we believe it's negative and expect it to stay that way, then we should spend our money on making sad people happier instead of on trying to prevent human extinction. [4]

[1] "maximizes utility over all people"

[2] Wikipedia: average and total utilitarianism

[3] Assuming this is the most cost-effective existential risk to be working to prevent.

[4] At the extreme, someone who believed total happiness was unavoidably negative should work to quickly and painlessly kill everyone; perhaps researching bioweapons would make sense. The main way this would fail, though, is by killing a lot of people but not everyone, which would dramatically increase suffering. Also, I disagree both (a) that happiness is net negative now and (b) that we should expect it to be in the future, so I think this would be a really bad idea.
