{"items": [{"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/281953478485958?comment_id=281980858483220", "anchor": "fb-281980858483220", "service": "fb", "text": "@Jim: maximizing the median does funny things.  It means, for example, that it's fine to kick anyone who is less happy than the median, because that won't move the median", "timestamp": "1314970662"}, {"author": "David&nbsp;Chudzicki", "source_link": "https://plus.google.com/106120852580068301475", "anchor": "gp-1314975737917", "service": "gp", "text": "Is there reason to think that happiness is the kind of thing with a non-arbitrary \u2018zero\u2019 point, so that positive and negative make sense? How would you decide what \u2018zero\u2019 means?\n<br>\n<br>\nAlso: I\u2019m curious about how non-human animals fit into your utilitarianism.", "timestamp": 1314975737}, {"author": "David", "source_link": "https://www.facebook.com/jefftk/posts/281953478485958?comment_id=282022768479029", "anchor": "fb-282022768479029", "service": "fb", "text": "Jeff, I don't think that's necessarily true, because people less happy than the median can interact with and affect people near the median.", "timestamp": "1314976029"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/281953478485958?comment_id=282032025144770", "anchor": "fb-282032025144770", "service": "fb", "text": "David, true.  Still, maximizing the median means that the only actions that matter are ones that move the median up or down, which won't be most of them.", "timestamp": "1314977342"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://plus.google.com/103013777355236494008", "anchor": "gp-1314977575913", "service": "gp", "text": "The zero point would be at whatever level it was better for a person to have lived than not lived.\n<br>\n<br>\nI don't think non-human animals matter.  
I'm not entirely sure of this, so I think it's bad to hurt animals for fun, and I think it's especially bad to harm animals that are more like humans (apes, etc).", "timestamp": 1314977575}, {"author": "David&nbsp;Chudzicki", "source_link": "https://plus.google.com/106120852580068301475", "anchor": "gp-1314978264947", "service": "gp", "text": "Zero point: I'll have to think about this. It's not clear to me how to decide if \"it was better for a person to have lived than not lived.\" (Ask them, I guess?)\n<br>\n<br>\nNon-humans: I find that an odd perspective for a utilitarian. Have you read Peter Singer on the subject? I remember thinking his essay \"All Animals Are Equal\" [1] was pretty convincing. \n<br>\n<br>\n[1] \nhttp://www.animal-rights-library.com/texts-m/singer02.htm", "timestamp": 1314978264}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://plus.google.com/103013777355236494008", "anchor": "gp-1314978923172", "service": "gp", "text": "@David&nbsp;Chudzicki\n I agree that it's hard to find out how happy a person is, and whether it was better for them to have lived than not lived, but I don't think that means there's no such thing as happiness or a zero point.\n<br>\n<br>\nI've read some Peter Singer, but not that article.  If it has convinced you, would you mind impersonating it to convince me?", "timestamp": 1314978923}, {"author": "David&nbsp;Chudzicki", "source_link": "https://plus.google.com/106120852580068301475", "anchor": "gp-1314980865392", "service": "gp", "text": "Zero point: Sure it\u2019s fine for something to be hard to know. I just like to think about how (and whether!) it could be known, even in theory. I don\u2019t have a seriously considered epistemology, but I tend to think that if something is, in principle, unknowable, then it\u2019s actually meaningless. 
Similarly, I think \u201cHow could you know?\u201d is a helpful question in figuring out what something does mean.\n<br>\n<br>\nSinger: Here\u2019s what I remember of the basic idea, which I\u2019ve actually always believed (I may reread to see what else Singer had to say):\n<br>\n<br>\n1. At least some non-human animals clearly experience pain.\n<br>\n2. The pain that non-human animals feel is bad in the same way and for the same reason that human pain is bad. (They may not be capable of as much pain as humans, but that\u2019s another matter.)", "timestamp": 1314980865}, {"author": "Kevin", "source_link": "https://www.facebook.com/jefftk/posts/281953478485958?comment_id=282093308471975", "anchor": "fb-282093308471975", "service": "fb", "text": "1. One must consider the dimension of time. I think of happiness yield as a per annum thing, where yielding happiness for a longer time yields more total happiness. Also, we must consider non-contemporaneous effects. If we add another person, it may make everyone else less happy a hundred years from now.<br><br>In my own thinking, the buzzword \"sustainable\" is relevant here, defined as \"behaviors or states of being which may exist indefinitely without bringing about their own destruction.\" As a society, we have a tendency to overvalue any scale of happiness now at the cost of total happiness later. I think we should be creating long-term happiness solutions... 
compare the total happiness of a 10,000-year civilization with 5 billion people to a 200-year civilization of 15 billion with total happiness per annum that peaks higher.<br><br>I think the eventual logical conclusion is that we should find a way to maximize annual total happiness, with the constraint that we should make it last (proverbially) forever.<br><br>Alternately, we can create a society with tremendous fluctuations, where we have 10 billion happy people for 500 years, followed by 10 thousand miserable people living in the slowly-recovering shithole left by the multitudes for 40,000 years while the earth recovers, before we cycle again.<br><br>The question that is not addressed here is happiness justice... but I don't think this is meant to be a discussion of applied ethics, so I'll leave that as a side note.<br><br>2. Even without the consideration of time-delayed effects, I can imagine a model treating sadness as negative and happiness as positive in which, even in a population with net positive happiness, adding more people will negatively affect total happiness. I.e., the happiness lost by the group due to that person's existence is greater than the added happiness of the one extra person.<br><br>3. There are multiple modes of happiness that behave differently over time. We are talking about simple fulfillment... pretty straightforward. But what about hope? If a person's life is unfulfilling or contains great suffering, but that person is an optimist, that can have weird time effects. It might lead to happiness for the first 70 years, followed by a final 10 years of depression, which would be net positive if the happiness and the depression were equal in annual magnitude. 
Perhaps the depression is more severe than the happiness, and also contagious, yielding a net negative happiness effect on the population due to the person's existence.<br><br>This may seem a little unnecessary in a discussion which is perhaps intentionally a drastic simplification of a massively complicated systems-thinking question. However, it is relevant to inform what kind of society yields greatest total happiness. It may be that, due to hope effects, a society with tremendous income disparity has greater total happiness than an egalitarian society. (This, in turn, would depend on numerous other factors in each of the societies compared.)", "timestamp": "1314985427"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://plus.google.com/103013777355236494008", "anchor": "gp-1314986482562", "service": "gp", "text": "@David&nbsp;Chudzicki\n \n<br>\n<br>\nzero point: I do think asking people questions can get at their happiness level, though there is noise.  Answers to \"are you happy\" and similar questions correlate well with other indicators like physical and emotional state.  I think we can start to approximate the zero point by looking at which people want to kill themselves.  It would be helpful (for this purpose of finding the zero) if there were not societal taboos against suicide and people were not encouraged to think of how sad everyone else would be if they were to die.\n<br>\n<br>\nsinger: I don't dispute (1).\n<br>\n<br>\nAs for (2), I'm not sure whether non-human pain is bad.  I think the boundary might be sentience?  If it is bad, I think it's probably much less bad than human pain.", "timestamp": 1314986482}, {"author": "Chris", "source_link": "https://plus.google.com/117346402173047680184", "anchor": "gp-1314986529473", "service": "gp", "text": "This paragraph is counter-factual: If by average you mean the mean, maximizing the total and maximizing the mean are the same. 
Well, unless you're allowed to kill people to remove them from the equation. In that case, logic says you always want to kill the sad people, and if you're trying to maximize the average, you want to kill all but the happiest.\n<br>\n<br>\nThis is a big part of why I think it's important to not just add up people's happiness to figure out the evaluation function. I don't know what the right way to figure out the evaluation function is. I don't think it's a simple function at all.\n<br>\n<br>\nI think I think about making the universe a better place, but on a day to day basis I also have to make decisions to make myself happy, while still making other people happy too. As a human being, I have limited processing power so cannot figure out exactly what effect my actions will have.\n<br>\n<br>\nThus I form rules to live by. My biggest rule is to not hurt other people. Another is not to lie. I'm learning to make another rule to take care of myself and another is to communicate my emotions whenever possible. Sometimes these rules bump up against one another. When that happens, it's worth spending the extra processing power to consider which way to go about things.", "timestamp": 1314986529}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/281953478485958?comment_id=282104725137500", "anchor": "fb-282104725137500", "service": "fb", "text": "Kevin: lots there.<br><br>(1) I agree that time matters, and that we're trying to maximize total happiness over time.  
I'm not sure whether a sustainability approach (as many happy people on this earth with current technology as we can) or a growth approach (new tech, off-earth colonies, potential for *way* more happy people but also potential for catastrophic dieback or self destruction) makes more sense.<br><br>(2) definitely.<br><br>(3) I would think of hope as positive and dashed hope as negative, and try to do research to figure out how strong they are", "timestamp": "1314986866"}, {"author": "Chris", "source_link": "https://plus.google.com/117346402173047680184", "anchor": "gp-1314986874477", "service": "gp", "text": "Okay, I should have read the whole post before posting myself.  I also think it's important to remember network effects.  If you killed all the sad people, that would almost certainly have an effect of saddening other people.\n<br>\n<br>\nAlso, is freedom more important than happiness?  Is intelligence more important than happiness?  Competence?  Contentment?  Self knowledge?  Happiness today or happiness tomorrow?  Even the evaluation function for a single person is super complicated.", "timestamp": 1314986874}, {"author": "Chris", "source_link": "https://plus.google.com/117346402173047680184", "anchor": "gp-1314988031383", "service": "gp", "text": "I just noticed.  We're coming at this from a very interesting point of view.  We're kinda doing things backward from the logical way.  Instead of picking a basis that we believe in and finding out how it results and living that way, we're trying out different bases and seeing which ones lead to our intuitive beliefs.", "timestamp": 1314988031}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://plus.google.com/103013777355236494008", "anchor": "gp-1314988055156", "service": "gp", "text": "@Chris\n \" unless you're allowed to kill people to remove them from the equation\" um, sort of?  We have choices that have an effect on what the total number of people will be in the future.  
As long as the total number is constant, though, you're right that total and mean maximize the same.", "timestamp": 1314988055}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://plus.google.com/103013777355236494008", "anchor": "gp-1314988482753", "service": "gp", "text": "@Chris\n \"we're trying out different bases and seeing which ones lead to our intuitive beliefs\".  If that were all we were doing, this would be useless.  But if I start with the beliefs that include \"hurting people is wrong\", \"lying is wrong\", \"causing suffering is wrong\", \"helping people is good\" there will be cases where they conflict (lying to save someone's life, say).  I notice that some form of total utilitarianism does a good job of matching up with my existing moral beliefs but being simpler and much more consistent, so I adopt it.  Before I adopted it I was a pacifist who didn't give much money away.  I now believe, for utilitarian reasons, that those positions were wrong.  Now I am only very much against war, not convinced that war is always wrong.  And I am convinced that I should be earning as much as I can so I can give away as much as I can to effective charities.  So this isn't just an exercise of choosing a base that matches intuitive beliefs.", "timestamp": 1314988482}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://plus.google.com/103013777355236494008", "anchor": "gp-1314988628988", "service": "gp", "text": "@Chris\n I think happiness includes freedom, etc.  When people don't have freedom, they are less happy.  
People have at times thought they could increase happiness by decreasing freedom, but they turned out to be wrong.", "timestamp": 1314988628}, {"author": "Ben", "source_link": "https://www.facebook.com/jefftk/posts/281953478485958?comment_id=282339011780738", "anchor": "fb-282339011780738", "service": "fb", "text": "It makes me squirm a little to assume that happiness can be meaningfully aggregated, averaged, and compared across individuals (or across complicated hypothetical futures). This kind of large-scale utilitarianism confuses me for the following reasons:<br><br>1) Happiness comes in many diverse types (physical pleasure, emotional catharsis, intellectual amusement...) that I'm not sure can be collapsed into a single variable. Same goes for suffering. Forming a single \"happiness\" variable relies on the assumption that people can somehow rank possible experiences from best to worst in a consistent way, which I'm dubious that anyone can do.<br><br>2) Even if happiness could be collapsed into a single variable, by ranking possible experiences, how would you choose a zero point? In other words, where do you cross over from a life that increases total happiness to a life that decreases it? I suppose the zero would be the point on the continuum where people would place the experience \"never having lived at all.\" But I can't imagine a meaningful comparison between nonexistence and the alternatives.<br><br>3) Even if happiness COULD be collapsed into a single variable, and everyone COULD define a zero point, how could you aggregate across different individuals? So far, each person just has a list of preferred experiences, with \"nonexistence\" defining our zero point. To do the computations your comparisons require, we still have to define some kind of happiness unit, so we can aggregate across individuals. 
And how do you turn my ranked preferences into a happiness unit?<br><br>4) Even if you can assign happiness units to my ranked preferences, how will you convert between my happiness units and your happiness units? What's the SI unit?<br><br>5) Even if you COULD do all of the above, how would you compute the prospective happiness or suffering of nonexistent people? What units do we use for them?<br><br>I see utilitarianism as a mathematical model. It makes certain assumptions (experiences - including nonexistence - can be ranked by individuals and compared across individuals) and is useful in some contexts, but not in others.<br><br>For example, it's useful in the context of picking where my group of friends should go to dinner, but not as useful in determining whether I should have children. How will my increased stress and loss of free time compare with the joy of having a kid (question #1 above)? How can I anticipate my kid's enjoyment of life (#5)? How can I know whether my kid will even want to be alive (#2)? Assuming it will be overall negative for me to have a kid, how can I compare my potential loss of happiness to his gain of life (#3-4)?<br><br>I see where perhaps you can resolve issues #3-5 by making some reasonable assumptions about averages and such, but can you ever actually solve questions #2-3?<br><br>Sorry for phrasing so much of this as questions. Not asking you to answer them all, just the main question: How do you approach utilitarianism in a way that sidesteps these issues of rigor?", "timestamp": "1315022057"}, {"author": "Ben", "source_link": "https://www.facebook.com/jefftk/posts/281953478485958?comment_id=282343541780285", "anchor": "fb-282343541780285", "service": "fb", "text": "Sorry, hadn't read your interesting thread with David &amp; Chris from g+ until just now. 
Accidentally covered some of the same ground.<br><br>But I still question whether people can rationally rank all possible experiences - and then make a rational choice about nonexistence, too.", "timestamp": "1315022904"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/281953478485958?comment_id=283312138350092", "anchor": "fb-283312138350092", "service": "fb", "text": "@Ben:<br><br>Re 1: it's fine for happiness to have diverse types.  We can still add up its total effect by ranking preferences or experiences.  If right now I would prefer to dance rather than talk or cuddle, that means right now I expect to gain more happiness from dancing.  As for ranking experiences, while I don't always think dancing &gt; talking &gt; cuddling or anything, at any given time there's always an expectation of happiness from each.<br><br>Re 2: '''I suppose the zero would be the point on the continuum where people would place the experience \"never having lived at all.\" ''' -- right.  That's the zero point for an individual.  Adding one happy person could still bring down total happiness, though, if that person's existence made resources scarce enough to hurt others more.<br><br>Re 3: For aggregation, you start by assuming that when a given person says something makes them happy/sad etc that this is the same across all people.  Then you adjust based on culture and individual evidence to get a more accurate estimate.  So maybe I say everything makes me sad, but that's just because I am pretty willing to say so.  Then my calling something 'depressing' has less weight than someone from a culture where expressing negative thoughts is not ok calling the same thing 'depressing'.  <br><br>Re 4: People have suggested the terms 'utils' and 'hedons'.<br><br>Re 5: nonexistent people have happiness zero.  
Future people have whatever happiness we expect them to have when they exist, which is a *very* bad estimate.<br><br>\"\"\"not as useful in determining whether I should have children. How will my increased stress and loss of free time compare with the joy of having a kid (question #1 above)? How can I anticipate my kid's enjoyment of life (#5)? How can I know whether my kid will even want to be alive (#2)? Assuming it will be overall negative for me to have a kid, how can I compare my potential loss of happiness to his gain of life (#3-4)?\"\"\"<br><br>For stress vs free time in happiness, the best way to estimate is to look at existing parents and find out how happy they are.  You're probably like them and will be about as happy as they have been.<br><br>If you don't think having a kid will make *you* happier than the alternative, then utilitarianism would probably say not to have a kid.  Because if your goal is to make *other* people (existing and future) happy, there are far cheaper ways to do it.", "timestamp": "1315183963"}, {"author": "Ben", "source_link": "https://www.facebook.com/jefftk/posts/281953478485958?comment_id=283417181672921", "anchor": "fb-283417181672921", "service": "fb", "text": "On #1: I agree that in any given moment, people can probably rank potential experiences from happiest to least happy (as you ranked dancing &gt; talking &gt; cuddling). And, as you said, these orderings will probably change over time (when you get sleepy, cuddling &gt; talking &gt; dancing). Then, to arrive at an \"expected utility\" function, you would need to average these rankings over time or possible futures, perhaps weighting them somehow. But I'm not convinced that these inconsistencies are \"noise\" that can be overcome by sophisticated averaging.<br><br>Take the question of whether having a child is a good idea. By my understanding, parents tend to report lower happiness (than before having kids) when simply asked about life. 
But if primed with a question about their kids (so that they have their children in mind when they answer how happy they are), they report higher happiness. I have no way of telling which is the \"true\" answer; it seems that people who have had kids reach no consistent, settled answer on whether the hardships outweigh the joys. It's not a question of noise that needs to be averaged away. It's that having kids truly makes you both happier and less happy, in a way that cannot be ranked or compared.<br><br>Getting to #2: This inconsistency of rankings means that the ordering might even change with regards to an individual's zero point - their ranking of the \"experience\" of nonexistence. So, at one moment, the life I've lived may seem worse than no life at all; the next day, better; the next, worse again. It becomes very hard to average this across time or across possible futures (what weight do you assign to each ranking?), and therefore almost impossibly hard to anticipate whether adding a marginal person to the population will increase total happiness.<br><br>On #3: I didn't communicate well what I meant by aggregation. What I meant was: if we do something that makes me happier, and makes you less happy, how do we compare the magnitudes of those effects? In some cases, it is fairly obvious (my happiness from using you as gladiatorial entertainment is probably outweighed by your unhappiness at dying).<br><br>But in many situations, this comparison is very non-obvious. For example, if I am on the verge of starvation, and you are very hungry but not as hungry as I am, then does it increase total happiness for me to steal bread from you? I get a huge increase from getting food; you get a decrease from losing food, and from being robbed. But how can we compare these quantities? 
I can state my preferences; you can state yours; but so far as I know, there is no way to count up the utils I get, and compare them to the utils you get.", "timestamp": "1315203184"}, {"author": "Julie", "source_link": "https://www.facebook.com/jefftk/posts/281953478485958?comment_id=283498514998121", "anchor": "fb-283498514998121", "service": "fb", "text": "&lt;---very, very happy parent.  (-:", "timestamp": "1315221690"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/281953478485958?comment_id=283518521662787", "anchor": "fb-283518521662787", "service": "fb", "text": "@Julie: but you were just primed by mention of your kids ;)", "timestamp": "1315225278"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/281953478485958?comment_id=283541838327122", "anchor": "fb-283541838327122", "service": "fb", "text": "@Ben: \"having kids truly makes you both happier and less happy, in a way that cannot be ranked or compared.\"<br><br>There are a lot of things that could be affecting how happy parents report themselves as being.  For example, parents could be afraid of being thought poor parents if they said they were not happy once the researcher indicated they were paying attention to children in some way.  Perhaps the mention of kids put people into a more long term mode, where instead of reporting on their current happiness they were reflecting on an overall life satisfaction?  Or maybe they have such strong happy feelings for their kids that they just feel a huge increase in happiness when their kids are referenced.  These are testable.  For example, we could check if you would see the same thing when priming on other things that were very important and very positive to the person you were asking. 
<br><br>I see this as evidence that measuring happiness is difficult, not that there isn't something there to be measured.<br><br>\"\"\"ordering might even change with regards to an individual's zero point - their ranking of the \"experience\" of nonexistence. So, at one moment, the life I've lived may seem worse than no life at all; the next day, better; the next, worse again.\"\"\"<br><br>One way to average could be to ask people what they thought of the last decade, and take advantage of their body's internal averaging.<br><br>\"\"\"if I am on the verge of starvation, and you are very hungry but not as hungry as I am, then does it increase total happiness for me to steal bread from you? I get a huge increase from getting food; you get a decrease from losing food, and from being robbed. But how can we compare these quantities? I can state my preferences; you can state yours; but so far as I know, there is no way to count up the utils I get, and compare them to the utils you get.\"\"\"<br><br>I mostly agree.  We can't solve this problem empirically because the measurement is too hard.  If we could get more data, perhaps by taking two similar groups and doing a better job of prohibiting theft in one group, measurement might be practical.  My guess is that the stealing decreases total happiness because while the bread sustains you for an additional day the effects of the theft are large: I avoid people who might steal my bread, I tell my friends, richer people become more distrusting of poorer people and less likely to support programs designed to help them.  Or maybe not: perhaps in our society your action is understood as necessary and I don't feel any long term pain over it.  Perhaps I even feel morally righteous for contributing to you (even though it was not voluntary).  
This makes guessing at utilities difficult without better evidence.<br><br>\"\"\"almost impossibly hard to anticipate whether adding a marginal person to the population will increase total happiness\"\"\"<br><br>Yes.  Very hard.  Perhaps your kid would grow up to invent a major new energy source.  Or perhaps they would die at age 10 after you've spent a lot of money on them but before they've done much to improve other people's happiness.  So I think your \"not as useful in determining whether I should have children\" from before is right, actually.  The utilities involved are large, confusing, and difficult to estimate well.  I think if I had a strong utilitarian argument either way I would need to follow it, but I currently don't.", "timestamp": "1315228809"}, {"author": "Ben", "source_link": "https://www.facebook.com/jefftk/posts/281953478485958?comment_id=283638278317478", "anchor": "fb-283638278317478", "service": "fb", "text": "@Julie: Glad to hear it! I plan on having kids, too, a few years down the road.<br><br>@Jeff: \"Perhaps the mention of kids put people into a more long term mode, where instead of reporting on their current happiness they were reflecting on an overall life satisfaction?\" That's one of the explanations that I've heard, and not too far off from the one that I tend to believe.<br><br>So I think I see where our views on this diverge. You see inconsistent rankings of experiences as noise in the system; I see the human brain as a system that is not necessarily capable of providing consistent or meaningful rankings. It seems to me that the human brain has evolved an impressive ability to be rational over the last 2 million years; but I don't think that includes the ability to consistently rank possible experiences.<br><br>Of course, in lots of situations, the rankings are easy. For me, having 2-3 kids &gt; having 9 kids, or having 0 kids.<br><br>But is having 9 kids &gt; having 0 kids? 
Even if I could visualize exactly what my 9-kid life and my childless life would be like, I don't think I could compare them. They would be very different, but not necessarily in a way that I could ever rank. In my view, it's not a question of noise; it's that my brain is fundamentally ill-equipped to determine a preference between these two potential lives.", "timestamp": "1315240087"}, {"author": "George", "source_link": "https://www.facebook.com/jefftk/posts/281953478485958?comment_id=285960304751942", "anchor": "fb-285960304751942", "service": "fb", "text": "I would just like to add that I agree with Ben's criticisms by-and-large and find Jeff's rebuttals unconvincing, to say the least. I find very little compelling about utilitarianism. The only times I accept vaguely utilitarian reasoning is when discussing government policies in certain specific circumstances.", "timestamp": "1315607730"}]}