{"items": [{"author": "Danner", "source_link": "https://www.facebook.com/jefftk/posts/417200831655272?comment_id=417207358321286", "anchor": "fb-417207358321286", "service": "fb", "text": "This sounds like the shortcut of vanity metrics. The metric should be the improvement of humanity, not individual pleasure - I know, I'm a monster.", "timestamp": "1340978696"}, {"author": "Alexander", "source_link": "https://plus.google.com/106808239073321070854", "anchor": "gp-1340980146461", "service": "gp", "text": "If the extrapolated volition of humanity was to converge to wireheading then it could as well mean that killing all humans and turning the universe into some sort of\u00a0Nozick\u00a0machine would be the right thing to do.\u00a0\n<br>\n<br>\nBut given expected utility maximization I believe that the most likely outcome will be that all resources are going to be devoted to\u00a0prevent the heat death of the universe by trying to hack the matrix or attempt time travel etc.", "timestamp": 1340980146}, {"author": "Alex", "source_link": "https://plus.google.com/100936518160252317727", "anchor": "gp-1340980843226", "service": "gp", "text": "This is precisely why I find utilitarianism to be a useless tool for making many types of moral decisions. Who are we to say that happiness is a metric that can be accurately measured, quantified, and compared, or that a universe with a larger overall happiness * person product is a superior one to a universe with a smaller one, or that it is our moral duty to maximize happiness, or that happiness is even the right metric (or preference-satisfaction, or whatever)? 
If we follow that path, we inevitably tend toward preposterous conclusions like \"wireheading everyone is our moral obligation\".\n<br>\n<br>\nThe gap between any metric we could conceivably measure and compare and use as a proxy for moral good, and actual moral good (whatever that means), is a chasm that utilitarianism will inevitably fall into.", "timestamp": 1340980843}, {"author": "Alexander", "source_link": "https://plus.google.com/106808239073321070854", "anchor": "gp-1340981321713", "service": "gp", "text": "@Alex\n\u00a0What reason do we have to suspect that our intuition is better at judging what we \nshould\n do?\n<br>\n<br>\nCalling a conclusion like \n\"wireheading everyone is our moral obligation\"\n preposterous might be similar to calling quantum mechanics \"weird\".", "timestamp": 1340981321}, {"author": "Alex", "source_link": "https://plus.google.com/100936518160252317727", "anchor": "gp-1340983239908", "service": "gp", "text": "@Alexander\n: Quantum mechanics \nis\n weird. That doesn't make it any less valid a model, especially when it happens to have turned out to be such a good one. In the absence of other evidence, though, I'm going to trust a moral system that comes to reasonable conclusions most of the time over one that comes to unreasonable ones.\n<br>\n<br>\nI don't know if simple intuition is better or worse than utilitarianism in this respect. But I do know that if we are going to make sweeping moral judgments like this, we need a more sophisticated model of human behavior than any we currently have. I believe that such a model would reveal that \"happiness\" (a) is not a good metric for evaluating good, and (b) is not even quantifiable or comparable in any way that would make it useful as such a metric. 
Furthermore I suspect that there are not many better metrics, or alternatively, that any metric that is better is even harder to quantify.", "timestamp": 1340983239}, {"author": "Alexander", "source_link": "https://plus.google.com/106808239073321070854", "anchor": "gp-1340983910165", "service": "gp", "text": "But what methods do you use to discern reasonable from unreasonable conclusions? Your gut feelings?\n<br>\n<br>\nCorrect me if I misinterpret you. But you seem to be suggesting that you favor exploration over exploitation. In other words, trying to figure out what humans actually want.\n<br>\n<br>\nDoes that mean that the only moral action right now is to either work directly to dissolve human nature or contribute money to that undertaking?\u00a0\n<br>\n<br>\nIf not. On what grounds do you justify doing something else if you believe that our models are inferior to our intuition, which was never honed to make large scale and long term moral judgements?", "timestamp": 1340983910}, {"author": "Ben", "source_link": "https://www.facebook.com/jefftk/posts/417200831655272?comment_id=417248804983808", "anchor": "fb-417248804983808", "service": "fb", "text": "If I may continue to bludgeon a dead horse, this is another dilemma solved if you're willing to accept individual rights.<br><br>If people have the fundamental right to seek out \"joy\" or \"fulfillment\" (as distinct from \"pleasure\"), then they have the right to refuse the pleasure-wiring, and as such, it's wrong for a government to force it upon them.", "timestamp": "1340985332"}, {"author": "Alex", "source_link": "https://plus.google.com/100936518160252317727", "anchor": "gp-1340985383590", "service": "gp", "text": "@Alexander\n, I think we have lots of tools for deciding what is moral and what is not, and we should use the best tool that we have at any given time. Improving our tools is an ongoing process -- but we don't need to drop everything right now to build the best tool we can. 
We do, after all, have other obligations in the meantime.\n<br>\n<br>\nNote that I didn't say that utilitarianism is useless in \nall\n situations -- it's just another tool. If I want to carefully weigh which charity I want to donate to, for example, I might enumerate some of my goals and decide which allocation of my limited resources best satisfies those goals. I just think there are lots of situations where an attempt to apply utilitarianism gives unsatisfactory results, and this is one example.", "timestamp": 1340985383}, {"author": "Alexander", "source_link": "https://plus.google.com/106808239073321070854", "anchor": "gp-1340986221943", "service": "gp", "text": "@Alex\n\u00a0We might have a lot of tools. But which one should we use? Even rational people vary dramatically when it comes to what is moral and what is not.\n<br>\n<br>\nTake for example GiveWell, the Singularity Institute and mathematician John Baez. They all know each other's tools, are all highly rational and know each other's arguments. Yet all disagree considerably about what to do.\n<br>\n<br>\nI do not believe that what GiveWell tells me to do, i.e.\u00a0contributing\u00a0money to fight malaria is a moral obligation or the best you can do. I also don't think, as John Baez does, that climate change is the most worrisome problem humanity faces. And I believe that working to make artificial general intelligence safe to humans will increase rather than decrease the chance of a horrible outcome.\n<br>\n<br>\nSo who is right and why? 
I don't see that the\u00a0problem\u00a0is tractable at all.\n<br>\n<br>\nBut I am curious how you do it, that's why I am asking.", "timestamp": 1340986221}, {"author": "Alexander", "source_link": "https://plus.google.com/106808239073321070854", "anchor": "gp-1340986375418", "service": "gp", "text": "Addendum: Wireheading and antinatalism seem equally valid to me.\u00a0", "timestamp": 1340986375}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/417200831655272?comment_id=417267201648635", "anchor": "fb-417267201648635", "service": "fb", "text": "@Danner: what is \"improvement of humanity\"?", "timestamp": "1340988204"}, {"author": "Alex", "source_link": "https://plus.google.com/100936518160252317727", "anchor": "gp-1340988475145", "service": "gp", "text": "@Alexander\n:\u00a0we\u00a0each make our own moral judgments, including whether or not to criticize others for their own decisions. Two people having the same set of tools and coming to different conclusions is pretty compelling evidence for moral relativism. I don't think that the question of \"who is right\" is intractable -- but the answer depends on who is asking, their social context, and lots of other things. It seems silly to ask how well an act satisfies some particular metric of \"good\" \nin every situation\n, since that metric might not be the best one for someone else, or for a different situation.\n<br>\n<br>\nI personally use one set of tools in a particular situation; someone else might use different tools, or use different metrics. My decision depends on my social context, my intuitions, my mental state at that moment, my knowledge of other similar situations and outcomes, prejudices, etc. To generalize on that would require detailed understanding of what my mind is doing as I make a decision. 
Morality is messy.", "timestamp": 1340988475}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/417200831655272?comment_id=417270954981593", "anchor": "fb-417270954981593", "service": "fb", "text": "@Ben: it doesn't have to be \"pleasure\" as distinct from \"fulfillment\".  I suspect if we really learn to understand wireheading we'll be able to stimulate people such that they feel joy, fulfillment, pleasure, or any other feeling that one can get from real experiences.<br><br>As for adding rights, then you're in a world where you have to resolve conflicts between rights and well-being.  If a person is psychotic and refuses medication we are currently ok under some circumstances saying \"if you were in your right mind you would choose this medication, so we will force it on you\".  Under a framework of individual rights I think it this is wrong, yet if it makes the person happier and afterwards they are glad they had it, isn't it good?  And yet \"it makes the person happier and afterwards they are glad they had it\" would apply to wireheading too.", "timestamp": "1340988741"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://plus.google.com/103013777355236494008", "anchor": "gp-1340989230761", "service": "gp", "text": "@Alex\n\u00a0\"\u00a0Two people having the same set of tools and coming to different conclusions is pretty compelling evidence for moral relativism.\"\n<br>\n<br>\nAnother\u00a0possibility\u00a0is that one or both are making a mistake.", "timestamp": 1340989230}, {"author": "Alexander", "source_link": "https://plus.google.com/106808239073321070854", "anchor": "gp-1340989379513", "service": "gp", "text": "@Alex\n\u00a0If morality is subjective then objectively each represented moral position can only have same weight. As deciding which position shall be assigned more weight than some other position would demand objective grounds on which to decide what is right. 
Which would contradict.\u00a0\n<br>\n<br>\nBut this also means that an\u00a0equilibrium\u00a0of all\u00a0represented positions does constitute an objective foundation for what actions are less wrong than others, for decision theoretic reasons.\n<br>\n<br>\nBut that doesn't really help at all, apart from possibly being philosophically satisfactory, because it is completely intractable and therefore can't be used to make actual decisions.", "timestamp": 1340989379}, {"author": "Alex", "source_link": "https://plus.google.com/100936518160252317727", "anchor": "gp-1340989682926", "service": "gp", "text": "@Jeff&nbsp;Kaufman\n\u00a0but who are we to say? Like, how would we even say that any system we came up with were a better way to judge either conclusion?", "timestamp": 1340989682}, {"author": "Alexander", "source_link": "https://plus.google.com/106808239073321070854", "anchor": "gp-1340989993403", "service": "gp", "text": "@Alex\n\u00a0I don't think it is possible to decide.\n<br>\n<br>\nSome people claim that an empirical approach would be favorable. Where we learn and adapt. But since our values are not stable all we will end up with is an implementation of our methods rather than a\u00a0methodical implementation of our values:\u00a0\nhttp://kruel.co/2011/07/22/objections-to-coherent-extrapolated-volition/", "timestamp": 1340989993}, {"author": "Alexander", "source_link": "https://plus.google.com/106808239073321070854", "anchor": "gp-1340990436396", "service": "gp", "text": "Take for example expected utility maximization.\u00a0\n<br>\n<br>\nIf you calculate the expected utility of various outcomes you imagine impossible alternative actions. 
The alternatives are impossible because you already precommitted to choosing the outcome with the largest expected utility.\u00a0\n<br>\n<br>\nYou swap your complex values for a certain terminal goal with the highest expected utility, indeed your instrumental and terminal goals converge to become the expected utility formula.\n<br>\n<br>\nIn other words, expected utility maximization seems to be a tool that helps you to realize your values but in the end alters your behavior in such a way that best reflects the\u00a0methodology rather than your\u00a0initial values.\u00a0\u00a0\u00a0", "timestamp": 1340990436}, {"author": "Alexander", "source_link": "https://plus.google.com/106808239073321070854", "anchor": "gp-1340990615980", "service": "gp", "text": "In other words, if morality is value based and values are not static then morality changes as your values change. Which makes any decision time-inconsistent because initial actions cease to be moral in the long-term.", "timestamp": 1340990615}, {"author": "Alex", "source_link": "https://plus.google.com/100936518160252317727", "anchor": "gp-1340990969836", "service": "gp", "text": "@Alexander\n\u00a0I agree that any methodology for \"discovering\" morality itself influences the act of being moral, and also that morality changes over time, which is why I think it's important to understand the psychological basis for morality -- that is, asking how individuals decide what is moral, rather than merely asking, \"empirically, what is moral?\"", "timestamp": 1340990969}, {"author": "Alexander", "source_link": "https://plus.google.com/106808239073321070854", "anchor": "gp-1340991421056", "service": "gp", "text": "@Alex\n\u00a0But if there was a way to determine how individuals decide what is moral, then wouldn't that also answer the question what is\u00a0empirically\u00a0moral?\n<br>\n<br>\nAnd if you believe that to be possible, then shouldn't it be your moral obligation to figure out what is moral to subsequently implement 
it?\n<br>\n<br>\nAnd if you figured out a decision procedure that reflects how humans decide what is moral. Then in what sense would it be the \nright\n thing to do?\n<br>\n<br>\nJust because evolution implemented such a procedure does not mean that we wouldn't be better off following a different procedure.", "timestamp": 1340991421}, {"author": "Alexander", "source_link": "https://plus.google.com/106808239073321070854", "anchor": "gp-1340991572225", "service": "gp", "text": "I should elaborate on my last comment.\n<br>\n<br>\nWhat I meant is that if we figured out an algorithm that could judge what is moral that does not mean that it wouldn't contradict by judging actions to be moral that lead up to immoral world states.", "timestamp": 1340991572}, {"author": "Alexander", "source_link": "https://plus.google.com/106808239073321070854", "anchor": "gp-1340992248986", "service": "gp", "text": "What I mean is that, from an\u00a0intuitive\u00a0human moral perspective, consequentialism is\u00a0contradictory. Means do not justify ends given the moral intuition of many people. Yet this does contradict the judgement of the results, where morally superior world states are the result of immoral actions.\n<br>\n<br>\nAnd the farther you go down the road of logical implications the more you keep contradicting yourself.\u00a0\n<br>\n<br>\nIn the end, is there a \nright\n thing to do? I don't see how there could be.", "timestamp": 1340992248}, {"author": "Danner", "source_link": "https://www.facebook.com/jefftk/posts/417200831655272?comment_id=417310574977631", "anchor": "fb-417310574977631", "service": "fb", "text": "I left the 'improvement of humanity' intentionally vague, but yes, the issue does revolve inside that statement. have you read walden 2? it deals with many of these issues (clockwork orange does a bit as well, hell, Odysseus and the land of the lotus eaters deals with it too) I think we need a long enough view that we can live in a sustainable way into the far-future. 
Of course, then 'we' becomes an issue - I won't exactly say that humans need to survive, but I'd want conscious and insightful thought to continue, until all the stars burn out. The zeroth law of robotics should apply to humans as well.<br><br>I'm not doing very well talking about this stuff on the computer, but I'd love to chat about it in person some time.", "timestamp": "1340994230"}, {"author": "B", "source_link": "https://www.facebook.com/jefftk/posts/417200831655272?comment_id=417351904973498", "anchor": "fb-417351904973498", "service": "fb", "text": "Yep, there are conflicts between rights, well-being, and happiness.  There are also conflicts between one individual's rights, well-being, and happiness, and another's, in many circumstances.  Nothing says that the ultimate good has to be at one extreme of any of them, even if you invent an equation that says it ought to be.  Yes, that means there are going to be some messy borderlines somewhere, and it probably means that you can't actually come up with a useful equation.", "timestamp": "1340999868"}, {"author": "Ben", "source_link": "https://www.facebook.com/jefftk/posts/417200831655272?comment_id=417364741638881", "anchor": "fb-417364741638881", "service": "fb", "text": "I'm with Daniel - I see ours as a world of messy tradeoffs and frequent gray areas, rather than one where the best course of action is always calculable if you can look past the surface noise.<br><br>@Jeff: That makes sense. I see better now why you're wary of a rights-based framework. Focusing only on consequences is certainly cleaner.", "timestamp": "1341001446"}, {"author": "Julian", "source_link": "https://www.facebook.com/jefftk/posts/417200831655272?comment_id=417425234966165", "anchor": "fb-417425234966165", "service": "fb", "text": "My problem with such schemes has always been that they wouldn't be safe. 
If you've got a wire in your head providing \u201cintense undiminishing pleasure,\u201d then you're going to stop engaging with the world and leave yourself vulnerable. Inevitably you're either going to die in a way that would've been preventable if you had been engaged in the world (starving to death because you couldn't be bothered to go get food) or your wire system is going to break down and you'll be unable to repair it (because you're physically emaciated from a lack of exercise or you've never learned how because you spent all your time wireheading). It's not a resilient setup \u2013 when a world where everyone has pleasure supplied directly to their brains breaks, it's going to break hard and be very difficult to fix.<br><br>On the other hand, if you could wirehead and still engage with the world, I honestly don't think I'd have a problem with it. If a system could provide a constant low level positive affect instead of intense pleasure, that would be both a drastic increase in your well-being and resilient (because if it broke you could keep acting like you did before, just a little less pleasurable).", "timestamp": "1341010292"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/417200831655272?comment_id=417487214959967", "anchor": "fb-417487214959967", "service": "fb", "text": "@Julian: one way to make it all less fragile is to use age grading.  Say everybody grows up taking care of their plugged-in ancestors, with a strong culture of filial piety and respect for elders.  This is reinforced by knowing that your children see how you treat your ancestors and it's what they see you doing that they will apply to their eventual care of you.  You spend years dealing with food, maintaining systems, raising kids, and only when they reach a certain age or have saved enough do you plug in for the rest of your life. 
If things go wrong the wireheaded people are not resilient but there's also a good sized group of hard working never-wireheaded people who really want to put things back together so they can have this really good retirement.<br><br>Separately, a system providing constant low-level positive affect is currently an experimental treatment for depression: http://en.wikipedia.org/wiki/Deep_Brain_Stimulation", "timestamp": "1341019590"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/417200831655272?comment_id=417531544955534", "anchor": "fb-417531544955534", "service": "fb", "text": "@Ben: \"I see better now why you're wary of a rights-based framework. Focusing only on consequences is certainly cleaner.\"<br><br>There are also intermediate positions.  At one end you have really simple hedonic utilitarianism: add up how happy everyone is, and that tells you how good the world is.  For choices compare the future worlds produced on both sides of the choice, and take the better one.  But you can say \"what I value is more complex than just happiness\" [1] or even \"adding up some value across all the people doesn't capture something important\" [2] but still pay attention only to outcomes and not rights.<br><br>I also see the world as being full of grey areas, but I think of them as being in the minds of people.  
The grey areas come from uncertainty: \"what would happen if I did this?\"  \"What would its effects be on people?\"  \"How good would those effects be?\"  And, as in this post, \"what do I value?\"  I don't believe the best course of action is usually something I can calculate, but only because getting the information isn't practical.<br><br>[1] I didn't believe this a week ago, but am unsure now.<br><br>[2] I still don't believe this.", "timestamp": "1341026264"}, {"author": "Kiran", "source_link": "https://www.facebook.com/jefftk/posts/417200831655272?comment_id=417588548283167", "anchor": "fb-417588548283167", "service": "fb", "text": "One question I think you need to answer is: why do you think joy caused by some particular thing is not just less, but \"much less\" valuable than joy caused by some other thing?<br><br>If I understand the theory of utilitarianism, utilitarians believe in maximizing overall happiness (presumably of people currently alive.)  That seems to make wireheading the most reasonable course of action.", "timestamp": "1341036810"}, {"author": "Jeff&nbsp;Kaufman", "source_link": "https://www.facebook.com/jefftk/posts/417200831655272?comment_id=417704831604872", "anchor": "fb-417704831604872", "service": "fb", "text": "@Kiran: not all utilitarians believe the thing to be maximized is the happiness of current people.  Many (including me) think future people count just as much as current people.  Many (not including me) think you should include the happiness of animals.  Others (preference utilitarians) think that instead of maximizing happiness you should maximize preference satisfaction.  Or something more complex than either.  But at some point you get something you're maximizing, and you call that \"utility\".  Then there's how to combine over people: do you take total utility or average utility?  
Or do you make the difference go away by just maximizing utility over everyone who currently exists (prior-existence)?<br><br>If wireheading counts as happiness, and you're a happiness maximizing utilitarian, then yes, this would make wireheading probably the most reasonable course of action.", "timestamp": "1341062168"}]}