
Many-worlds implies the future matters more

July 26th, 2012

Yesterday I wrote about a non-consequence of the many-worlds interpretation (MWI) of quantum mechanics [1], but today I have a consequence: if you believe the MWI you should care about the future a lot more than the present.

Update 2012-07-26: Mitchell Porter explains in a comment why I'm wrong and what I was missing.

Imagine you're deciding whether to take a break and eat some chocolate in one hour or in two. You'd get similar enjoyment either way, so you might think it doesn't matter. But if every quantum event between one and two hours from now branches the universe, and there are enormously many such events, then in two hours there will be vastly more yous around to experience the chocolate break than in just one. If each of those future selves counts equally, the MWI implies we should be willing to make substantial sacrifices in current happiness for the benefit of our future selves. In other words, your preference for investing probably isn't strong enough.
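The branch-counting reasoning can be sketched numerically. This is my illustration, not the post's math: the branching rate here is made up (real decoherence rates would be astronomically higher), and it assumes each event doubles every existing branch and that each resulting self counts equally.

```python
# Naive branch-counting sketch. EVENTS_PER_HOUR is a hypothetical,
# made-up rate; the point is only the exponential growth.
EVENTS_PER_HOUR = 10

def branches_after(hours):
    """Number of branches after `hours`, if each quantum event
    splits every existing branch in two."""
    return 2 ** (EVENTS_PER_HOUR * hours)

# Under this counting, the chocolate break in two hours is experienced
# by far more future selves than the one in one hour:
print(branches_after(1))  # 2**10 = 1024
print(branches_after(2))  # 2**20 = 1048576
```

On this naive view the number of experiencers grows exponentially with time, which is what would make any delay look like an enormous multiplier on enjoyment.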

In trying to apply this to altruism you do need to be careful. Some charities are more like spending, in that their benefits fall mostly in the present, while others are more like investing. If I donate to the Against Malaria Foundation to distribute mosquito nets, the main benefit is preventing current or near-future people from dying. There are probably some long-term effects, like a stronger economy when fewer people are sick, but they're not the goal or the main effect. On the other hand, the Future of Humanity Institute, a charity trying to prevent existential risk, is much more like an investment, in that nearly all its benefit (which is really hard to predict or quantify) goes to future people. Metacharities promoting effective altruism, like 80,000 Hours, Giving What We Can, and GiveWell, are another sort of investment-like charity, influencing people's future giving. And then there's the option of straight-up monetary investing now and donating later.

If you accept the MWI you should be evaluating your altruistic options primarily on their future effects, with more emphasis on farther-future ones.

[1] Which I still don't know enough about to have an opinion on the truth of.

Comment via: google plus, facebook, lesswrong

