::  Posts  ::  RSS  ::  ◂◂RSS  ::  Contact

Transcript: making sense of long-term indirect effects

January 17th, 2017
giving, transcript

The following is a transcript of Rob Wiblin's August 2016 EA Global talk, "Making sense of long-term indirect effects". The video is here and audio is here. This transcription is just from the audio.

This is a better turnout than I was expecting. There's no way I'd be up this morning at 9am to go to a talk after giving a party last night, unless I was the one giving the talk. So if you're just joining us on YouTube because you decided to sleep in, you have my greatest sympathy; I totally would have done the same thing.

So, as Roxanne said, this talk is about flow-through effects: the effects of different actions on the very long term. But I should give you a health warning first: this is the talk about flow-through effects your mother warned you about. This is not necessarily going to be super inspiring; it can be a little demoralizing to think about how hard it is to affect the long term. I also have more questions, really, than answers here: I don't have a simple thesis that I'm going to push on you. I'm going to be describing some of the issues that exist here, some of the questions that are still open, so it's not going to have a simple ending.

I want to start with a story, in this case a Chinese fable:

A man who lived on the northern frontier of China was skilled in interpreting events. One day for no reason, his horse ran away to the nomads across the border. Everyone tried to console him, but his father said, "What makes you so sure this isn't a blessing?" Some months later his horse returned, bringing a splendid nomad stallion. Everyone congratulated him, but his father said, "What makes you so sure this isn't a disaster?" Their household was richer by a fine horse, which the son loved to ride. One day he fell and broke his hip. Everyone tried to console him, but his father said, "What makes you so sure this isn't a blessing?" A year later the nomads came in force across the border, and every able-bodied man took his bow and went into battle. The Chinese frontiersmen lost nine of every ten men. Only because the son was lame did father and son survive to take care of each other. Truly, blessing turns to disaster, and disaster to blessing: the changes have no end, nor can the mystery be fathomed.

This tale outlines an ancient piece of wisdom: it can be quite hard to forecast what effects our actions are going to have, and things that initially seem bad can end up being good in the long run. We probably all experience this in our own lives as well. But I think this isn't a reason not to bother trying to predict the long-term effects of our actions, because if we can't predict what effects our actions are going to have, even on a balance-of-probabilities standard, then they're probably not very valuable to do in the first place.

So first I just want to define some terms here, because there are a lot of different words we use to describe "flow-through effects". That was the initial name for this talk, but Toby Ord convinced me we should do a bit of rebranding: get rid of the term "flow-through effects", which is unnecessarily vague, and start talking about "indirect effects", which I'm going to define as effects on someone other than the person you initially intended to target, and "long-term effects", which are effects that occur after the present generation is dead, at least assuming we have normal human lifespans. So, in the long term, all effects are going to be "indirect".

So, I'll just describe some of the hypotheses that are relevant to whether flow-through effects matter very much. The first one is the "astronomical stakes" idea, which Bostrom came up with and named; in fact I'm stealing a lot of ideas from Bostrom in this talk. The idea here is that what matters most of all is what happens to the vast amount of energy in the universe. Currently there's an enormous number of stars out there and an enormous amount of matter and energy, but as far as we can tell it's producing something like zero value. It's just hydrogen sitting there in deep space, or suns burning up; it's not something we would regard as particularly valuable or particularly harmful. But if we organized it in the right way, it could be extremely valuable, or extremely harmful. And the scale of the value that the whole rest of the universe could produce is trillions of trillions of times larger than what we could do just on earth with our current technology.

This seems to me to be a pretty likely hypothesis. It's obvious that this is a compelling idea if you're a utilitarian, but even if you only place some probability on consequences being something that really matters, and on creating new positive things being a valuable thing to do, then because the scale of the potential benefit is so enormous, trillions of times larger than the things we can accomplish on earth, on expected-value terms the astronomical stakes stuff is going to dominate your calculation of what's most important.

This pretty naturally leads on to the long-term effects hypothesis, which is that the majority of the value of our actions comes from their effects on people who don't yet exist. I think if you buy the astronomical stakes argument then you probably have to buy this one as well. And how can we affect what happens in a hundred years' time, or a thousand years' time, or a million years' time? We have to do it indirectly, through a long chain of cause and effect, one person affecting the next generation, which affects the next generation, and so on. So it has to be indirect.

But I think even if you don't buy the astronomical stakes hypothesis, if humans are just going to keep existing on the earth for another five hundred years or so, then the effects on future generations, generations after the present one, are likely to dominate the moral effect of your various actions. You have to trade off that your effects on future generations are more uncertain, but there are also going to be a whole lot more of those people. I'm just ballparking it here, but I think if there are more than ten generations yet to come then you have to say that the long-term effects of your actions are more important.
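To make that ballpark concrete, here's a toy expected-value calculation. The function, the normalization, and the 20%-per-generation uncertainty discount are all my own illustrative assumptions, not figures from the talk; the point is only that even heavily discounted effects on ten future generations can sum to more than the direct effect on the present one.

```python
# Toy model (illustrative assumptions only): compare an action's effect on
# the present generation (normalized to 1.0) against the sum of its more
# uncertain effects on future generations.

def discounted_future_value(generations, uncertainty_discount=0.8):
    """Sum of per-generation effects, each discounted by 20% per
    generation-step for the growing uncertainty of indirect effects."""
    return sum(uncertainty_discount ** g for g in range(1, generations + 1))

present_effect = 1.0
future_effect = discounted_future_value(10)

# Even with a steep discount, ten future generations together outweigh
# the present generation.
print(round(future_effect, 2))          # 3.57
print(future_effect > present_effect)   # True
```

Under these made-up numbers the future effects win by more than 3 to 1, and with a gentler discount or more generations the gap only grows.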

So Bostrom, thinking about this, was wondering: if the long term is really what matters, what should we be looking to do? His first suggestion, from a paper in 2003, was to minimize the risks that humanity faces. A good signpost for one day achieving astronomical gains would be to achieve an "ok" outcome today. The logic here is that so long as we don't permanently ruin things, by going extinct or ending up in a long-term dictatorship from which we can never escape, then humanity survives, we still have the scientific method and our brains, and we can live to improve the world another day. We can correct our mistakes and do better. This sometimes appears under the name "maxipok": maximizing the probability of an "ok" outcome is what we should be aspiring to do in the short term.

How might we go about doing this? Approach one, which I think is the one most effective altruists are implicitly taking, is buying the "better position" hypothesis: that faster human empowerment, reducing poverty, better economic growth, improving people's health, education, and understanding of the world, reliably makes the future more promising, basically because it puts us in a better position to deal with future challenges. Whatever the threats to humanity's success in the longer term, if we have a lot of wealth, and we're healthy, and we're better educated, then we'll be in a good position to deal with those problems.

And I think this makes complete sense if you think that the main conflict is between people and nature, the classic story archetype. Because empowerment clearly helps us deal with natural disasters: supervolcanoes maybe, asteroids, diseases, pandemics. If we're smarter we can come up with vaccines more quickly and prevent them from spreading. Wild animal suffering is another thing we could deal with if humans were better empowered, and so on. So if we're in a people-versus-nature world I think this theory is very compelling.

But what if this is not the kind of narrative that's going on in the universe? What if we've seen the enemy and it is us? What if we live in a person-versus-person conflict story? In that case the better position hypothesis is a whole lot less clear, because education, say, puts us in a better position to both solve and create problems more quickly. It's empowering both the good and the bad things humanity can do. So, for example, if we're better educated and have a better economy, we will perhaps invent nuclear weapons sooner, but we're also in a better position to come up with ways not to use them, because we'll come up with game theory and mutually assured destruction, and we'll figure out a way to deal with nuclear weapons without killing ourselves.

And as an example of how development can create risks: the Soviet Union from the late 20s through the late 40s went through an absolutely explosive period of economic growth, one of the most rapid modernization processes we've ever seen, something like China in the modern era. Millions and millions of people moved off very unproductive jobs on farms into factories. From a human empowerment point of view this looked absolutely fantastic, because you had lots of people escaping poverty, improved health, improved education. And probably it was a positive thing, but it also created some risks. The fact that the USSR industrialized so quickly meant it was able to develop nuclear weapons very soon after the US did, creating the potential for a world-destroying nuclear war, which wouldn't have existed if, say, the US had been the only nuclear power. In addition, the USSR was controlled by Stalin, one of the most monomaniacal totalitarian dictators ever, a really evil guy, who became a lot more powerful because he had this enormous economy behind him. So the USSR developing wasn't an unmitigated positive; it created some risks to humanity as well.

So here I want to present the person-versus-person hypothesis, which is that most of the threats to the long term are human-created. And I think this is true, because except for pandemics, most natural risks, like supervolcanoes, which could be very damaging, or asteroids, carry a really, really low annual risk. And we can recover from most of these things; it's very, very hard for a supervolcano to actually kill absolutely everyone. The Future of Humanity Institute has a paper forthcoming [JK: six months later I don't see it on FHI's publications page but maybe it's still not out yet?] about this: how anthropogenic risk is significantly larger than risk from nature, probably ten or a hundred times higher.

To get an idea of how you would go about modeling whether human empowerment is positive or negative in a person-versus-person world: it's definitely not easy, because you have to think about what the risk to human civilization is proportional to. Is it a per-year risk, like you have with asteroids, where every year there's some chance that an asteroid or comet hits the earth? Or maybe some risks we face are proportional to the annual rate of growth: perhaps if we go faster then we have less time to adapt to changes, and so we're less able to deal with them when they arrive, in which case growing more slowly would be better because we'd have more room for forethought. Or maybe the risk is per-transition, as in the nuclear weapons example: there's a risky period between when you invent nuclear weapons and when you invent something that neuters them, like mutually assured destruction, which stops us from using them. In that case, if the risk lives only in the transition between two states, then going faster is fine, because you're shortening the time between when you invent the problem and when you solve it, so you want to go fast. But the modeling gets tricky pretty quickly, and so it's hard to come up with a really overwhelming argument one way or the other.
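As a sketch of why the choice of risk structure matters, here's a toy simulation. The functional forms and every number in it are my own illustrative assumptions, not anything from the talk; it just compares a "fast" world and a "slow" world that pass through the same total amount of change, under a risk that depends only on how long the dangerous period lasts versus a risk that grows with the annual rate of change.

```python
# Toy model (illustrative assumptions only): how speeding up affects
# survival probability under different risk structures.

def survival(annual_risks):
    """Probability of getting through every year without catastrophe."""
    p = 1.0
    for r in annual_risks:
        p *= 1.0 - r
    return p

TOTAL_PROGRESS = 10.0  # arbitrary units of change civilization must get through

def scenario(years):
    rate = TOTAL_PROGRESS / years
    return {
        # Duration-based risk (asteroid-like, or a dangerous transition
        # window): fixed chance per year, so fewer years means less exposure.
        "per-year": survival([0.002] * years),
        # Rate-based risk: danger grows steeply with the speed of change
        # (less time to adapt), so going slower is safer.
        "rate-dependent": survival([0.002 * rate ** 2] * years),
    }

fast, slow = scenario(50), scenario(100)
print(fast["per-year"] > slow["per-year"])              # True: faster is safer
print(fast["rate-dependent"] < slow["rate-dependent"])  # True: slower is safer
```

Under these made-up numbers the two structures give opposite answers about whether speeding up is good, which is exactly the difficulty: the sign of the conclusion hinges on a modeling choice that's hard to settle empirically.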

But another thing that might be relevant is the ratio between human prudence and human power, our technological ability. One idea you might have here is that we should only obtain technological abilities once we're ready to deal with them; that's the thing that's going to limit the risks to humanity in the long term. We do want to have technologies, but only once we're able to use them safely. So, for example, we give kids scissors but we don't give them guns. Scissors are useful to a child, and the risks they pose are not so large, because kids know how not to cut themselves, and even if they do it's not the end of the world. But we don't give them guns, because they don't understand them and don't yet know how to use them safely, so we wait until they're older and more mature. So the question I want to pose to you is: do we have the unity, the compassion, and the maturity to wield new technologies of mass destruction? I think the answer is pretty clearly no, but if you're not convinced, [shows slide with an animation of Trump pointing at his head and saying "we need brain"] this genius can tell you. I think he's got a very strong case here, that we need to have more brains before we develop technologies that are extremely risky to us. A very trustworthy fellow.

So this would suggest a different approach to doing good: differential speedup. If you're trying to do differential speedup, rather than just general human empowerment, then you want to be thinking: what things do we most need before other things? What is it beneficial to have first? And also, what things seem least likely to backfire, even if we got them immediately?

This leads to the question: what are the signposts for a good future? Here I'm basically stealing a bunch of analysis from Bostrom again, from a talk he gave in Oxford two years ago. [JK: I can't find a recording of this talk; anyone know where to find one?] He went through a whole lot of possible proxies for good long-term outcomes that we could measure in the short term. An analogy you could make is with a chess game. Early on in the game you obviously want to be capturing your opponent's pieces, but you can't always see exactly how the game is going to go later on. You don't know how it's going to end, and that's not how good players figure out the best next move; they don't map it out all the way to actually checkmating their opponent. Instead, they look for proxies in the short term, like "do I control a lot of the board?" or "am I capturing my opponent's pieces?" So that's the kind of thing we're looking for here: things we can actually measure in the short term, but that are a reliable, consistent guide to whether we're making the future more promising.

So here are a lot of the things he put up. I don't have time to go through all of them, but notice, for example, that economic growth has a question mark next to it, because it's just not clear whether faster economic growth is good or bad in the long term. And here are some of the ones that did seem more promising, possible signposts to guide us into the future.

One would be biological cognitive enhancements, so making people smarter in the hope that they'll be able to deal with future challenges in a more wise and prudent way than we are. Something like education, but more so.

International peace and cooperation, to prevent conflict, because a large way in which technologies can go wrong is if they're used as weapons of war, or used by people against others.

Solutions to the AI alignment problem, obviously. If we're going to create machines that are smarter than humans then we want them to be aligned with human interests, and there's good reasons to think that if we don't make a special effort to do this then they're not going to be aligned with our interests.

And then one that I've added in is better moral attitudes: trying to encourage people to care about the welfare of all. I think it's harder to see how that could backfire than it is to see how economic growth might backfire, though they both have somewhat similar value. So if we can get future generations to care about everyone equally, to be very compassionate, and not just pursue their own selfish interests, I think that's a reasonably good signpost for improving the future.

And an interesting thing to observe is that this whole framing around differential speedup actually brings effective altruism somewhat closer to traditional ideas about how you might do good. People sometimes say, "why are you just focused on curing malaria, is that really the best way to change the world?" I think sometimes those people have a point: that just might not be the best way to go about it. More traditional ideas might focus on wise leadership of a country, capacity and institution building to deal with problems, improving people's moral attitudes, and also just being wary of rapid change in a way that many of us are not. So small-c conservatism: being worried about everything being upended and society totally changing overnight, and thinking we should be crossing the river by feeling the stones, so to speak.

So what do I think are probably the most important causes? My guess is things like working on risks from biotechnology, AI value alignment, climate change, preventing war and promoting peace and a sense that we ought to be cooperating with each other and avoiding conflict, and improving intelligence within government, like forecasting the future and making good collective decisions. I think these are probably a more reliable guide to improving the future than simply trying to increase GDP growth.

What about reducing poverty? Am I saying that reducing poverty isn't good? No, I don't think that's the case, because reducing poverty also raises global sanity, through more education and smarter people, which leads to more cosmopolitan moral values and to better government as well. People who have put a lot more thought into this than me generally think it's probably good overall. So all I'm saying is that it might not be as good as it initially appears; I'm certainly not saying it's neutral or negative. And if you'd like to explore this more, a really good thesis is "On the overwhelming importance of shaping the far future", where one of our trustees, Nick Beckstead, concludes that improving economic growth is probably positive, though maybe not as positive as it might first look. Another is "The Moral Consequences of Economic Growth" by Benjamin Friedman, which talks about the changes you get in a society when it stagnates economically, and how you often get quite rapid moral regression: people reverting to more tribal values, being less willing to cooperate with one another, and less empathetic. Quite an interesting book, from 2005.

But something to note is that poverty isn't that neglected in the scheme of things; it's not a terribly unusual cause. Which is one reason we talk about it a lot: it's easy to explain to people that it's good to save lives and reduce poverty. But reducing poverty absorbs something like more than half of all effort by effective altruists, certainly more than half of all donations. So it's a really large focus area, maybe large relative to the strength of the arguments. And it's a significant fraction of all actions by the poor themselves: billions of people who are in relative or global poverty are trying to get out of poverty themselves. This is something we should consider when asking, is this really a neglected opportunity? It's true they don't necessarily have the same resources, but when you add it all up I think there's a lot of work going into trying to reduce poverty. Many foundations are focused on this, including many of the largest, plus quite a lot of government aid. Poverty is neglected relative to some things, but it's probably not among the most neglected problems in the world. Compare that with, say: how many NGOs and foundations are there working on international coordination, new dangerous technologies, peace, or improving forecasting? This kind of work is reasonably obscure by comparison, so I think there might be more low-hanging fruit, because fewer people are working on it. Lots of people say they want to end poverty, like if you meet a sixteen-year-old, but saying you really want to improve forecasting ability within the intelligence services is not a common thing for teenagers to dream of doing with their career.

In addition, I think these other ways of doing good can be quite a good fit for us. Effective altruism is sometimes accused of being filled with elites, and I think that is potentially quite problematic in some ways; we can be very out of touch, like maybe we just haven't experienced poverty ourselves, and that could blinker us. But it also creates some opportunities, if we have a lot of connections with people in government or academia. So I think effective altruists, many people here, would have an unusually good shot at guiding governance and public service, rising to top levels within important institutions in society. Or at guiding specific new technologies, because we're particularly clued in to what's being developed in the next five or ten years, thinking about how we can make sure they're used in wise ways rather than risky ways.

And we might be in a good position to improve society's moral values as well, though I think this is a bit more questionable; many of us might be out of touch with a lot of people in society. I personally often don't feel super in touch with a lot of people. So on one hand we potentially have a large audience, but are we actually very good at persuading most people in society to change their moral values and care more about people overseas? I think that's an open question.

So the bottom lines here are that indirect effects are crucial, though they're very hard to estimate. I think peace and collective wisdom are somewhat underrated by people in this community, and there's probably an excess focus on economic growth and health relative to other cause areas that might have more reliable signposts to improving the future and be somewhat more neglected. The cliche is to say "further research is needed", but I think further research really is needed this time, because this could be one of the things causing us to miss out on doing things that are terribly valuable. Of course people have known about this problem for years; I started working at the Centre for Effective Altruism four years ago, and this isn't a new concern. But because it's a bit demoralizing to think about how hard it is to predict the long-term effects of your actions, and how hard it is to have really good insights here, this topic goes a bit neglected in my view. I think it would be valuable to get more smart people really thinking about this, putting in months or years of work, coming up with their own ideas and their own models for how we can improve society in the long term.

All that said, given how hard it is to think about this topic, the above might all be misguided. I wouldn't put too much stock in any specific thing I've said, but I think the overall issue is quite important. So I'd like to have great conversations about it here in the rest of the conference.
