Flow-Through Effects Conversation

August 31st, 2013
ea, transcript
On August 19th Holden Karnofsky, Carl Shulman, Robert Wiblin, Paul Christiano, and Nick Beckstead had a phone discussion about prioritization and flow-through effects. They recorded it and posted the audio. I made a transcription:
R
This is a conversation between GiveWell and Giving What We Can and a few other people who are part of the broader charity cost effectiveness discussion on the internet, about flow-on effects and cause selection and whether we should be investing more resources in high-level cause selection and considering the far-future effects of the things we do now. This is a bit of an experiment between Giving What We Can and GiveWell to see if we can have useful conversations and improve one another's ideas and reassess priorities. To some extent we'll be acting as advocates in the conversation, always saying things that we believe, but trying to push the point and see how strong it can be.

A bit of background for the debate. There's been a lot of conversation in Oxford about cause selection and potential flow-on effects of our actions, and GiveWell has also written various blog posts in [?] history about those issues. Paul Christiano, who is also on this call, wrote a long essay about flow-on effects and cause prioritization suggesting that it had the potential to be a fruitful cause.

So yeah: we'll just go around and introduce everyone.

H
Rob, quick question: is there a reason you're using the term 'flow-on effects' instead of 'flow-through effects'?
R
No, that's just a mistake.
H
Ok.
R
So I'm Robert Wiblin from Giving What We Can.
N
Nick Beckstead, from the Future of Humanity Institute and the Centre for Effective Altruism.
C
I'm Carl Shulman, a research fellow at the Machine Intelligence Research Institute and a research associate at the Future of Humanity Institute.
P
I'm Paul Christiano, and I'm a graduate student at University of California, Berkeley.
H
I'm Holden Karnofsky, cofounder of GiveWell.
R
Paul, could you maybe start by describing what you're proposing in your essay and what relevance you think it might have to people's research priorities in the medium term, if any?
P
Yes. The basic argument that we're making, or that I'm making here, is that currently very little is understood about the real long-run effects of human empowerment in the form of poverty alleviation or economic development or faster technological progress, and that those considerations seem to be very important in understanding which causes are worth supporting. We need to understand both where we can push on the world and what kinds of pushing are going to have a large positive impact. And so what seems worth doing on the current margin is putting many more resources into that problem. For GiveWell in particular, it certainly doesn't seem like GiveWell wants to change what it's doing in a big way, because you guys seem to have a very good thing going right now, and I think we're all pretty enthusiastic about that. But on the margin it still seems worth attending somewhat more to these questions, which you currently leave very nebulous: what are the actual effects of poverty alleviation, why we might expect them to be good, and how good we might expect them to be compared to economic development or to improving the quality of research in science, and so on.
H
So Paul, when you say we've got a good thing going now and we're hesitant to change it, what exactly are you referring to?
P
I guess there are two parts to what you're doing which I think all of us like a lot. One is the really critical look at particular charities, to really understand what the moving pieces are and what you would have to understand to evaluate impact, and the second is the sequence of shallow overviews you're doing now, looking at a bunch of broad areas, trying to see what's going on there at the moment. Both of those seem to be very good, excellent projects.
H
Ok, thanks for the clarification. So I think there's a couple of distinctions that it would help a lot to make at the outset. One of them is, to the extent that we've got a good thing going … GiveWell has done a lot of work on, and has kind of built a reputation around, this kind of global poverty alleviation, direct aid, proving cost effectiveness, scalable model. That's something we have kind of an audience for and a reputation around, and it's something we wouldn't want to sacrifice. On the other hand, the shallow investigations and the GiveWell Labs work in general, which consists of more than just the shallow investigations, are really unestablished, really new, and really exploratory. GiveWell Labs is us trying to take the reputation as people who are kind of critical and thoughtful about giving, and transparent, and make a bet on that reputation and say that we can go into a new area. But in terms of the specifics of how we go into new areas I wouldn't say there's anything about what we're doing that's established or that we are hesitant to change in the sense of inertia or in the sense of not wanting to disappoint people who follow the brand. At this point we're very uncommitted, very open, and want to do the optimal thing in terms of using our time effectively in order to find the best ways to give. So that's one distinction that I think's worth making at the outset.

Another distinction, which maybe just has to do with how I read Paul's essay, or maybe I just didn't read it right, but I think it's worthwhile ... A lot of Paul's essay reads to me like what we call an argument for strategic cause selection, which is this idea that you should be looking at lots of different very broad areas which a philanthropist might go into and try to decide, in some broad sense, which one has the most value. That's something I don't see a lot of groups doing; it's something that we're trying to do, something that you could argue, that I would argue, is valuable, and I think Paul argues is valuable. Then there's this specific aspect of that, which is how do you evaluate the causes? There are lots of different axes and lots of different criteria by which you could evaluate a cause, and so the place where I think we would be more likely to have a disagreement is how much of that analysis should be focused on questions like what is the long-run impact of economic growth versus the kind of questions we're tackling in our shallow overviews.

So those are distinctions I would want to make at the outset. If you'd like I can kind of talk a little bit about the philosophy of what we're doing now in Labs and what questions we think come first and which would come second, in case that clarifies anything? Or maybe we could start, alternatively, with you guys telling us what you wish you saw more of from us.

R
Yeah, we're happy for you to do that.
N
... for you to describe your priorities more.
H
First let me talk a little bit about process, because shallow investigations are the main thing we publish but they're not the only thing we're doing. It's probably helpful for me to describe the process of Labs. It's something we've kind of laid out in blog posts, we've definitely talked about in one or two of our research meetings, but it's probably helpful for me to just run through it pretty quickly. So first off, one of the important things that we've decided for the moment with GiveWell Labs, and it's a kind of provisional decision, is that we're looking for promising causes. That is a different focus and a different style of analysis than looking for promising charities or promising projects. We've written a lot about why we're doing this, but a lot of the reason we're doing it is because we feel like the right order of operations for a philanthropist is to first, as carefully and systematically as you can, pick the broad cause areas you're going to get involved in, and then really hire people that specialize in those areas. The whole point, or the whole definition, of a cause revolves around the idea of it being a logical thing to specialize in. In other words, if we're talking about malaria you can kind of specialize in malaria, and once you know a lot of the organizations and the people and the ideas relevant in the malaria world, then every malaria project becomes easier to evaluate, substantially, with that knowledge. By contrast, if you then look at a project in the domain of open science, your malaria knowledge won't help you very much. Causes are kind of things that you can specialize in, things that you can become an expert in, and we kind of see our role as first systematically and carefully picking good causes to get involved in and second finding the right people to really become or even start off as experts in those causes, and develop very cause-specific strategies that are informed by a lot of cause context and cause expertise. We think that's going to be the best way long term to source the best projects, and is a very different strategy from if I, Holden, were looking individually at a project in one cause and a project in another cause and a project in another cause to try to decide between them and kind of have to get up to speed on each cause separately. That model is something that we originally envisioned ourselves doing but now don't, so that's the kind of basic thing that's going on here.

In terms of what we're actually doing, process wise, the shallow investigations are what they sound like. They're very quick investigations of a cause, they're usually just a couple phone calls. What we're trying to figure out is according to just the conventional wisdom of the experts in the field, what is the problem, how tractable is it, what are some things that we could do, and then who else works on the problem.

Then we also have, and this is all kind of a spectrum, I mean none of these terms are really rigid, but it's helpful to think about medium depth investigations. These are asking basically the same questions as a shallow depth investigation, but instead of talking to like two or three people and reading two or three papers we're talking to like 30 people and taking months on it. The goal there is to get a kind of representative view of what people think and not be as much at risk of only talking to a couple people. However, there are other things we're doing that are kind of cross-cutting and kind of different from those. So one thing we're doing is, I am trying to understand just the basics of philanthropic involvement in fundamental scientific research and in political advocacy. Those are two, you might call them super causes. They're big worlds that we know very little about, and we want to know more about how philanthropy should attack those areas. And so before we look at any cause within basic science, whether aging research, AI research, or biomedical research, we want to feel more literate, in general, about the world of philanthropy's involvement in scientific research than we are currently.

Similarly with politics. Whether we want to do factory farming or labor mobility or whatever, just having a sense of the history of philanthropy's involvement in politics, how it has gone well and how it hasn't, what works and what doesn't, and what the basic tools and techniques are that have been used in past successes, I think is really important. That's a lot of what I've been focusing on.

Finally, there are a couple of other cross-cutting initiatives that are really targeted at being good philanthropists on the whole. There's the history of philanthropy project, which is trying to find out everything we can about what philanthropy has done well and poorly in the past so we can think intelligently about what we're likely to be able to do well and what we're not. Then there's this more present-oriented investigation, which is getting to know the major foundations which exist today, understanding what they fund, what they work on, how they think. We've been doing cofunding, and by the way Good Ventures is very involved in all this work that I'm describing on Labs, and has particularly been involved in getting to know other foundations and in doing cofunding projects with them, which is basically one tool for learning about them and how they work.

So that is our process at this moment. It's a very high level description and it will be off in many of the details, but that's the basic framework we're operating in. I'm going to pause there, and maybe let's start off with you guys asking any questions you might have about that, and get to whether there are things in that process that we ought to trade for other things.

N
I have just a quick question about the political advocacy and scientific research supercauses thing. I've been following what you've been putting online, I think, mostly. I'm just wondering how that differs from the shallow cause overview. Do you think you could say a little about that, like the methods you're using?
H
Well a shallow cause overview is you'll take a cause that's kind of, one cause, like factory farming or something, and you'll talk to like two people or three people. You're going to try to pick them intelligently but you basically want to talk to people who know the field pretty well and say 'who works in this area?' and 'can you just tell me the basics of what people are trying to do?' It takes something like twenty hours because you do the conversation, you do the conversation notes, you read anything they might send you, and then you do a write-up. The politics and science work looks very different from that. On the politics front, what I've done is tried to find people who are generalists in terms of understanding philanthropy's role in politics. That's been very difficult. I've found a few and have had very broad ranging conversations with them, but I've also done a lot of background reading. I've been looking very hard for literature that can kind of tell me about the role of philanthropy in politics and generally about how political interest groups work. So there's been a lot of background reading and then a lot of much more extended conversations with people, not like a couple hours, and there's been more of a struggle to find the right people to talk to, to make sure we're finding generalists. Generally it's been a bigger project, but not in the way a medium depth is where I kind of try to talk to 30 people. It's much more a case of trying to have really extended conversations with people, really looking into a lot of what's out there available for reading.

It's a bigger time investment in a broader set of things, and so the percentage of what I'm doing that I understand is probably smaller, but the time I'm investing is larger, because I'm trying to understand something bigger. That's on the political advocacy front.

On the scientific research front I've had probably five conversations so far with major funders, trying to understand how they see the scientific research world. But I've also done things like, Dario Amodei is like the informal scientific advisor to us, and I've just spent a lot of time with him, trying to absorb his kind of intuitive knowledge of how the scientific research field works. On his advice I read half of a biology textbook, I went to a meeting on cancer research, and I've just generally been trying to absorb and immerse myself in a lot of the language and a lot of the approaches of philanthropy's involvement in scientific research. The next thing we're likely to do on that front is really do a push to find better scientific advisors. One thing we've picked up is that in this case the scientific advisors are much more central to the process of investigating anything in scientific research than they are in other causes. Obviously you always want to work with experts, but I think it works differently.

In both cases it's a bigger effort: it's more reading, it's more talking, more effort to find people who are broad, which can be hard compared to finding people who know one issue. And there's a kind of undirected immersion component to it, which is that we're trying to figure out what the basic approaches are and what the basic tools are, and it's not always trying to answer the same three questions. Hopefully that kind of clarifies it?

H
Other thoughts or questions?
R
Paul, would you like to go now?
P
I wrote this essay, but I think the essay mostly contains the picture that everyone on our side of the table, probably both sides of the table, understands and agrees with, at least in its core aspects. So I don't think I'm in a really unique position here. I would be happy to say some things about what directions or what changes I would be happy to see GiveWell make, but I think others could say equally valuable things.
R
I think the main suggestion I was thinking of making to GiveWell would be that, when looking into these causes, you spend more time thinking about the long-term effects rather than the short-run effects. For example, with AMF, GiveWell has invested a lot of resources so far in trying to pin down precisely the effect of distributing bed nets on health, and particularly on infant mortality, but I think if we were going to look more at that topic in the future, resources would be better spent looking at the value of reducing infant mortality in terms of how a country might look in 50 or 100 or 1000 years, rather than trying to get more precise estimates of the amount infant mortality is reduced per dollar.
H
I think that's definitely a valid suggestion; it is something we talk a lot about internally, and we don't have complete consensus on it. This is a point that, maybe less strongly than you would make it, some people on staff have certainly asked about, and some people believe that we should maybe be doing more than we are. My main response: (a) I do think these questions are very, very important, so I'm not going to tell you I don't think they're very worthwhile, I do. If we had good plans for investigating them I'd be pretty interested in that, though not necessarily wanting to do it at this stage.

The big issue for me, and the reason I would say we're doing all those things I just described instead of researching questions like "what is the impact of infant mortality on outcomes several decades down the line" has to do with the return on investment of investigative hours. This is a concept we've covered in one or two of our recent blog posts where I kind of feel like if you're going to spend an hour investigating a question, the importance of the question should be a major factor in which question you choose but also the tractability of the question to further investigation, and that's where I'm just kind of not sold on these questions that I agree are very important. Let me just lay out how I feel about what would happen if we went two different paths.

On the path we're on I think we're learning at a pretty good clip. The politics and science stuff is the hardest to really have a progress meter for but it's not going to be long now before we have a pretty good framework for thinking about those things and can break down both politics and science into smaller causes and can take the shallow plus medium approach to them. And then within the shallow and medium approach I think the learning is really good. We're getting a good sense of which things are crowded and which aren't, which I think in some ways is my favorite question to ask right now because although it's far from the only important question it's a very answerable question. It clearly matters a great deal which areas are kind of crowded with other philanthropists and which are kind of opportunities waiting to be picked, so I think that's very valuable.

I also think as we're doing this we're gaining a more nuanced understanding of just about any cause we investigate. When we do a shallow we kind of go from knowing how much you would know after reading a New Yorker article on the cause, which is just incredibly superficial, to at least having talked to a couple experts and having a little bit better, more realistic picture of what people are doing, what the debates are, and what there's still room for a new philanthropist to do. And then when we do a medium I think we get a fairly sophisticated view, and the result of that is that if we were later to start looking at long term effects of this cause or that cause we have a better sense of what it is we're trying to analyze.

I guess I draw an analogy a little bit to the charity cost effectiveness work, where the cost effectiveness analysis is just much easier to do once you know you're analyzing AMF. You know what countries they work in, you know how much they're paying for nets, you know how they're distributing the nets, you know what the concerns are and what they're not. Doing cost effectiveness estimates for bednets or deworming in the abstract versus doing them for a particular charity, those are different endeavors, and the more of the subtleties you understand the better job I think you'll do, and I think a similar analogy holds for these kinds of causes. We're gaining a fuller picture of what they are, and it's much easier to analyze how something affects the far future or how cost effective it is when you have a pretty developed picture of what it is and when you're not going on a naive or cartoonish "I heard about this once" description of what something is, which is what I feel like we'd be doing without these shallow and medium depth investigations.

So that's the path we're on. In the future I could definitely see us, if we figure out a good way, I could see us putting a lot of effort into these questions. But to think about the long term impact of scientific research it seems pretty important for me to know what kind of scientific research we're talking about and where in the scientific research we're more likely to get involved.

Then if we go the other path, the thing I'm afraid of is that we'll take on these very important questions but we won't really get anywhere. We'll kind of take these back-of-the-envelope calculations, they'll say something, but we won't really trust them, and so I won't really feel comfortable ... I think it's very unlikely that I would feel comfortable deprioritizing a cause that's intuitively promising and not very crowded because a back-of-the-envelope calc about what's going to happen thousands of years from now told me that another cause dominates it. I just have trouble believing that I would trust the back-of-the-envelope calc that much, enough to make me discard the idea of investigating the cause, and that I wouldn't believe that investigating the cause further would change my picture of the back-of-the-envelope. That's based on my past experience with cost-effectiveness estimates. I have kind of a sense for how much those can change when you get into the details, they can change a great deal, and so I like to get those details and know what I'm comparing before we do it.

Those are the two paths I'm looking at. On one of them I feel like we're going to pick up real information, we're going to make real progress, and it will affect in a positive way the way we do these long term analyses and cost effectiveness calcs later, if and when we do them. Whereas, if we go the other direction, I'm just afraid we'll spin our wheels a lot, come up with stuff that's basically guesswork, and then I won't want to use it to prioritize causes, and so it won't actually change the direction of the other stuff at all. The classic thing here is, we're interested on all fronts. I'm interested in factory farming, I'm interested in scientific research, and I'm interested in bed nets, and to compare those three things, I'm having trouble imagining that there's a comparison I can do that will convince me that one of them wins, with how little I know about all three of them right now.

R
You mentioned factory farming and AMF; here's a potential example that's brought up a lot here [in Oxford], which is that reducing poverty seems to have significant flow-through effects into the future, because you're empowering people, improving their health, probably leading to some amount of economic development, which would allow them to be richer and to do more useful things in the future.
H
Yeah.
R
Whereas with factory farming, however bad it is, reducing the consumption of animals from factory farms today doesn't seem like it will have the same level of flow-through effects, because improving animal welfare doesn't really empower any of the important actors in society over the next century.
H
Yeah
R
You can work that out very quickly, and that's one of the reasons that many people do think that improving animal welfare, reducing animal suffering, has much bigger short-run effects but in the long term doesn't seem to have the large flow-through effects, and so they deprioritize it.
H
Yeah, I agree with everything you just said. This is a live GiveWell debate, so I don't think everyone at GiveWell would ... I think some people at GiveWell would agree with your conclusion. I agree with everything you just said but not with your conclusion. Basically, I agree that the health stuff is targeting people and therefore the case for flow-through effects is much stronger, and so naively I would expect the effects of doing something about factory farming to be much lower. If I had to guess now, I'm feeling that I'll end up thinking the health stuff has higher flow-through effects and therefore being more interested in it, because to me flow-through effects are very important.

The part I don't agree with is the conclusion that we can kind of close the book based on that analysis. Part of my thinking here, and this is a lot of where I end up disagreeing with the way that other people who are concerned about the far future frame and analyze things ... Let's look at "astronomical waste" for a second, the idea that humanity has a potentially unbelievably bright future and can colonize other planets and things like this. When I think about how we got here, it just seems like we got here by doing a lot of things that weren't really explicitly aimed at getting here, that no one could have predicted would get us here. There was a lot of stuff people did that was just kind of like "I made this thing that can make us more productive," and that's good; it's good to be more productive, and it helps people. The sum of a ton of those things put us in a position where now it doesn't look crazy that we'll someday colonize the stars, and without that kind of thought I'm not sure we ever would have gotten there. So when I look at what I believe are the most impactful people and the most impactful actions in human history, even if you want to restrict it to things that are impactful by improving the prospects for the far future a thousand years out, a lot of them just look like people who tried to be helpful or even just tried to help themselves. It was more because they did something incredibly well that they got some unexpected result. They did something really novel, really creative, really well, and they got some totally unexpected result, and that is what has accounted for a lot of the progress.

So when I think about that stuff, a lot of what I keep coming back to on this factory farming stuff is the possibility of unknown unknown flow-through effects. There are all the things I can tell you about how helping people with bednets might speed the pace of development, which might improve economic productivity, and all that stuff. But there's also just this big block of things where if you just help people and you do it really well, or you just do things that are heuristically good and you do them really well, things that we can't think of yet will happen that will be good. I think there's been real good created by people in entertainment and people analyzing sports and just all kinds of stuff. It doesn't mean that I think that's the most good, and it doesn't mean ... we're not having people do that, but if you're looking at two causes and one of them is really dominant in terms of how big an opportunity there is, in terms of what you'd actually be able to do and how big a difference you'd be able to make, I think then there arises a strong possibility that your flow-through effects are big in a way you just can't see. So with factory farming I think there is some case for flow-through effects, but I think most of the case is just stuff I haven't thought of yet. I know that may sound like a somewhat crazy view, but that is definitely my view at this time. I'm just not, especially with how little I know about the whole factory farming landscape, ready to say that I can conclusively determine that there are basically no flow-through effects here and so I'm not going to look into it further.

R
I'm one of the relatively [?] there might be significant flow-through effects to animal advocacy, but I'd be interested to ask Carl to talk here, because I think you'd probably say similar things to what I would say, but better. Do you mind talking now?
C
Sure ... Was that "Carl" or "Paul"?
R
"Carl".
C
So, we've recently been having a bunch of discussions at Oxford on the question of ... so you can look at all these various good things happening now, and say this good thing, if it goes forward, will have various good consequences in the future, but we have a lot of different options for what metric we might use as a local indicator of goodness. One of those might be dollars of GDP, economic output. For things like funding pharmaceutical research, research into making better chips, space travel, and improvements in agriculture, a lot of that grows with the size of the world economy. If total pharmaceutical spending goes up by a billion dollars, then that's going to increase R&D in the pharmaceutical industry. So this is a case for going with dollars of economic output.

But we might instead say that, looking at individual people's lives, happiness does not go linearly with dollars of economic output. Maybe your impact on science and technology goes by way of funding research into pharmaceuticals, but your happiness grows more like the logarithm of your income. One measure would be Quality Adjusted Life Years; another we might use would be log(income) or an approximation of log(income); we could look at total income, we could look at population, we could look at per-capita income, or a more complicated [?] of the distribution. A lot of the improvements we've seen in the world over time have been correlated with ... if you look between countries, say at the frequency at which wars happen, or the quality of political institutions, these seem naively to be very good things, have some nice flow-through effects, and they tend to go more with the per-capita income of the country or average(log(income)). You might even look at features like how the world's economic output, its total capacity, is distributed and how unequally it is distributed. If you look at, say, India and Pakistan, which are very large countries which have lots of economic capability because of their size, they both have nuclear weapons, they're able to afford nuclear weapons because of their size, but there's a relatively high amount of instability and poverty there. You might think that one of the factors affecting the global stability of the world is whether you have a large amount of military potential relative to the number of people, so we should really focus on improving the quality of institutions in countries that are large and relatively influential. This would be a case not to go for world GDP but to bring up per-capita income in poor countries. There's a sort of tradeoff, in that you want countries to become more stable, more peaceful, generally better places to live, and we think that we tend to improve the situation for the whole, but then capabilities for destruction go up at the same time. And that's something you might look at if you're comparing say migration and [?]. So it seems like there's a fairly large variety, at least a dozen, of plausible metrics that you could use as local indicators of ordinary goodness, things that look like they have a lot of flow-through effects. A lot of interventions that seem to be very promising, that seem like "wow, this is kicking one particular metric off the scale," often the way it works is that they're simultaneously shifting other metrics in the other direction, and that's part of why the intervention hasn't been widely adopted and endorsed.

H
Carl, are you still going?
C
To try to make it really concrete: so in the global health area, [?] if you take the interventions that have bigger effects on productivity and earnings, but not a very large impact on mortality.
H
Yeah.
C
Like micronutrients, maybe deworming, versus something like ... Malaria also, we think has other impacts on productivity and not just mortality, but there's definitely a varying degree to which the interventions are going to affect the income and productivity of existing people, the extent to which they're going to change the size of the population, the extent to which they're going to affect per-capita income, total income, etc.
H
Let me just throw out there real quickly: we've actually been having the same discussion about log vs income, like are we trying to maximize world GDP or are we trying to maximize world happiness, which is like log(income) or something. I think it's clearly an important debate, in that your view is going to have a huge impact on what you choose to fund. The question that I have is, are we going to get analysis of this question that is robust enough to heavily shift priors? I think what's quite likely to happen is that we're going to end up saying that some of these are just giant judgement calls: if what you believe is that total GDP is what matters then this is where you should be, if you're much more focused on kind of happiness then this is where you should be, and if you're more worried about existential risks than insufficient technological progress then this is where you should be. And then we'll do whatever analysis we can and provide as much analysis as we can, and we'll make the decisions as informed and intelligent as we can, but ultimately I think we and our donors will each make our own decisions on that. That's what I think is a pretty likely outcome. If I didn't believe that, if I thought we could kind of "close the book" on some of these questions, then I would be trying to close that book. That would substantially impact which causes we'd want to investigate and would save us a ton of time. It's my skepticism that we're going to get robust enough answers that makes me prefer the order that we're going in.
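(A minimal, purely hypothetical sketch of the log-vs-total-income point above. The numbers and the two "interventions" below are invented for illustration; the only claim is that the same pair of options can rank in opposite orders under the two metrics.)

```python
import math

# Each hypothetical intervention: list of (income_before, income_after) per person affected.
raise_rich = [(50_000, 60_000)]              # one well-off person gains $10,000
raise_poor = [(500, 1_000), (500, 1_000)]    # two poor people gain $500 each

def total_income_gain(effects):
    # "world GDP" style metric: sum of dollar gains
    return sum(after - before for before, after in effects)

def total_log_income_gain(effects):
    # "world happiness" style metric: sum of gains in log(income)
    return sum(math.log(after) - math.log(before) for before, after in effects)

for name, effects in [("raise_rich", raise_rich), ("raise_poor", raise_poor)]:
    print(name, total_income_gain(effects), round(total_log_income_gain(effects), 2))

# raise_rich wins on total income ($10,000 vs $1,000), but raise_poor wins on log income
# (2 * ln(2) ≈ 1.39 vs ln(1.2) ≈ 0.18), so the choice of metric flips the ranking.
```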
C
It seems to me there are some kinds of empirical information that you could be collecting.
H
Definitely.
C
[?] how the different interventions are affecting these metrics. To take the example of AMF, there's a total utilitarian case that if you're increasing the population then you're increasing total happiness along with total GDP and so on. Your estimate of the effects on total GDP is going to depend significantly on this debate about, if you save a child from malaria, how much do people reduce their fertility.
H
The question of the relationship between mortality and fertility and between mortality and economic growth, those are things that do have an economic literature on them. We have looked at that literature and kind of come to the conclusion that a lot of people have been trying to get somewhere on this for a long time and haven't gotten very far, and haven't gotten us to the point where we're going to be able to conclude much. What I do agree with is that eventually, although I don't think it's an issue now, we want to provide all the information we can obtain and all the information we do have and say "we're guessing here, this is what we're guessing based on" and help other people make the same guesses.

When I think right now about what our priorities should be, there's this information that's having actionable effects on what we're exploring further and what we're not, and then there's that investigation that I think would serve a valuable clarificatory role but wouldn't be as likely to affect our process or our priorities. That's why in terms of just pure efficiency and speed it seems better to go in the other order and to save that analysis for the stage where we're actually recommending things and we're going around talking about them and arguing about them; that's when we should start filling in that picture. Or maybe prior to actually making the recommendations, but maybe as a last step before doing so.

C
It seems like this is actually an issue [?] your current top three charities ...
H
Yeah.
C
... in that it's very difficult for an outsider because, for example, the effects on the size of the world economy, and even the total amount of happiness, depend so heavily on these effects on population. Imagine the extreme case where ... some people say that actually the fertility response is enough to swamp the effect, that if you save a child from malaria you actually reduce the human population ... negative flow-through effects from that. This is a view propounded by a number of people in the field. An outsider is looking at GiveWell's recommendations and they know there's some mix of judgment calls [?] the literature on that. It's very difficult to know whether to trust that or whether we should just wait until that becomes clear.
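(A toy illustration of the "fertility response" point Carl is making, with invented numbers: whether averting child deaths raises or lowers population depends on whether parents reduce subsequent births by less than or more than one birth per death averted.)

```python
# Hypothetical numbers only: net population effect of averting child deaths.
# fertility_response = average reduction in subsequent births per death averted.
def net_population_change(deaths_averted, fertility_response):
    return deaths_averted * (1 - fertility_response)

print(net_population_change(1000, 0.3))   #  700.0: population grows
print(net_population_change(1000, 1.0))   #    0.0: fully offset
print(net_population_change(1000, 1.2))   # -200.0: the "swamping" case described above
```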
H
Yeah. We do cover that last question you asked. That is in an FAQ, though it's unusually not written up by our standards. We have done a lot of work internally on the question of how mortality affects fertility, but have never really had it high enough up on the priority list to write it all up or even fully finish it. It just really wasn't ... the studies are just really problematic. But we do address that in an FAQ, so it's not like we're silent on the question.

I do think you're absolutely right that if the thing GiveWell were maximizing was the quality of the analysis on our top three charities and how to choose between them, then we should be doing that sort of thing, because you're right, cash, deworming, and malaria control are all very different things, and some of the differences have to do with mortality vs earnings. But we have made an organizational decision that that's not our primary goal at this time. If we tried to always do everything we could possibly do to make those kinds of traditional recommendations as good as they could possibly be, we could spend another 20 years working full time on that and pretty likely not even change the order of our recommendations, because that's kind of been the pattern. The outside view so far is that we did a ton of work last year and it didn't shift us anywhere. Meanwhile, we wouldn't be exploring these other causes. We wouldn't be making progress on understanding the options in scientific research, and politics, and global catastrophic risk prevention and mitigation. When we think about our priorities, I would say that we're trying to uphold about the level of quality we have right now on our traditional product, but it's not a priority for us to substantially improve that quality. We're taking the resources that could go to improving that quality and those are really going to GiveWell Labs.

I think our analysis on our top three charities is very good, I think it's very helpful, I think it's better than anything else out there, but it's not close to as good as it theoretically could be. Rather than invest in making it that good, we're choosing to take those resources and invest them in GiveWell Labs instead. And on the GiveWell Labs front, that's where I feel that these questions ... and I do, I think I've made that pretty clear, but I think there's a reason that we've avoided these questions, and we have avoided them, because to me they come last: they're not going to affect your workflow, they're not going to change your priorities. They may clarify your conclusions, they may affect your ultimate giving decisions, but they're the kind of thing that for any given process I think you want to do last. Again, if the priority were making GiveWell 1 as good as it possibly could be, perhaps that's what we'd be working on now, but since the priority is GiveWell Labs they come later in the process.

C
The [?] caveat to that is in guiding the collection of information, especially for the medium depth or more investigations ...
H
That's exactly what I'm saying: I'm too skeptical that they are going to have that influence. My experience with the sorts of analysis you guys are describing, and this is very much outside view type reasoning, based on our experiences with having these debates and looking through the literature on these debates in the past, is that you never end up with something that's robust enough to change your priorities and your work plan. You can end up with things that are very interesting, and that may affect how different people give, but they'll affect how different people give in different ways. Reasonable people have very different spins on the evidence. I've never looked into a question like the effect of health on development or the effect of health on fertility and felt that it gave me enough information to deprioritize a charity on that basis. And by analogy, I think looking at long term impacts of science vs factory farming is not going to give me enough evidence to deprioritize either one.
C
If we're talking about a rough value of an opportunity, then a particular piece of evidence doesn't necessarily have to make a tenfold difference in value between one thing and another. Are you saying that the update in the valuation of the opportunity has never been above, like, 1.5?
H
It's never been enough to make us ... I shouldn't say never, actually. Another form of this analysis that we have found helpful is the difference between developing and developed world, direct aid. So that's a case where that kind of analysis has kind of made us really prefer the developing world and really deprioritize developed world aid. But I think I have enough preliminary experience with the questions you're asking now to believe they're going to be more similar to the Malthusian stuff than to the developing vs developed world stuff. And you know me, it's not always about magnitude of adjustment so much as it is about robustness. It's not, does my back of the envelope say is this 1000 times as good or 10 times as good; it's more like how comfortable am I with the lower bound of the back of the envelope calculation being 1.5 times as good. And usually with these questions I just don't end up comfortable about that.
C
It's more as an all things considered, including the probability that the evidence is mistaken ...
H
Yeah, exactly, exactly.
C
Sort of, what's the biggest change, in sort of emotional attraction, that I was trying to get at ...
H
Sorry, say that again?
C
The change in your all things considered attraction to an option, it's never been such as to make an option seem half again as good, maybe since the developing vs developed ...
H
That's got to be the biggest one. That was a major change in my views over time. And I think that was one of those cases where I think it was really about the robustness. Like, it wasn't just ... we published this cost per life saved vs cost per year of school, and that was definitely part of it, but it was also seeing the international poverty statistics and understanding how different they are, and then going overseas and really believing that those numbers are not screwed up but those numbers are really roughly correct. Then kind of stress testing that by talking to a lot of people, and then coming to be pretty confident in this kind of vague feeling that direct aid to the developing world is going to go so much further. And that was a big update. I mean, I wrote a blog post several years ago about how US education was my cause of choice, and it's nowhere on the list right now. For me anyway. I mean, I'm not going to rule out that we look into it. And it's only certain types of US education, I should clarify. There are all sorts of things one could mean by that phrase. There was a particular kind of intervention we were looking into. And that was a big shift.

But again, like I said, that's why I don't ... there was a lot to that analysis beyond back of the envelope calculations that really convinced me. That's why I think when we start talking about what has more effect on the human race 1000 years from now, a small amount of marginal counterfactual progress in scientific research or a large amount of counterfactual progress in reducing factory farming, I mean I just don't believe I'm going to arrive at a firm view on that. I certainly have an intuition now, and I certainly expect to revise that intuition in light of new information, but to get to the level of robustness we need in order to say this thing is off the agenda I think is really unlikely.

N
One thing I am curious about is where you see trying to look at the impact in terms of long-term considerations coming in, at what stage of the workflow does that fit in? I mean I certainly understand where you're coming from if you say: I've got some ideas about which causes I think are good, and I've got a list of them, and I fix them, and I want to do these shallow cause overviews, and we should do any long-term effects stuff after we do these shallow cause overviews and maybe certain other things. It seems like the hope is, and I think you're right to stress what the hope is in looking at the long-term impacts, the hope is you can either write off certain shallow causes so you don't have to do those, or you can decide not to do certain medium depth investigations, or even much greater depth investigations of a cause that you might do later. So I'm just wondering where you see it fitting into the workflow?

And just one other thing I wanted to comment on. I think you can distinguish between saying that doing this type of analysis is enough to robustly make me think that cause A is better than cause B, and saying that doing this kind of long term consideration analysis is enough to produce one consideration that strongly speaks in favor of cause A over cause B. I think it might be valuable because of the second thing and not because of the first thing.

H
But if it's the second thing ... that affects when you want to put it in the process. The first thing you'd want to put early in the process, if you expected it to go well. The second thing I think you'd want to put late in the process.

Another thing that's worth acknowledging here is that I've thought about all these things, and I've talked about them a lot with the people who care the most about them, and I've looked into them a little bit and kind of seen what the arguments out there look like. So that is a lot different than when I used to be into US direct aid, which was when I worked at a hedge fund and didn't really know anything. Given the level of knowledge that I have, not only of what the facts are but of what facts are available and what analysis can be done, it's worth keeping in mind that that's non-trivial. I think there's a really strong possibility, I think it's more likely than not, that my views right now on things like science vs factory farming are just not going to shift significantly. I hope that's wrong, I hope we get really good information to help me think about stuff, and I think there will be things that change our mind a little bit, but it's very possible that my views are not going to change very much from here. Unless new information comes out, like new things happen in the world; but from further analysis of the facts that already exist, I don't know. I think there will be subtle changes, but I think it's quite likely that there's not much more movement coming ... and other people on staff disagree with me about that, and it's certainly not a commitment, I'm certainly hoping to learn a lot more and to gain the kind of information that could make me more confident in things and could help me shift my views.

Given that, I think a very likely ... you ask when in the process we think is the right time. I think a likely sequence of events is that we do this process I've described and then we arrive at lists of priority causes. And then at that point we're kind of recommending, to major philanthropists or to ourselves via whatever pool of money we can raise from individuals, that we should be hiring experts to really get deep into these causes, and that point would probably be the earliest at which we would start doing this other analysis. It's like: the recommendations are already out there, we largely feel like we've covered the landscape we can cover in terms of shallow investigations and drawn what we can from it about which causes look most promising, and we've got other people really investigating the causes in depth. I think at that point, at the earliest, you'd want to start saying how can we refine our views of which causes we think are most important based on considerations around flow-through effects, long term effects, etc.

N
So are you thinking ... you discussed shallow investigations and medium depth investigations and the kind of work you're doing on political advocacy and scientific research and so forth; is that like we've done medium depth investigations of all the causes that we think are worthwhile, or is it that we've done something more substantial than medium depth investigations of these causes? I mean it's probably early to say but ...
H
It's a little early to say. I would say something along the lines of medium depth investigations of all the things we think are somewhat promising. But the definition of medium depth could change, or there could be a new category between medium and deep or between shallow and medium, or there could be many categories, but if you're trying to get an idea I think that's a good conceptual way to think about it.
R
Paul, would you like to talk about your reasons for thinking this is something worth doing, even if not for GiveWell, at least for somebody, even if it's quite likely to fail?
H
Do you mind if I throw in just one more comment that I've had in my head and don't want to forget?
R
Sure.
H
It's just an analogy, it might not be that helpful, but in terms of talking about my feeling about the unknown unknowns, and about how sometimes it may make more sense to just do something you know really well without worrying about how theoretically valuable it is, an analogy I might use to make this a little more intuitively plausible, and I'm certainly not claiming that the analogy is perfect or that it proves anything at all: if you're a scientist and there are a lot of different problems you might work on, you could look at each of the problems and ask which is most important, or you could also look at each of the problems and say which one of these do I see an interesting path on, which one of these do I think I'm going to be able to get somewhere on relative to what other people are doing? And hopefully you guys can see it's not at all clear that the latter shouldn't dominate. It's a little bit hazy, in terms of how I actually perceive how a scientist should make decisions. There are fields and problems that just seem clearly irrelevant and really unlikely to have social value, and those are maybe fine to ignore even if they seem very tractable, but then there's a large class of problems that could be important but it's hard to say, and it seems like when choosing which of those problems to work on, if you have a really good idea for getting somewhere on one of them, that seems like a factor that's likely to outweigh any kind of highly uncertain back of the envelope calculations you do about the relative importance of the problems for humanity 1000 years down the line. I don't know if that clarifies anything. That's not intended as any kind of a proof, it's just intended to clarify what I'm thinking, what my intuitions are.
R
It's interesting. I can completely see the analogy and it's [?] way of thinking about it, but I have a pretty strong gut reaction the other way: that choosing the question is more important than working out what you're better at. Or, if you find that your comparative advantage is in researching questions that don't seem that important, then maybe you should become an investment banker and donate the money toward someone ...
H
But Rob, I've conceded there are some questions where it's like, ok, this really doesn't seem important. But let's say there's a large class of questions that all seem like they're probably important, or they could be important, and it's really hard to say which ones are more important than others. Let's say ... there's a whole bunch of different kinds of renewable energy we're trying to work on, there's a whole bunch of diseases we're trying to cure, you have a bunch of options like that. I mean, I agree with you: at the point where you can really say my skills are good only for these kinds of useless problems, maybe I will go make money and give it to charity, that's one thing. But when you're at the point where it's getting really hard to tell, and I think that for a lot of the decisions scientists make it's more the latter ... I mean, once you're in a socially relevant field you end up with a lot of stuff that looks like the latter, where it's really hard to tell what the social significance really is and everything looks like it could be a huge deal or it could be not a huge deal at all.
R
So I guess it's a question of how large that class of plausible contenders that you can't tell between is ...
H
Right.
R
... I'm inclined to think that it's smaller ...
H
Right, but you can see ...
R
... I've looked at a lot of medical research that's misdirected into diseases that aren't that important, that not that many people die of ...
H
But see, Rob, this is a place where I wouldn't agree with you. I think if you can cure a disease that's not that important, a lot of the time that's going to involve insights that are very important to other things, that are very fundamental. That's exactly the point I'm making. There are classes of inquiry and fields where it seems really unlikely you're going to do anything, but I think the class of problems where you should choose based largely on tractability is reasonably large. The whole point is that science is very unpredictable, and on the way to solving one thing you often solve another thing, and then that thing has effects you never would have anticipated and inspires someone else to do something else you never would have anticipated. And so if you see the world of philanthropic actions as having the right similarities to that world, and I see it as having some similarities, in that it's very hard to predict the long term path of your actions, and it's also very ambiguous which of these causes is really better for long term effects ... I think that hopefully illustrates what I'm saying.
C
One reaction that I immediately have to that is thinking about someone who winds up with a great idea or opportunity that they somehow found, and they have enough evidence about the strength of this opportunity to think that it's much more valuable, or at least a much bigger contribution to that field than is typical, enough to outweigh the differences between their field and others. Those don't usually fall from heaven; usually you get good ideas in a field by working in it, studying, exploring, experimenting and so on. Now maybe you're making that analogy to something about seeing the room for more funding or room for more talent in a different field, but at the level of ... this great scientific idea that I've got ...
H
No, no, no, I'm with you. I think there are two different versions of the analogy you could think of. One would be you're choosing a field and you're looking at your skills and which field they're suited to. That's one place where I think my analogy would hold to some degree. And another would be you're in a field and you know the field and there are multiple problems that you know enough about to know how good your relative ideas are for them. I don't know which analogy is better, either analogy has some merit to it, but in both cases I'm largely thinking about room for more funding. We can get to know these areas well enough to get a sense for where there's really a contribution we can make, and not just have that be a total wild naive guess from the outside but actually have it be a grounded opinion.
C
It seems to me that still there's ... someone that's going to be a good manager at a pharmaceutical company is probably also not going to be a terrible manager of a company that's more in the software ... and the question is whether we're talking about early on in their life, before they've gotten the advanced educational credentials in computer science or biotechnology. That level of extreme difference in abilities between two fields seems like a somewhat unusual situation ...
H
Uh ...
C
... just because ... mathematical expertise, a lot of these things are going to generalize. If we're talking about something like cancer biology vs. human immunodeficiency virus, the prior aptitudes you would have are just not going to be 100 times more suited to cancer biology than to infectious disease.
H
We're getting a little bit off topic. You can basically draw large clusters of things that skills are fungible across, and you can draw clusters of things that they're not very fungible across, and I would assert that a lot of decisions scientists are making follow the framework I'm describing. But exactly pinning down who is choosing between which fields is probably not what we should be doing on this call, since I probably should only be on here for another 15 minutes anyway. I would be happy to talk about it another time, but I think we're getting a little bit into the weeds.
R
I agree with that. I just wanted to raise a point that Paul made in his essay, which is that he looked around and tried to find people doing this modeling of flow through effects, comparing causes on the basis of it, and found very few people who were trying to do it now or seemed to ever have tried to do it.
H
Right.
R
One conclusion from that would be, well this is impossible and people are avoiding it for good reasons.
H
Yeah.
R
The other might be, well this is virgin ground and we might be able to make a lot of progress because no one's really tried before and we could have great unexpected success. I'm inclined to agree ... let's say we throw half a dozen people at this for a couple of years, probably at the end we'd be fairly disappointed and find that we haven't made a huge amount of progress and maybe we're not willing to shift where we're directing our money or our time. But there's quite a bit of upside potential there and if it works we can scale it up, if it fails we can close it down and take the reasons why it didn't work and make them public so others don't waste their time. I'd really like to see someone do an experiment with this kind of research, and a very public experiment so other people can learn from it, even though I think it's more than 50% likely to come to few concrete recommendations.
C
I think the general perception around here is that what limited work along these lines the Copenhagen Consensus and GiveWell and such have done has actually been rather valuable on a per-dollar basis. Paul also raised a good point, that if we look at the amount of human capital going into a sustained cumulative process where people build on each other, it's ... these areas of medicine and economics and such ... just orders of magnitude. And if we look at what gains people had early on in those fields and compare. This is a field, but at the level of society. It's not necessarily where the very most marginal dollar or very most marginal person in the effective altruist community ... there's something odd about that.
H
Yeah. First off, just to note, if we're talking about how GiveWell should prioritize its time, that whole process that I laid out, none of that is being done by anyone else. So that's not an argument for me to do flow through effects instead of what we're doing. But if you're asking is it a total waste of time or is it worthwhile, is it something we'd be happy to see someone else do, well no, it's definitely something we'd love to see someone else do. We think that would be really great. And obviously if any of you guys or people you know were working on this sort of thing I'd be interested in staying posted on it, I'd be interested in keeping up, maybe sharing our thoughts as it developed, and then down the line we might look at it and say wow, it turned out a lot better than we expected, this has really helped us, or we might conversely look at it and say, this really didn't go very well and now we know not to waste too much time doing this stuff. All the analysis that Rob gave I think is correct, and it's a thing that has high value. Doing this thing has high value. Does it have as high a value ... I think the thing we're doing is a better move for us because it has that same quality of no one else trying it, and I think there are other things in the world too that have probably higher value. But does this have high value? I think it does.
C
So, the policy implications of that. My understanding is that GiveWell is not cash-limited and can't hire more people because of the management overhead. Is that right?
H
Less true than it was. We're going to be writing about that. Right now money is more relevant to what we can do than it has been in a while.
C
That's because you're hiring more people ...
H
We'll be writing about this more, and maybe we should even keep this bit off the record, just because we want to write about it and frame it our way and it's going to be soon. A lot of it's just good news; I think we've done better than we expected to in terms of recruiting. There's more to the idea that funding is the bottleneck than that people are the bottleneck, but that's just how things happen to be right now.
C
So you would be, say, more enthusiastic if, say, Rob were doing more shallow cause type analyses at CEA than doing flow through effects analysis?
H
No, I wouldn't say that. This is again where skills and aptitude and fit come in, so this is the kind of thing where ... I don't usually have super strong opinions on what other people should be doing with their time, or to the extent I do I recognize that the opinions are very much likely to be wrong and I can present them but I don't expect anything to happen. I have strong views on what we should be doing, because I think about what our options are with the different paths and where they are going to lead. This is why I've hesitated to say this flow through effects investigation is one of the best things ... it really depends on what else you could be working on and what your other options are. I think it has high value in the grand scheme of human actions, I would be happy if someone were working on it relative to most things that people can work on. I don't know that I would go from there and say that Rob or Giving What We Can should be working on one thing or another. It really depends on their skills and fit and all that. Actually, I have some intuition that Giving What We Can is better suited and maybe more interested in the flow through effects stuff instead of doing more shallows, but that's not a very strong opinion.
P
Yes, I have one more comment about current priorities for GiveWell, maybe in the context of a concrete example. I don't know how much longer you'll be here, Holden?
H
Let's try and make this ten more minutes.
P
If you look at this case of funding science research, at philanthropy's involvement in scientific research: one question you can ask, which is the one you've been looking at, is what are the levers in science a philanthropist can push on, how many people are trying to push on them, and so on. Even saying this is the right first thing to look at is a choice, because this will affect how it is you'll think about the impact of those levers, etc.
H
Yeah.
P
It seems to me here that there's more symmetry in the situation than you're acknowledging. If we look at the other side, on the flow through effects, maybe the questions you're looking at are more like: what is the role of scientific research in the world broadly? It's not different in kind from the question of what is the role of philanthropy in science; it's not a question that requires qualitatively different reasoning. It maybe just requires slightly changing the questions you ask in conversation, having some other conversations with a different class of people. And similarly, it's not the case that just figuring out which levers you can push on allows you to prioritize which levers are important: it's also the case that understanding which levers are important allows you to figure out which levers you can push on. So it seems like there's really quite a lot of advantage to doing these two things in parallel, trying to understand which areas of science are really having a lot of impact, or which types of scientific change would have a large positive impact, along with which types of scientific change you can affect. You can try to estimate which areas are crowded and which have more room for more funding, but it's not clear that you can just look at an area, say it seems like there's a lot of work in this area, and conclude that it doesn't have room for more funding. There can be huge gaps between how much funding is in these areas and how much normatively ought to be. So it seems really hard to answer that question in isolation.
H
Let's be clear, none of these investigations are focused solely on room for more funding. Room for more funding is an example of something where I feel like the tractability and the slope of learning are very good: we learn a lot and we end up moving our priors a lot. But that's not the only thing we're asking about. All the shallow investigations are trying to get a sense of the importance of the problem too, and the medium investigations as well. So when you talk about what we're actually doing in science, (a) just remember a lot of what I'm doing on the scientific front right now is just immersion, it's not trying to answer particular questions, it's trying to get acclimated. It's like the equivalent, if you wanted to do aid in Sub-Saharan Africa, of spending a lot of time in Sub-Saharan Africa or something. It's not a lot of time, right, it's not a lot of time that I'm investing, but it's enough that I feel like I'm gaining in my basic understanding of just even how to think about this stuff. So that's a lot of what I'm doing. In terms of the questions we are asking, we're largely asking scientists what they think is the underinvested stuff from the perspective of transformative science, which I think is something that people are often better equipped to answer than asking them to also consider the thousand-year-out impact of economic growth and how all that plays into each other. A lot of scientists may not think a lot about that stuff or have much to say about it, but they do think, in a little more proximate way, about which of the things that really could change the world are not being invested in. Those are the kinds of questions we're asking; we're not just asking people to point us to the areas of science that have no funders. That's not been our approach at any point. If you look at the causes we've done shallows on, the causes we're preliminarily writing about, we haven't just tried to find things no one is doing. Then we would end up with a bunch of things that don't matter at all. We're looking for things that could plausibly be super important and that are kind of undercrowded given that.
P
One concrete point here would be: there's some chain of actions and effects that occurs, and scientists doing research are at some point in the chain ... one could look farther down the line and try to understand what the broad economic effects of research in these different areas would be like, or whether they have other positive effects that maybe aren't captured in the economic impact. That's a question where one would not just be talking to scientists in the area; one would be looking more broadly, maybe at what has happened historically from progress in this kind of area, or talking to policymakers on this stuff.
H
So that is something that I'm looking at, and I've kind of done a little mini literature review on that question. I'm bucketing that, I haven't mentioned it on this call, but I'm bucketing that as the question of the ROI of science. Instead of just asking what are the best scientific investments out there today, asking what have we historically gotten for our investment in science as a society? This is a question where there's been enough written on it to make me feel kind of pessimistic that we're going to learn much more about it very easily, but I've tried to review what's out there and it's something that's ongoing. I think it ends up being very ambiguous. The consensus I think is that public investment in scientific research repays the investment several times over; it's an excellent investment from any kind of dollar-return perspective. I've done some estimates on the cost per life saved; they're in the range where different people would call them good or not good, and in terms of broader impacts that's about as far out as you can get really useful empirical information, as far as I can see.

And then there are much more vague observations, which is just looking at all the different dimensions on which life has improved since the industrial revolution and just hand-waving and saying, well, clearly scientific innovation has been a major driver and a part of them. I mean I buy into that, and I've read a fair amount on that topic, but it feels like I'm reaching the point where ... it will take a lot of work to learn a little more about that in terms of moving priors.

And I don't mean to say that any of these questions are just worthless to work on. I think for a lot of people, and for me at one point, learning about the history of how much and in how many dimensions human lives have improved over time is very important and really changes the way you think about different causes relative to each other, but it just happens that you reach the point where it would take a lot of work to learn a little more, and I feel like we're at that point. And that point can come fairly early for these really big broad questions.

R
I guess we might wrap up there. As usual it's been really interesting and I look forward to seeing what people have to say about it when we post this online.
H
Cool.
R
A take-away for me is that I would really like to find someone I can give a concrete task in this area to, something that I'm optimistic they might actually be able to make some progress on and make a concrete recommendation from, and see how they go. Do a trial run [?] then come back and see whether any of us have shifted our views on the basis of it.
H
Sure.
R
Ok, well, looking forward to talking to you again soon, thanks everyone.
H
Good talking to you guys, thanks.