
  • Thoughts on Existential Risk

    March 11th, 2018
    giving
    I recently talked to someone who wanted to know what I thought about existential risk (x-risk). The idea is, how do we keep humanity from being destroyed, or otherwise failing to meet its potential? Within the EA movement there's been some shift in focus from global poverty towards x-risk over the past few years, and as someone who's been advocating for and funding poverty-related organizations they thought I might have some thoughts.

    Here's more or less what I said:

    • Future people matter, and we owe it to our children's children to keep from wiping ourselves out. Since future people aren't around yet to advocate for themselves, most decisions today aren't fair to them. A big contributor to global inequality is that the worst suffering is far away from the richest people, and another is that the people involved are less in a position to advocate for themselves: both of these apply even more strongly to future generations.

    • There could potentially be a lot of future people. The scale here is much larger than anything involving the people currently alive on earth.

    • I'm much more skeptical than most people I talk to, even most people in EA, about our ability to make progress without good feedback. This is where I think the argument for x-risk is weakest: how can we know whether what we're doing is helping, and how can we do as much good as we can without that? This has been my main concern since I first started thinking about x-risk.

    • Cause prioritization, evaluation, and thinking through potential future scenarios are all really hard, and I expect they benefit both from focused study and conversations with people who've similarly put in a lot of thought. As an EA who's gone into earning to give, however, it does make me feel somewhat disconnected. I could say "seems like community consensus is to prioritize x-risk more, despite the reasons that it doesn't seem tractable to me" but that's not how I work, and it's not really EA. I feel pretty conflicted about this, and it's an additional reason to be wary of earning to give.

    • Overall, I've been feeling unhappy about my giving since at least 2016. I think our 2017 donations were valuable, but flexibility is one of the main advantages of earning to give, and it's not one I'm making much use of. Good giving takes time, research, and investigation, and right now I'm not really able to put in that work.

    I'm unsure enough that if I don't sort this out this year I might shift giving from GiveWell's top charities to a donor-advised fund. Though even then, I'm not all that convinced that I'll know better next year either.

    (I'm not considering ways of doing good things with my career other than earning to give currently; all of those options seem like they would involve moving or working remotely, neither of which I want to do.)

    Comment via: google plus, facebook


