  Appeals to Consequences

    July 19th, 2019
    giving
    Jessica Taylor recently wrote a post objecting to what she describes as appeals to consequences. In general, an appeal to consequences is treating "X would have bad consequences" as a reason to conclude "X is false", but Jessica is using it in a broader way, to include the idea that "saying X would have bad consequences" is a reason to avoid saying X. Her motivating example is:
    Carter: "So, this local charity, People Against Drowning Puppies (PADP), is nominally opposed to drowning puppies."

    Quinn: "Of course."

    Carter: "And they said they'd saved 2170 puppies last year, whereas their total spending was $1.2 million, so they estimate they save one puppy per $553."

    Quinn: "Sounds about right."

    Carter: "So, I actually checked with some of their former employees, and if what they say and my corresponding calculations are right, they actually only saved 138 puppies."

    Quinn: "Hold it right there. Regardless of whether that's true, it's bad to say that."

    Unfortunately, this is not a good example to build a post around, because Carter's statement actually has good consequences. Yes, it might lead to people donating less to this specific charity, and the charity still does some good with its money, but building a culture of caring about the actual effectiveness of organizations and truly trying to find/make the best ones is far more important than how much money any specific organization raises today. Plus if, say, ACE trusted this higher number of puppies saved and it had led them to recommend PADP as one of their top charities, that would mean displacing funds that could have gone to more effective animal charities. The whole Effective Altruism project is about trying to figure out how to get the biggest positive impact, and you clearly can't do this if discussing negative information about organizations is off limits.
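
    To make the stakes concrete, here's a quick back-of-the-envelope check using the figures from the dialogue above (a purely illustrative sketch; PADP and its numbers are from Jessica's hypothetical):

    ```python
    # Rough cost-effectiveness check using the numbers from the dialogue above.
    spending = 1_200_000   # PADP's reported annual spending, in dollars
    claimed_saved = 2170   # puppies PADP says it saved last year
    actual_saved = 138     # Carter's corrected estimate

    cost_per_puppy_claimed = spending / claimed_saved  # ~$553 per puppy
    cost_per_puppy_actual = spending / actual_saved    # ~$8,696 per puppy

    print(f"claimed: ${cost_per_puppy_claimed:,.0f} per puppy")
    print(f"actual:  ${cost_per_puppy_actual:,.0f} per puppy")
    print(f"ratio:   {cost_per_puppy_actual / cost_per_puppy_claimed:.1f}x worse")
    ```

    If Carter is right, the true cost per puppy is roughly fifteen times the advertised figure, which is exactly the kind of information an evaluator like ACE would need.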

    The problem with building something on top of a bad example is that intuitions push the wrong way and tend to mislead you. Here's an attempt at an example that I think makes the tradeoffs clearer. Imagine Carter as a researcher who has run a small study on a new infant vaccine and seen elevated autism rates in the experimental group. Because there's an existing "vaccines cause autism" meme that is both wrong and harmful, Carter needs to be careful about messaging. Here are some ways this could go:

    • Carter's experiment is replicated, confirmed, and the vaccine is abandoned.

    • Carter's experiment fails to replicate, researchers look into it more, and discover that there was a problem in the initial experiment / in the replication / they need more data / etc.

    • Headlines that say "scientists finally admit vaccines do cause autism", rates for unrelated vaccines fall, people die from measles.

    How do we leave open the possibility of the first two outcomes while avoiding the third? Because of the potential harmful consequences of handling this poorly, Carter should be careful about how they talk about their results and with whom. Trying to get funding to scale up the experiment, making the FDA aware of their preliminary findings, letting other researchers know, etc., are all beneficial and have good consequences. Going to the mainstream media with a controversial sell-lots-of-papers story, by contrast, would have predictably bad consequences.

    At one end of the spectrum, I think you should talk freely among your friends and colleagues, without worrying about whether what you're saying would have bad consequences if mishandled, because you have enough shared context with them, and it's critically important to have places where "what would the effects of sharing this be?" isn't dragging you down. At the other end, when talking to a larger audience or in a situation with less shared context, there are topics where you need to be more careful. In between, I think that unless you're very well known or talking about something explosive, you can probably say whatever you want in a public post as long as you make it sufficiently verbose, boring, or informal.

    Comment via: facebook, hacker news
