Appeals to Consequences

July 19th, 2019
Jessica Taylor recently wrote a post objecting to what she describes as appeals to consequences. In general, an appeal to consequences is saying that "X would have bad consequences" means "X is false", but Jessica is using it in a broader way, to include the idea that "saying X would have bad consequences" means you should avoid saying X. Her motivating example is:
Carter: "So, this local charity, People Against Drowning Puppies (PADP), is nominally opposed to drowning puppies."

Quinn: "Of course."

Carter: "And they said they'd saved 2170 puppies last year, whereas their total spending was $1.2 million, so they estimate they save one puppy per $553."

Quinn: "Sounds about right."

Carter: "So, I actually checked with some of their former employees, and if what they say and my corresponding calculations are right, they actually only saved 138 puppies."

Quinn: "Hold it right there. Regardless of whether that's true, it's bad to say that."

Unfortunately, this is not a good example to build a post around, because Carter's statement actually has good consequences. Yes, it might lead to people donating less to this specific charity, and the charity still does some good with its money, but building a culture of caring about the actual effectiveness of organizations and truly trying to find/make the best ones is far more important than how much money any specific organization raises today. Plus if, say, ACE trusted this higher number of puppies saved and it had led them to recommend PADP as one of their top charities, that would mean displacing funds that could have gone to more effective animal charities. The whole Effective Altruism project is about trying to figure out how to get the biggest positive impact, and you clearly can't do this if discussing negative information about organizations is off limits.
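To make the stakes concrete, here is a quick back-of-the-envelope recalculation (a sketch in Python, using only the figures from Carter and Quinn's conversation above):

    # Figures from the dialogue above.
    spending = 1_200_000     # PADP's total annual spending, in dollars
    claimed_saves = 2170     # puppies PADP says it saved last year
    estimated_saves = 138    # Carter's estimate from talking to former employees

    print(f"claimed cost per puppy:   ${spending / claimed_saves:,.0f}")    # ~$553
    print(f"estimated cost per puppy: ${spending / estimated_saves:,.0f}")  # ~$8,696

An evaluator comparing charities on cost per puppy would rank PADP very differently depending on which of these figures they used, which is part of why getting the real number out matters.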

The problem with building something on top of a bad example is that intuitions push the wrong way and tend to mislead you. Here's an attempt at an example that I think makes the tradeoffs clearer. Imagine Carter as a researcher who has run a small study on a new infant vaccine and seen elevated autism rates in the experimental group. Because there's an existing "vaccines cause autism" meme that is both wrong and harmful, Carter needs to be careful about messaging. Here are some ways this could go:

  • Carter's experiment is replicated, confirmed, and the vaccine is abandoned.

  • Carter's experiment fails to replicate, researchers look into it more, and discover that there was a problem in the initial experiment / in the replication / they need more data / etc

  • There are headlines saying "scientists finally admit vaccines do cause autism", vaccination rates for unrelated vaccines fall, and people die from measles.

How do we leave open the possibility of the first two outcomes while avoiding the third? Because of the potential harmful consequences of handling this poorly, Carter should be careful about how they talk about their results and to whom. Trying to get funding to scale up the experiment, making the FDA aware of their preliminary findings, letting other researchers know, etc., are all beneficial and have good consequences. Going to the mainstream media with a controversial sell-lots-of-papers story, by contrast, would have predictably bad consequences.

At one end of the spectrum, I think you should talk freely among your friends and colleagues, without worrying about whether what you're saying would have bad consequences if mishandled, because you have enough shared context with them and it's critically important to have places where you don't have "what would the effects of sharing this be" dragging you down. At the other end, when talking to a larger audience or in a situation with less shared context there are topics where you need to be more careful. In between, I think unless you're very well known or talking about something explosive you can probably say whatever you want in a public post as long as you make it sufficiently verbose, boring, or informal.

Comment via: facebook, hacker news
