{"items": [{"author": "Haydn", "source_link": "https://www.facebook.com/jefftk/posts/884459798352?comment_id=884460666612", "anchor": "fb-884460666612", "service": "fb", "text": "You might be interested in: http://sethbaum.com/ac/fc_Reconciliation.html", "timestamp": "1499094325"}, {"author": "Randy", "source_link": "https://plus.google.com/102251509192760989541", "anchor": "gp-1499095881101", "service": "gp", "text": "You left out a fairly basic one that I think applies to a lot of different types of \"Allocate resources to low-probability preventing catastrophic risks?\" questions: What metrics to use for making tradeoffs.  Generally in these spaces there's agreement that X is very unlikely but would be very bad if it happened.  But there's very little agreement about precisely what those numbers (probabilities, costs) are, and trade off evaluations end up depending heavily on very small differences.  \n<br>\n<br>\nSo people end up making the tradeoff by gut,  with very little ability to make a rigorous argument to someone else with a different gut belief.  So you get different camps, with different opinions.\n<br>\n<br>\nWe spent a session in the CatRisk working group, having read several papers on this, working on frameworks for evaluating this type of risk, and got nowhere.  I'm not sure it's a solvable problem.", "timestamp": 1499095881}, {"author": "Alexander", "source_link": "https://www.facebook.com/jefftk/posts/884459798352?comment_id=884482293272", "anchor": "fb-884482293272", "service": "fb", "text": "Love this. You also seem to be the perfect candidate to do this.<br><br>A wild guess: the disagreement has not as much to do with the arguments and evidence pertaining the topic itself but rather that the different camps tend to use different decision making approaches. <br><br>Very roughly: the people who believe into superintelligent risks are in the \"shut up and multiply\" camp whereas the people who are skeptical are in the \"I'd sooner question my grasp of 'rationality' than to take this seriously\" camp.", "timestamp": "1499099800"}, {"author": "Michael", "source_link": "https://www.facebook.com/jefftk/posts/884459798352?comment_id=884482293272&reply_comment_id=884483321212", "anchor": "fb-884482293272_884483321212", "service": "fb", "text": "&rarr;&nbsp;If I'm understanding correctly, \"shut up and multiply\" means multiplying the low probability of catastrophic risk and the high utility impact of a catastrophe? If so, then, yes, I think this debate has made me rethink the foundations of rationality. Not in a motivated reasoning sort of way, though---it's not like I've decided on the conclusion and am changing my rules of reason to support that conclusion. 
But, I do think there's something very problematic about multiplying arbitrarily large and arbitrarily small numbers together and then basing your behavior on what results.", "timestamp": "1499100251"}, {"author": "Alexander", "source_link": "https://www.facebook.com/jefftk/posts/884459798352?comment_id=884482293272&reply_comment_id=884486365112", "anchor": "fb-884482293272_884486365112", "service": "fb", "text": "&rarr;&nbsp;Michael The major conclusions I drew from years of exposure to ideas floating around within the rationality and effective altruism community is that (1) their philosophically foundations are very likely broken[1] and (2) even if they are not broken, wholly embracing their idea of rationality is very unhealthy for many human beings.<br><br>\"If I'm understanding correctly, \"shut up and multiply\" means multiplying the low probability of catastrophic risk and the high utility impact of a catastrophe?\"<br><br>Low probabilities are not the major problem here.[2] Asteroid impacts have a low probability and high utility impact. Whereas many advocates would claim that superintelligence risk is not low probability. But the major difference is that the former is based on empirical evidence while the latter is mostly just armchair theorizing. <br><br>If you weight these risks by their probability and expected impact, you might conclude superintelligence to be the most important risk to worry about. But that's like claiming that the statements \"there is a 50% probability of aliens existing within our light cone\" and \"there is a 50% probability of a fair coin coming up heads\" are the same. They are not and you have to account for that. <br><br>After hundreds of studies and meta-studies we can't even decide whether Vitamin D supplementation is worthwhile.[3] Yet somehow people believe that they can decide that giving money to mitigate superintelligence risks is maximizing \"doing good\"?<br><br>Humans are spectacularly bad at making such inferences, even when given a huge amount of empirical evidence. Which means that even if it is theoretically correct to \"shut up and multiply,\" it is practically irrational. <br><br>[1] http://lesswrong.com/.../pascals_muggle.../8xhl<br>[2] http://kruel.co/.../04/pascals-wager-better-safe-than-sorry/<br>[3] http://slatestarcodex.com/.../beware-mass-produced.../", "timestamp": "1499101591"}, {"author": "Paul", "source_link": "https://www.facebook.com/jefftk/posts/884459798352?comment_id=884482293272&reply_comment_id=884492991832", "anchor": "fb-884482293272_884492991832", "service": "fb", "text": "&rarr;&nbsp;Multiple times, Eliezer and others in this area have very explicitly stated that their argument is not one of multiplying low probability by high utility impact. 
It's sad that this misunderstanding persists.", "timestamp": "1499104215"}, {"author": "Alexander", "source_link": "https://www.facebook.com/jefftk/posts/884459798352?comment_id=884482293272&reply_comment_id=884493086642", "anchor": "fb-884482293272_884493086642", "service": "fb", "text": "&rarr;&nbsp;Paul That's what I wrote, \"advocates would claim that superintelligence risk is not low probability.\" And it is irrelevant.", "timestamp": "1499104282"}, {"author": "Alexander", "source_link": "https://www.facebook.com/jefftk/posts/884459798352?comment_id=884482293272&reply_comment_id=884508570612", "anchor": "fb-884482293272_884508570612", "service": "fb", "text": "&rarr;&nbsp;What I meant to say is that some people believe that the foundations of rationality are not resilient enough to work well on what they consider to be edge-cases. This camp will rather doubt their grasp of rationality than to take certain conclusions seriously. The other camp will bite the bullet. Each camp thinks that the other camp is \"irrational.\" And this is probably the real underlying disagreement.", "timestamp": "1499108667"}, {"author": "Nick", "source_link": "https://plus.google.com/106589318875299120663", "anchor": "gp-1499129325076", "service": "gp", "text": "http://modelingtheworld.benjaminrosshoffman.com/my-new-project-model-the-world", "timestamp": 1499129325}, {"author": "Abe", "source_link": "https://www.facebook.com/jefftk/posts/884459798352?comment_id=885882427392", "anchor": "fb-885882427392", "service": "fb", "text": "I heard Cari Tuna speak a few month ago &amp; of the 3 things her Charity works on- 1 was the Risk of Super Intelligence. It was the first I heard this outside of sci-fi books. U might know her through Give Well &amp; Good ventures.", "timestamp": "1499581469"}]}