This morning I posted Concerns with Intentional Insights to the EA Forum. It represents the work of a lot of people, and I wanted to give some details on how it was created.
Two months ago I wrote a post, Conversation with Gleb of Intentional Insights, which started a large (500+ comment) conversation. While my post focused on a single interaction with him, in the discussion people brought up many other potential instances of dubious behavior. Effectively, we were crowdsourcing what became today's forum post. Gleb participated in the discussion, showing that some concerns were unfounded but also further demonstrating the qualities I'm worried about.
Recognizing that a comment thread, especially one with a big mix of topics, is not a good venue for communicating concerns to busy people, I started to collect the evidence from the thread into a document. Other people helped, some anonymous, some listed at the end of the Concerns doc, and by the end of August it was nearly complete.
Getting it the rest of the way from "nearly complete" to "complete", however, took a really long time: it turns out EAs can be perfectionists, and everyone working on the doc was doing it in their spare time. At this point I'd kind of lost interest, so Carl and Gregory were doing most of the work.
I had initially hoped to create both a listing of concerns and an "open letter" that summarized the problem and recommended a course of action to the community, but we weren't able to agree on what that should be. Instead we decided we would just post the evidence doc to the EA forum, and let the community go from there.
On September 16, Carl sent the draft of the evidence doc to Gleb for comments. Gleb had several, and we updated the doc based on his concerns where we thought that was warranted. On October 15, I converted the doc to HTML to paste into the EA forum. This included collecting the images and mass-resizing them (though each one links to its original full size) and converting the formatting to HTML, which was mostly a manual process. It is possible to get HTML out of docs, but it produces really messy code that I didn't think would play well with the EA forum.
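The mass-resizing part at least is easy to script. Here's a minimal sketch, assuming ImageMagick is installed; the 550px bound is an arbitrary choice for illustration, not the width I actually used:

    # Keep the full-size originals around to link to, then
    # shrink the copies in place.  The geometry '550x550>'
    # means "only resize images larger than 550px in either
    # dimension", preserving aspect ratio.
    mkdir -p full-size
    cp ./*.png full-size/
    mogrify -resize '550x550>' ./*.png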
After this Carl had a bit more to add, which I ported over to the HTML doc, and then we sent the draft to Gleb for a final round of comments. Gleb was travelling and unable to comment until the 22nd, at which point Carl updated the doc in response.
Here's what the ~2000 edits on the doc look like, with days-since-start on the x-axis:
At this time we are recommending that, in situations where some participants have only partial support for singular they, people continue to use gendered pronouns for people with traditional binary gender unless there are strong reasons not to.
We may attempt to backport singular they to earlier versions of English, but pronouns are a very low-level system. We are concerned we may not be able to safely make this change, and will only be able to offer an upgrade route. If you are interested in assisting us with testing a backport, however, please let us know.
We will continue to monitor the deployment of versions supporting singular they, and expect to issue another deprecation notice with a firm date when they-support is sufficiently widespread.
My office does a donation matching thing each year, where employees get together in groups to sponsor charities, offering to match donations. There are signs up around the office with phrases like "double your impact", but that's misleading because most of the sponsors would donate regardless. I'm torn about whether I should sign up as a sponsor: on one hand I want to raise money for great organizations, but on the other hand I don't want to contribute to a culture of donor illusions.
Offers to sponsor specific charities in our culture are almost never counterfactually valid: the sponsor will send the same amount to the charity whether you give or not. But they're often presented in a way that incorrectly implies you can count the impact of the sponsor's money as if it were your own. This makes me sad, and is something I would like effective altruism to fix eventually. But where does this leave us in the meantime?
I'm conflicted about what organizations should do, but I think for individuals it's generally fine to participate in existing donation matching programs. It's a way to say "this is something we care about and are willing to use our resources to support; will you join us?" Most people don't care about "counterfactual validity"; they're just excited about turning the conventionally solitary activity of donating into a social one.
(If someone asks "would this money get donated otherwise", however, we should be honest and take the opportunity to talk about counterfactual impact.)
GiveWell decided not to use donation matching to raise money for their top charities because it felt dishonest, and I think that was a reasonable decision for them. Pushing the message that you should give because your donation will be matched would be inconsistent with the rest of what they stand for. Other EA and EA-aligned organizations have run matching campaigns, however, because offering matching funds does bring in more donations.
People don't usually volunteer details about why they decided to do something, how they did it, or how it turned out, unless they have another goal in mind. You see people and organizations writing about cases where they've done better than expected, in the hope others will think better of them. You see writing that explains already-public cases of failure, casting it in a more positive light. You see people writing in the hope they'll be seen as an expert, to build up a reputation. Additionally, while most real decisions are made in people's heads as the output of a complicated process no one really understands, if you look at decision writeups you'll typically see something easy to follow that only contains respectable considerations and generally reflects well on the ones publishing it. If you're trying to learn from others, or evaluate them, this isn't much to go on.
Efforts to change this generally go under the banner of "transparency", and it's one of the values of the effective altruism (EA) movement, especially for EA organizations. GiveWell is especially known for putting this into practice, but basically all EA organizations value transparency and prioritize it to some extent. Individual EAs do this as well; for example, as someone earning to give I keep a donations page and have posted several spending updates.
This puts the members of the EA movement in a position as consumers of transparency: people and organizations are releasing information because it benefits the broader community. This is information they could easily keep to themselves, since as a practical matter everything is private by default and requires effort to make available. Writing a report requires taking an enormous amount of detail and deciding what to communicate, which makes it very easy, through selective inclusion, to hide mistakes and present yourself or your organization in an artificially positive light. And not even intentionally! It's very easy to subconsciously shy away from writing things that might make you look bad, or might reflect badly on people you generally think highly of.
There are many ways in which bash is an awkward language, and handling of arguments is certainly one of them. Here are two things you might like to do in a shell:
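For example (the specifics here are illustrative, not an exhaustive list), you might want to forward all of a script's arguments to another command, or keep a list of filenames that might contain spaces. Neither works the way you'd naively expect:

    # Forwarding arguments: "$@" in quotes preserves each
    # argument as a separate word, even ones with spaces.
    # Unquoted $@ or $* would re-split them on whitespace.
    wrapper() {
        some-command "$@"   # some-command is a stand-in
    }

    # A list of filenames: use an array, and expand it as
    # "${files[@]}" so names containing spaces stay intact.
    files=("a file.txt" "another file.txt")
    rm -- "${files[@]}"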