|May 29th, 2012|
|metacharities, giving [html]|
Charity Navigator's financial-only rating system took simple inputs, relying on data from charities' publicly available IRS Form 990s. This let them quickly become very comprehensive, with review pages going up as fast as they could get the forms into their system. It won them the number one spot in our heads for "charity evaluator," but at a big cost: they had to claim that reviews based on the small amount of financial information US charities are required to disclose were sufficient, and they set a standard of having recommendations for most US charities.
They've since expanded their rating system to include other information from the Form 990 data as well as a quick review of each charity's website, which they call "accountability and transparency," and they have plans to start evaluating charities based on their results, where "the results dimension will count for the largest portion of the total rating score once it goes live".
The problem is that a real review of a charity is a lot of work. You need to spend a lot of time talking to them: gathering data, evaluating it, going back for missing pieces, and presenting your findings clearly enough to be widely understood. This is hard. There's no way Charity Navigator can do this level of evaluation across the board unless they either become enormous or charities become convinced to post high-quality data publicly with little prompting. And most charities don't even have this data, so do you just rate them badly?
Charity Navigator is in a tight spot: their brand is based on being big and comprehensive, but doing charity evaluation right is too resource-intensive for that scale. I don't know how to fix this, but it has to involve more than just evaluating Form 990s and quick looks over websites.
 I think it's great that they publish their process details publicly.
 I wrote to them asking about the status of results evaluation, but they didn't get back to me.
 GiveWell used to give "0 stars" to charities that couldn't demonstrate high levels of impact with good evidence. In some ways this made sense: if you don't know how well what you're doing is working, you're probably not doing very well. But mostly it annoyed charities to little gain, so now they just list charities they looked at that don't meet their standards under "all charities considered".