Charities … Don’t Evaluate Your Work
This two-part series by Caroline Fiennes in the Stanford Social Innovation Review, “Most Charities Shouldn’t Evaluate Their Work,” left me scratching my head.
The tantalizing headline drew me in.
Then I tried to absorb the basic message of her formula:
Impact = Idea × Implementation
At best, charities are capable of what Fiennes calls ‘monitoring’: counting what they do, the implementation part. E.g., “We fed school breakfasts to 20% more kids in 2012 than in 2011.”
But almost no charities actually have the skills for ‘evaluation’: establishing the actual impact or efficacy of their idea or intervention. E.g., “… and, as a result, those kids learned better.”
Says Fiennes:
“So not only are most charities unskilled at evaluations—and we wouldn’t want them to be—but also we wouldn’t want most charities to evaluate their own work even if they could. Despite their deep understanding of their work, charities are the worst people imaginable to evaluate it because they’re the protagonists. They’re selling. They’re conflicted …
I’m not saying that charities are corrupt or evil. It’s just unreasonable—possibly foolish—to expect that people can be impartial about their own work, salaries, or reputations. As a charity CEO, I’ve seen how “impact assessment” and fundraising are co-mingled: Charities are encouraged to parade their self-generated impact data in fundraising applications. No prizes for guessing what happens to self-generated impact data that isn’t flattering.”
Now, I’ve always been in the ‘show me the results’ school of nonprofit fundraising communications. Although I’m well aware of the fallacy of treating process measures as evidence of true impact, I’m generally happy if, say, Charity:Water tells me that the last X dollars it raised financed Y community wells in Africa providing safe drinking water for Z villagers.
Could another nonprofit do the same job more efficiently? I’m happy to hear their case.
Could a wholly different intervention more cost-effectively and substantially improve the lives of those villagers? I’m open to that case as well.
I accept, intellectually, Fiennes’ concern about academic rigor in evaluating charities and the interventions they employ. I suppose the Gates Foundation wants that rigor. And, by the way, her article provides a number of excellent evaluation resources across a broad range of social issues.
But in the real world of ‘mom & pop’ donor ‘evaluation’, I’m satisfied with the results reported by, in my illustration, Charity:Water.
It beats the hell out of ‘percent of funds raised going to overhead’.
Am I being too lenient?
Tom
That one thing is better than something utterly useless and dangerous doesn’t mean it’s any good!
Well, unfortunately, funders don’t want to pay for evaluation, so most nonprofits are forced to do what we can with what we have.
Something is better than nothing. Usually. However… just telling stories about people who received an intervention, while it works on an emotional level to keep donors giving, does not necessarily create real, lasting, transformational impact. For that, true evaluation and measurement are required. Can most charities do that on their own? No. Can they partner with universities and hospitals to help them do the evaluation? Yes. But only when someone steps up to fund it. With the amount of resources going into band-aid solutions, wouldn’t it be great if we could figure out a way to go a bit deeper and prevent the band-aid from being needed?