Blue Coffee Mug: $1000
Red Coffee Mug: $1
Which would you rather buy?
I think nearly every person, if forced to choose, would prefer to buy the red coffee mug. Even if blue is your favorite color and blue coffee mugs are so much more stylish than red ones, the 1000-fold difference in price just doesn't seem worth it, and red wins in a landslide. Now why does this matter? Because, I'd argue, we make the same kind of decision when we donate, except most people make little effort to figure out which coffee mug is which.
In my previous essay "Giving is Hard, But There's Help", I suggested that people look for specific signs of effectiveness in the non-profit organizations they want to fund, noting that donating to one organization over another could make a difference by a factor as high as 1000. Put more simply, donating $1 to the Red Foundation could accomplish as much as donating $1000 to the Blue Initiative.
So, as I think the saying goes, "donate smarter, not harder". Or, as I wish the saying went: "Donate both smarter and harder, because there are big problems out there and we need lots of money going to the most effective places". But how do we know which organization is the Red Foundation and which is the Blue Initiative in real life? Surely no mailing arrives with the tagline "This is the one you should fund, because it's 1000 times more effective". And even if it did, you'd probably want to know why you should trust it.
Here at Denison VPC we do as much work as we can to identify outstanding non-profits at the local Licking County level and figure out who here is effective at what they do, or rather, where we can do the most good by growing an organization through capacity building. But you, the individual donor, are probably not spending half a year on capacity building after you donate, probably aren't a group of twenty dedicated students, and probably aren't limiting your work to Licking County. So I wanted to help you out: how do you, as an individual donor with limited resources, figure out which non-profits are effective?
Ask The Not-So-Tough Questions
A good way to start figuring out the effectiveness of a non-profit is to ask them directly or indirectly. Simply go to their website, or contact an organization representative and ask the following two questions:
(1) What do you do with donors’ money?
(2) What evidence do you have that your activities help people?
As far as I can tell, these questions are fair and not loaded. They're questions any person would want answered to make sure their donation is not squandered, and far less than you would ask of a company in which you intended to invest. Yet these questions are rarely answered with any rigor.
The surprising failure of most non-profit organizations — including the most celebrated ones — to answer these questions (or even be willing to answer them) is what led Holden Karnofsky and Elie Hassenfeld to leave their hedge fund jobs and work full time to create GiveWell, a non-profit evaluation organization that I've mentioned previously.
Getting An Answer
First, before thinking about founding GiveWell at all, when they were simply deciding where to donate some of their hedge fund earnings, they asked these questions of a few dozen organizations. A small handful gave worthwhile answers, but from the vast majority the results were confusion (organizations not understanding why these questions would be relevant, or pointing to information that was overly vague), surprise (admitting that the questions were unusual, thoughtful, and difficult, and not having the information on hand), or hostility (finding it implausible that anyone would want this kind of information before donating, and asserting it was confidential).
This led Karnofsky and Hassenfeld to conclude not only that these questions were not being effectively answered, but that as long as they went unasked by the vast majority of individual donors, non-profits would have no incentive to answer them. Thus, a non-profit that is at least prepared to answer these questions in detail is one worth looking into further.
And Getting a Good One
It's important to note that when asking these questions, Karnofsky and Hassenfeld were looking for detailed information: what projects would be funded with future money, and specific evaluations of those programs that went beyond stories or rough statistics cited without explaining how they were calculated. GiveWell has a decent-sized manifesto on what kind of evidence it takes to demonstrate impact, but the answer pretty much boils down to systematic and representative data collection.
Holden Karnofsky says he is not against stories, but the problem with them, as he puts it, is that "charities share a small number of stories without being clear about how these stories were selected, which implies to me that charities select the best and most favorable stories from among the many stories they could be telling".
Holden further elaborates:
In most areas of charity, we feel that people overfocus on “did it happen?” relative to “did it work?” People often worry about charities’ stealing their money, swallowing it up in overhead, etc., while assuming that if the charity ultimately uses the funds as it says it will, the result will be good. Yet improving lives is more complicated than charities generally make it sound[. ...]
‘Did it happen?’ is a question that can largely be answered by informal, qualitative spot-checks. That’s why we would like to see more and better qualitative evidence. By contrast, to know whether a program worked, you need to somehow compare what happened to clients with what would have happened without the program – something that is often hard to have confidence in without formal outcomes tracking and evaluation.
Randomized Controlled Trials
So where do we get this better evidence, collected in systematic and representative ways? The best answer so far is randomized controlled trials: social science studies in which you take a random selection of places, perform an intervention there, and leave the non-selected places alone to serve as a control. In "Getting Smart on Aid", Nicholas Kristof writes about this approach:
Now we reach a central question for our age: How can we most effectively break cycles of poverty? For decades, we had answers that were mostly anecdotal or hot air. But, increasingly, we are now seeing economists provide answers that are rigorously field-tested, akin to the way drugs are tested in randomized controlled trials, yielding results that are particularly credible and persuasive.
Prof. Michael Kremer, a Harvard economist, helped pioneer randomized trials in antipoverty work. In the 1990s, Kremer began studying how to improve education in Africa, trying different approaches in randomly selected batches of schools.
One intervention he tried was deworming kids — and bingo! In much of the developing world, most kids have intestinal worms, leaving them sick, anemic and more likely to miss school. Deworming is very cheap (a pill costing a few pennies), and, in the experiment he did with Edward Miguel, it resulted in 25 percent less absenteeism. Even years later, the kids who had been randomly chosen to be dewormed were earning more money than other kids.
Kremer estimates that the cost of keeping a kid in school for an additional year by building schools or by subsidizing school uniforms is more than $100, while by deworming kids, the cost drops to $3.50. (In a pinch, kids can usually go to “school” in a church or mosque without a uniform.)
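The logic of random assignment that Kristof describes can be sketched in a few lines of Python. This is a toy simulation with made-up numbers; the school count, absenteeism rates, and the assumed 25% effect are illustrative placeholders, not the Miguel-Kremer data:

```python
import random

random.seed(0)

# Toy simulation of a randomized controlled trial. All numbers here are
# made up for illustration; they are NOT the actual Miguel-Kremer data.
# We assume deworming cuts a school's absenteeism rate by 25%.
schools = [{"absenteeism": random.uniform(0.20, 0.40)} for _ in range(100)]

# Random assignment is the key step: it makes the treatment and control
# groups comparable on average, so a difference in outcomes can credibly
# be attributed to the intervention rather than to pre-existing differences.
random.shuffle(schools)
treatment, control = schools[:50], schools[50:]

def mean(values):
    values = list(values)
    return sum(values) / len(values)

control_rate = mean(s["absenteeism"] for s in control)             # untreated
treatment_rate = mean(s["absenteeism"] * 0.75 for s in treatment)  # dewormed

estimated_effect = 1 - treatment_rate / control_rate
print(f"control absenteeism:   {control_rate:.3f}")
print(f"treatment absenteeism: {treatment_rate:.3f}")
print(f"estimated reduction:   {estimated_effect:.0%}")
```

Because the groups are formed at random, the estimated reduction comes out close to the true 25% effect; comparing treated schools against schools that chose to participate, by contrast, could bake in all sorts of pre-existing differences.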
While reality is probably nowhere near this simple, we can simplify a bit anyway; this study lets us see the dilemma in the light of the coffee mugs from earlier:
Increased school attendance (deworming): $3.50
Increased school attendance (building schools or subsidizing uniforms): $100.00
Which would you rather fund? Luckily, it’s the randomized controlled trials that are helping us get closer to these numbers. While one study still isn’t enough to say that these numbers are concrete, they’re enough to give us some confidence that we’re heading in the right direction, and they lead us to consider possible solutions that we might never have thought of because they sounded too implausible.
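To make the comparison concrete, here is a back-of-the-envelope sketch using the two costs quoted above. The $1,000 budget is a hypothetical figure of my own, and Kremer's per-year costs are rough estimates, not precise program accounting:

```python
# Cost per additional year of school attendance, from the figures
# quoted in Kristof's article (rough estimates, not precise costs).
cost_per_school_year = {
    "deworming": 3.50,
    "building schools / subsidizing uniforms": 100.00,
}

budget = 1000.00  # hypothetical donation

for program, cost in cost_per_school_year.items():
    extra_years = budget / cost
    print(f"${budget:,.0f} to {program}: ~{extra_years:.0f} extra school-years")

# Same budget, roughly 29x the outcome from the cheaper program.
ratio = (cost_per_school_year["building schools / subsidizing uniforms"]
         / cost_per_school_year["deworming"])
print(f"cost-effectiveness ratio: ~{ratio:.0f}x")
```

It's not quite the 1000-fold mug gap, but a near-30x difference in outcomes per dollar is still an enormous reason to look at the price tags before donating.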
Why Aren’t There More Studies?
So if randomized controlled studies are the hot new thing to determine if a non-profit is effective, why aren’t there a lot more of them? Why are non-profits currently so ill-equipped to provide any good evidence of their effectiveness, let alone a randomized controlled trial hot off the presses?
The answer is that randomized controlled trials are themselves expensive; sometimes an evaluation costs as much as the program being evaluated! And it turns out that funders just aren't willing to pay that kind of money. As Holden Karnofsky writes:
Children’s Aid Society has explicitly told us they’re concerned about funders’ reactions to the amount of their budget that is going to evaluation, “as opposed to” helping people – and that they’ve been unable to execute a major community school evaluation they’ve mapped out because it exceeds the “evaluation budget” designated in their grants. We asked New Visions for Public Schools why they don’t seem to have had the same problem, and they told us that they have, but that they’ve made a priority of fighting for larger evaluation budgets from their own funders. Over and over again, when we ask charities why they haven’t measured things that seem measurable, they’ve responded that the people who fund them don’t want it: many of their funds are often officially earmarked for non-evaluation purposes, and even when they’re not, they’re concerned about donors’ wishes to “get the money right to the people who need it.”
You and I are lucky that the people who build our bridges, cars, and airplanes don’t decide they can get more money directly to us if they cut their testing budget. The people served by nonprofits should be so lucky.
It's again important to note that just because an organization hasn't yet been proven effective doesn't mean it is ineffective. And even an unproven organization isn't necessarily worthless. But when it comes to donating my hard-earned money, no hard feelings: I want to donate to the sure bets rather than to the organizations that keep me guessing.
Unfortunately for us, there aren’t nearly as many sure bets as we would like, but there are a few. Randomized controlled trials have found some quite promising opportunities in the developing world — such as the aforementioned deworming, distribution of condoms for HIV prevention, or insecticide-treated bed nets for malaria.
We may not yet know where the red coffee mug deals are in aid, but recent developments in evaluation make our bets surer. And when people's lives are on the line, I think the surer bet is one we should really consider taking. When looking for it, we should get our hands on as much systematic and representative data as we can, and I invite you to do the same.
Author's Note: This essay was originally posted on the new Denison Venture Philanthropy Club Blog, a blog dedicated to discussing articles and ideas related to philanthropy and social change. The blog updates Mondays, Wednesdays, and Fridays, and I write on Fridays. I'll be posting the Friday article here on Mondays when I think it's good enough to deserve a larger audience.
I now blog at EverydayUtilitarian.com. I hope you'll join me at my new blog! This page has been left as an archive.