TEST RESULTS: Donors Care About Their Impact, Not Your Overhead

August 20, 2018      Kevin Schulman, Founder, DonorVoice and DVCanvass

A significant factor in a donor’s decision to give rests in how he or she answers the question, “How am I going to feel if I make this gift?”  So, the job of the fundraiser is to determine how the factors under an organization’s control can be most effectively presented.

One major set of issues involves “overhead,” “impact,” and “control.”  In this week-long series on how best to frame these issues, Nick Ellinger draws on the existing body of research plus new findings revealed by a study of these factors recently conducted by DonorVoice and the DMA Nonprofit Federation.  (You can view a full presentation on the DV/DMANF study here.)

We’ll label each day’s post with the heading TEST RESULTS, followed by the subject of that day’s test.

We begin the week with the following post on the importance to the donor of “impact” vs. “overhead.”

Roger

————————————————————————————————————————————-

Overhead rates are worse than merely a below-average way of evaluating charities.  If they even rose to the standard of “mediocre,” we could stand them being part of the mix as people evaluate nonprofits.

[Image credit: DNA Creative Communications]

In reality, they are an actively negative way of evaluating nonprofits; to a point, the nonprofits that do the “worst” on overhead are actually better and more effective organizations.  They avoid the starvation cycle outlined in Gregory and Howard’s article and are investing enough to build the infrastructure of a vital, vibrant organization.

The problem stems from sloth: the overhead rate of a nonprofit is easy to measure.  It’s comforting to have one easy number that tells you how good something is, even if the metric is built on a foundation of absolute horse puckey.

Thankfully, people will look at actual effectiveness over overhead ratios if given the choice between the two.  But if people are given only “overhead” by which to gauge charities, they will crawl toward this mirage and, when there is no oasis there, try to drink the sand.

So it’s vital we give folks a way to measure the nonprofits that work and are truly effective.  That’s why DonorVoice and the DMA Nonprofit Federation worked together to use DonorVoice’s Pre-test Tool to assess what can be done to get donors to look away from overhead and toward more meritorious measures (read: almost anything else).  If you’d like to see the DMA’s free webinar on the topic, it’s here.

First, a bit about the pre-test tool, which you can read more about here.  We create a grid of factors we want to assess (in this case: 1) trust indicators, 2) who your gift helps, 3) how overhead is presented, 4) donor control, and 5) donor identity) and several treatments for each variable.

Then we create simulated communications using five different versions of each variable and ask donors which they prefer, much like your eye doctor asking whether you see the eye chart more clearly with lens A or with lens B.  From that data, you can determine not only which version of each variable works best, but which variable is most important to get right.
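
To make the mechanics concrete, here is a minimal sketch (in Python) of how such a grid-and-preference exercise could be scored.  This is not DonorVoice’s actual tool: the variable labels, the simulated donor, and the win-count scoring are all illustrative assumptions; only the five-by-five grid and the pairwise “which do you prefer?” choice come from the description above.

    # Minimal sketch of a pre-test-style grid; all labels are hypothetical.
    import itertools
    import random
    from collections import defaultdict

    # Grid: 5 variables x 5 candidate versions each (names are placeholders).
    grid = {
        "trust_indicators": [f"trust_v{i}" for i in range(5)],
        "who_gift_helps":   [f"helps_v{i}" for i in range(5)],
        "overhead_framing": [f"overhead_v{i}" for i in range(5)],
        "donor_control":    [f"control_v{i}" for i in range(5)],
        "donor_identity":   [f"identity_v{i}" for i in range(5)],
    }

    # Full factorial space: 5**5 = 3,125 distinct simulated communications.
    all_comms = list(itertools.product(*grid.values()))

    # Stand-in for real respondents: a hidden "true" appeal score per version.
    true_appeal = {v: random.random() for versions in grid.values() for v in versions}

    def preferred(comm_a, comm_b):
        """Simulated donor choice between two communications (lens A vs. lens B)."""
        score = lambda comm: sum(true_appeal[v] for v in comm)
        return comm_a if score(comm_a) > score(comm_b) else comm_b

    # Show a series of random pairs and credit every version in the winner.
    wins = defaultdict(int)
    for _ in range(5000):
        a, b = random.sample(all_comms, 2)
        for version in preferred(a, b):
            wins[version] += 1

    # Winner per variable; the win-count spread is a crude importance proxy.
    for variable, versions in grid.items():
        best = max(versions, key=lambda v: wins[v])
        spread = max(wins[v] for v in versions) - min(wins[v] for v in versions)
        print(f"{variable}: winner={best}, importance~{spread}")

The point of the design is the last loop: a single pool of pairwise choices yields both a winning version for each variable and a rough ranking of which variable matters most.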

This process also has the advantage of testing thousands of variations at once.  With five variables and five versions of each (3,125 possible combinations in all), we were able to find winners for each variable very quickly.  Using traditional testing methods, as Roger noted here, it would take 20 A/B tests per year for 125 years to get similar results.

I don’t know about you, but I don’t have that kind of time.

Part of the challenge of traditional A/B testing, then, is the pressure to show results.  As a result, the traditional approach is to test incrementally (red vs. blue envelope, teaser copy, a line here and there) in ways that lead nowhere in the long run.  As you’ll see, we were able to test some incremental concepts, but also some breakthrough concepts that, if implemented, have vast implications for how to raise funds.

The Test

For this test, we created a fictional cancer charity so those taking the test would have no predispositions about the brand.  (And we chose cancer because it is sufficiently widespread that you will get a mix of people who have or have had the disease, those who know someone who has, and those who have no personal experience.)   The sample was over 400 donors to other nonprofits.

Over this week of posts, we’ll go through one variable a day, showing what worked, what didn’t, and why.  We’ll start with donor identity today, in part because the results will be so unsurprising to frequent Agitator readers.

The lessons learned were:

  • Donors preferred an identity statement that matched their own experience. If you personally know what it is like to have cancer, or to care for someone who has it, that statement spoke to you and you liked copy that reflected it.
  • Getting an identity wrong was worse than not having an identity statement at all. For example, the statement “you haven’t experienced cancer in your life but you can imagine what it is like for those who have” polled worse than leaving this section blank.  This is because most people in the sample had had a personal experience with cancer, whether direct or indirect, so this inaccurate matching hurt results.
  • Overall results masked these differences by identity. That is, if you look at the overall results, it looks as if identity hardly mattered.  But when you looked just at people who held an identity (e.g., a direct connection), getting that identity right mattered very much.  So too may it be with your results: a test communication could look to have the same results as a control overall, but have a substantial positive impact with one identity and a negative one with another.  That’s why it’s important to test different messages with different identities (the toy simulation after this list shows how a topline tie can hide offsetting segment effects).
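
Here is a toy illustration of that masking effect.  The segment shares and response rates below are invented for arithmetic convenience; the point is only that offsetting segment-level effects can produce a topline tie.

    # Invented numbers: a test message that wins with one identity and
    # loses with the other can look like a wash in the topline.
    segments = {
        #  segment            (share, control_rate, test_rate)
        "direct_connection": (0.55, 0.050, 0.065),  # test wins here
        "no_connection":     (0.45, 0.050, 0.032),  # test loses here
    }

    def overall_rate(arm_index):
        """Blend segment rates by segment share to get the topline rate."""
        return sum(share * rates[arm_index] for share, *rates in segments.values())

    print(f"control overall: {overall_rate(0):.4f}")  # 0.0500
    print(f"test overall:    {overall_rate(1):.4f}")  # ~0.0502 -- a topline "tie"
    # ...that hides a +1.5-point lift in one identity and a
    # -1.8-point drop in the other.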

All in all, these data support what we’ve seen in the research literature – people value their identity over effectiveness information.  As one researcher commented, “they [donors] care about it [effectiveness], but not enough to sacrifice their own personal preferences when choosing a cause to support.”

We’ve thrown some shade at A/B testing here, but we should say this is where A/B testing can come in handy: once you have a reason to believe.  That is, these pre-tests allow you to create a meaningful hypothesis about what will happen and why.  From there, you can put a test into practice: how much better will your letter/email/phone call work when you match your language to the donor’s identity?
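
If it helps, here is a minimal sketch of that confirmation step: a plain two-proportion z-test on a hypothetical head-to-head mailing.  The response counts are invented; nothing here comes from the study itself.

    # Hypothetical A/B test: generic copy (A) vs. identity-matched copy (B).
    from math import sqrt, erf

    def two_prop_z(conv_a, n_a, conv_b, n_b):
        """z statistic and two-sided p-value for a difference in response rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # Invented counts: 250/5,000 responses for A, 310/5,000 for B.
    z, p = two_prop_z(250, 5000, 310, 5000)
    print(f"z = {z:.2f}, p = {p:.3f}")  # z ~ 2.61, p ~ 0.009: the lift is real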

Tune in tomorrow when we argue donors don’t actually care how you spend your money – you won’t want to miss it.

Nick

4 responses to “TEST RESULTS: Donors Care About Their Impact, Not Your Overhead”

  1. OMG, I’m delighted. Huzzah to DonorVoice and the DMA Nonprofit Federation for developing a tool to assess what can be done to get donors to look away from overhead and toward more meritorious measures! Nonprofits have too long been colluding in promoting this truly destructive measure of so-called effectiveness. I’m looking forward to digging into all this material, and think it would make a great series of board development sessions/discussions.

  2. I really hope that while on the topic of overhead you’ll touch upon the trend where organizations now ask donors to increase their gift by 3% to cover the cost of credit card fees. We get lots of orgs asking for this feature without having tested what it’ll do to their conversion rates when it gets their donors thinking about the overhead of accepting credit cards rather than the impact of their gift…

  3. When I worked at a national child sponsorship organization, where most of the public did not discern any real program differences among such organizations, overhead was a critical issue in donor choice. Of course, the charity rating agencies played it up, and as a result each organization worked hard to make sure its overhead appeared to be the lowest. We lived and died by our overhead rate and by trying to explain why ours was higher (including to our own board and international colleagues).
    This came into play again in the Combined Federal Campaign and state employee charitable appeals, which listed overhead; lower rates were used as a weapon against smaller organizations or “competitors.”
    But what I discovered in 22 years of working with small- to medium-sized organizations in local or regional markets is that the overhead debate is pretty nonexistent. Most boards never ask about or monitor their rate, donors almost never do, and the only time it arose was when those organizations were part of a government payroll campaign. I see 990s all the time where the accountant doesn’t do much of anything to attribute common costs across functional categories (e.g., rent is booked as admin regardless of where it should have been allocated).
    The one recurring and infuriating place where the issue of overhead pops up is among state elected or appointed officials and the cost of engaging fundraising solicitors. Our AFP and ACLU chapters are in a never-ending battle to stop state legislation that tries to mandate up-front disclosure of “how much of your gift goes to the solicitor.” That this language has been ruled unconstitutional doesn’t matter to the state legislators. Nor does it matter that most ethical telemarketers can’t even answer that question, since they are not paid a percentage of gifts raised; that prospecting comes with costs, but what matters over the long run is how much has been raised for programs; or that an organization’s overhead should not be measured on the basis of one fundraising project. This is where the focus of the education needs to be, IMHO: on the state legislators.

  4. Thanks, Claire! Michelle, we’ll touch on it tomorrow, but the sneak preview is that asking the donor to cover overhead is a statistical tie with doing nothing. In theory, that means you should be able to get it in there without increasing or decreasing conversion rates.

    In practice, though, each organization is different. Gayle, to your point, not only are different organizations sensitized to overhead differently, but so too are different donor bases. Organizations that focus on their low overhead and ratings are very susceptible to fluctuations in overhead treatments, just as most donors won’t care whether it’s a 2X or 3X match, except at an organization that relies on matches.

    So while these findings provide general guidance, we’d recommend everyone test (or pre-test) these treatments themselves, as results vary by donor base.