Get accurate results from your donor surveys

June 1, 2017      Kevin Schulman, Founder, DonorVoice and DVCanvass

I’m a fan of the Freakonomics books and podcasts.  While they occasionally feature some howlingly bad science (e.g., most solar panels are not black, and the assumptions behind their oft-quoted contention that drunk walking is more dangerous than drunk driving are profoundly odd), their contrary points of view often cause me to reassess.  That’s a useful exercise even if you end up right back where you started.

Recently, they’ve done two podcasts that talk about how surveys can be misleading – “Are the Rich Really Less Generous Than the Poor?” and one about how we type things into Google that we would never say in public, which I won’t name as it may make this post NSFW.

But I would say this doesn’t mean that donor surveys are bad.  It just means that bad donor surveys are bad.  So how do you craft a donor survey that gets you the results you need?  We’ve covered this in another good post here, but let’s start by looking at the reasons, drawn from those podcasts, that people mislead in their survey responses:

The social desirability effect.  People say what they want other people to believe in surveys (in fact, they also say what they want to believe about themselves).  This is certainly true of donations – almost two-thirds of people overreport their giving on mail surveys.  This is due largely to self-deception and the degree of benefit people will get from overreporting.  (In other words, if you think people are bad on mail surveys, wait until you see their tax returns).

The experimental demand effect. When people know they are part of an experiment, they may behave differently from how they would in real life.  Specifically, they will try to do what they believe to be appropriate behavior.  Even a picture of human eyes can increase donations or, in a lab setting, the likelihood of reported giving. In short, we aim to please researchers (whether they are there or not).  In fact, even researchers aim to please other researchers, funders, or journal editors, consciously or subconsciously, which is why double-blind studies – in which neither the researcher nor the participant knows who is getting which treatment – are the gold standard for research.

Selection bias.  This happens when surveyors don’t get a random sample from the population they are trying to study.  For academic studies, this is often because they are done on undergraduates, who tend to be WEIRD (Western, educated, industrialized, rich, and democratic) and cluster in similar economic and demographic zones.  One thing we are seeing in some more recent studies is that some things that were considered holy writ (e.g., you need to have sad faces in your fundraising) may only be true for acquisition audiences (since that’s where they were tested).

Metacognition challenges.  Asking people why they do what they do is a fool’s errand.  Not only would the answers be biased by everything above, but people literally have no idea why they do what they do (a failure of metacognition – sounds more fun as a big word, no?), so they couldn’t tell you even if they wanted to.

To these challenges, I’d add filter effects: the idea that our leaders will only accept results that agree with their preconceived notions.  While this is not technically a problem with surveys, I’ve seen good surveys die this way.

All this makes it sound like you can’t trust people in a survey.  That conclusion is as flawed as taking survey responses as holy writ.  As we’ve discussed, there are some concepts, like loyalty and preference, that you simply can’t assess any other way.

So how do you craft a survey that gives you valuable results?  A few thoughts from some of our commitment studies and pre-test tool surveys, which can predict how messaging will perform before the first email is sent or the first stamp is applied:

  • Select your audience very carefully. When we want to assess how a direct mail audience will respond to messaging, we will select a panel of people who are 50-55+ years old and have given to similar organizations in the past year.  While this has some selection bias, it biases the panel toward what an acquisition audience will look like.
  • But don’t worry too much about channel. We recently collected a large sample of offline donors for an international relief organization and compared them to the online donors.  There was no substantive difference.  The people who have self-selected into donating to your organization are more like each other (for your purposes) than they are like their demographic counterparts.
  • Don’t tip what result is desired. With our pre-test tool surveys, we randomize different aspects of a mail piece or email to the A or B condition (a simple sketch of this kind of random assignment appears after this list).  This way, there is no preferred condition that we might subtly suggest to the donor.
  • Have professionals help with your survey design. The largest set of errors comes in the design of the survey itself.  There are plenty of questions that sound good but lead to bad results.  There are simple traps like leading questions (e.g., “How much do you like our organization?”) and double-barreled questions (e.g., “What do you think of the type and amount of communications you get from us?”) that most people know how to avoid, but there are hundreds of pitfalls like these, and even seemingly minor choices (e.g., whether you include an “I don’t know” option) can have major impacts.
  • Never ask why. This doesn’t mean you can’t figure out why.  By asking people to rate various aspects of their relationship with you along with their overall satisfaction, you can build an accurate model of what creates satisfaction for your donors (a sketch of this kind of driver model also follows this list).  In fact, if you marry this to behavioral data, you can even see the value of increasing satisfaction, loyalty, or commitment.
  • Set your hypothesis and assessment criteria first. If you can, get it in writing.  This will guard against leaders only accepting the results that bolster their claims.
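
As a rough illustration of the random assignment described in the “don’t tip what result is desired” point, here is a minimal Python sketch.  The respondent IDs, condition labels, and seed are hypothetical; the point is simply that software, not a person, decides who sees which version.

```python
import random

def assign_conditions(respondent_ids, conditions=("A", "B"), seed=42):
    """Randomly assign each respondent to one message condition.

    A fixed seed keeps the assignment reproducible, and shuffling a
    balanced list of labels keeps the two cells the same size.
    """
    rng = random.Random(seed)
    # One label per respondent, alternating so the cells stay balanced...
    cells = [conditions[i % len(conditions)] for i in range(len(respondent_ids))]
    rng.shuffle(cells)  # ...then shuffled so assignment is random, not systematic
    return dict(zip(respondent_ids, cells))

# Hypothetical usage: split a panel of 1,000 donor IDs into two equal cells.
assignments = assign_conditions(range(1, 1001))
```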
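
And here is a minimal sketch of the “never ask why” approach, assuming a hypothetical survey export with invented file and column names: rather than asking donors to explain themselves, you regress their overall satisfaction rating on their ratings of specific parts of the relationship.  A real driver analysis would take more care (scales, multicollinearity, sample size) than this toy example.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey export: one row per donor, with 1-10 ratings of
# specific relationship attributes plus an overall satisfaction rating.
df = pd.read_csv("donor_survey.csv")

attributes = ["thanked_promptly", "shown_impact", "asked_appropriately"]
X = sm.add_constant(df[attributes])   # ratings of specific experiences
y = df["overall_satisfaction"]        # overall satisfaction rating

model = sm.OLS(y, X).fit()
print(model.summary())  # coefficients suggest which attributes drive satisfaction
```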

We’d love to help design a survey for you – let us know how we can help.