Facts and Myths of Donor Surveys - Part 1 of 3 on Donor Surveys

February 3, 2020      Kevin Schulman, Founder, DonorVoice and DVCanvass

“The only research you can trust is actual response to real fundraising.  Surveys can reveal interesting and useful information.  But they are no good at uncovering donors’ real motivations.  The only way to know that is watching what they do.”

That quote comes from a well-known blogger in the fundraising space.  What it lacks in accuracy it certainly makes up for in certitude.

We argue the exact opposite is true.  You cannot understand the ‘why’ of an individual’s action (i.e. motivation) relying only on behavior data.  Is time-on-page a good indicator of quality, effective content that held someone on the web page, or is it evidence of someone struggling to understand what the hell the page is talking about?  Is non-response to your appeal all bad?  Do we assume the person who gave to the appeal you internally called “Program X” gave because of Program X, or because of some other, deeper reason?  And should we assume that Program X is of greater interest to this person than Programs Y or Z?

No matter how much you torture behavior data (gift transactions, web activity, digital engagement, etc.), it will never tell you why someone took an action or gave money.

On the flipside, and in defense of the anti-survey crowd, it’s likely they’ve come across some really crappy survey research.  “Likely” because that version of survey research tends to predominate, particularly in the non-profit sector, where spend on it is proportionately a drop in the bucket compared to the for-profit world.

Generally, increased spend on a given service (e.g. survey research) results in a more mature marketplace and, typically, increased sophistication on the part of both buyer and seller.

Unfortunately, the broad-brush approach to surveys that usually ends up getting applied is fatally flawed at best and, at worst, dangerous to any charity that believes and acts on the results.

Facts and Myths

I’ve listed a few facts and myths below to set the record straight.  Then I’ll follow up with two Agitator posts that get very specific about how to ask, and how not to ask, questions.  Finally, I’ll tackle one of the most common ways this sector uses surveys to determine what donors find ‘important’: a simple but equally wrong way to arrive at key insights.

Facts:

  1. There are right and wrong ways to ask survey questions.  We know, via data, how people read, process and answer survey questions.  Academics have written many books on the subject.  If the person designing your survey does not understand the psychology of survey response, you will get crappy data.

  2. Question formulation is science, not art.  It is as science-based as the statistical analysis that should be applied to the resulting data set.

  3. You cannot measure loyalty without attitudinal data.  As a corollary, you cannot measure loyalty well using transactional data alone.

  4. You cannot determine donor motive, intent or need without attitudinal data.

  5. Many things we want to measure (trust, motive, need) must be measured indirectly in surveys.  A tell-tale sign of a methodologically flawed survey project that is trying to get at deeper constructs and causal insights is “analysis” that is nothing more than descriptive reporting of the questions asked and the responses provided.  Almost always this means no science was applied and, by extension, the data are too lousy to support digging deeper into donor understanding.

  6. If the construct we want to measure is complex (e.g. personal connection to the charity, trust in the charity), it requires more than one survey question.  It requires indirect statements identified with rigorous, quantitative scale-development methodologies.  This is why the much-touted Net Promoter Score (NPS) is of little practical use.

  7. The only way to gain understanding of the value/impact of charity touchpoints (i.e. the enormous spend put against house-file communications in the name of stewardship and continued giving) is to build a model using attitudinal AND transactional data.

  8. Donor behavior is not a perfect predictor of anything.  Is it an indicator?  Sure.  But what is often overlooked is the opportunity for substantial improvement that comes from targeting and, more importantly, understanding donor needs, motives, intent and preferences, which only comes from collecting the right kind of attitudinal data and analyzing it the right way.

  9. The analytical plan dictates survey design.  If a vendor you hired gives you a blank stare when you ask about the analytical plan as they set off to “write” the survey, cancel the project immediately.
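One of the facts above says a complex construct needs multiple, indirectly worded survey items identified with quantitative scale-development methods.  A standard first check in that workflow is scale reliability: do the items hang together well enough to be treated as one measure?  The sketch below is a minimal illustration, not a full scale-development procedure; the 3-item “connection to charity” scale and respondent scores are entirely made up.  It computes Cronbach’s alpha, where values around 0.7-0.8 or higher are conventionally read as acceptable reliability.

```python
# Minimal sketch: Cronbach's alpha for a multi-item survey scale.
# Each row is one respondent; each column is one item (1-5 agreement)
# intended to tap the same construct. All data here is hypothetical.
from statistics import pvariance

def cronbach_alpha(rows):
    k = len(rows[0])  # number of items in the scale
    item_vars = [pvariance([r[i] for r in rows]) for i in range(k)]
    total_var = pvariance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Five respondents answering a hypothetical 3-item scale.
responses = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 3, 2],
    [4, 4, 5],
]
print(round(cronbach_alpha(responses), 2))  # -> 0.92
```

With real survey data you would run this (or a full factor analysis) before averaging items into a single score; a low alpha signals the items are not measuring one underlying construct.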

 

Myths:

  1. We can’t understand donor motivation from surveys.  It is true there are right and wrong ways to get the answer, and as a result many surveys fail to measure motivation correctly.  But this is an indictment of the provider, not the product.

  2. You can understand donor motivation, intent and needs by analyzing their behaviors.  You can guess, sometimes correctly, sometimes not, but you’ll never truly know that donor’s motivation.

  3. If the survey data supports our existing opinion it is right; if it doesn’t, it is wrong.  This is what the often-wrong-never-in-doubt crowd occasionally resorts to when commenting on survey data.

  4. We can only use surveys to explore very simple topics.  In fact, we can understand very complex topics like loyalty.  Doing so requires simple, well-designed questions and analysis on the back end.

  5. You can’t believe what people tell you in surveys.  This is the poster child for the extremely flawed logic of the “often wrong, never in doubt” crowd.  The fact that this is true when the survey design and methodology are crap does not mean it is always true.

  6. It’s NOT what donors say in a survey, it’s what they do that you should pay attention to.  This myth is closely tied to Myth 5.  What if the survey answers match the donor’s behavior?  Is the survey answer “right” then?  What about non-responders?  Are they “saying” they don’t want to give to you?  Ever?  Is there a chance there is a segment of today’s non-responders who will give when the timing and circumstances are better?  History says yes.

 

The next time you hear this (and you will hear it again and again), ask yourself whether there might be many instances where knowing how a donor feels or thinks about you is a better indicator of future behavior than the fact that they did not give to your last appeal.

What’s your experience with surveys?  Have you encountered some of these same myths and facts?    Please share your experience and also any questions in the Comments Section.

 

Kevin

P.S.  Next up: Part 2—Survey Question Design 101

5 responses to “Facts and Myths of Donor Surveys - Part 1 of 3 on Donor Surveys”

  1. Jonny says:

    Hi Kevin.  Thank you, an interesting read, as always.
    A couple of questions. Are there any books you’d particularly recommend on the psychology of surveys?
    Also, do you share this sentiment for focus groups as a way of uncovering attitudinal info?

    • Kevin Schulman says:

      Jonny,

      Thanks for the feedback.  There are several books; they are all a bit academic, but hey, no pain, no gain.

      -The Psychology of Survey Response, Tourangeau, Rips, Rasinski.  There are three parts to any (survey) question: how the person understands it, how they access their memory to retrieve information tied to the question, and how they map their thinking to the answer choices provided.

      -There is a bit of a trilogy: Asking Questions (Sudman, Bradburn), Thinking About Answers (Sudman, Bradburn, Schwarz), Answering Questions (Schwarz, Sudman).

      -As a more lay, general overview of the entire survey process: Designing and Conducting Survey Research (Rea, Parker).

      -Delving a bit into analysis: Social Statistics, Blalock.

    • Kevin Schulman says:

      Jonny,

      Focus groups.  Let me first acknowledge I’ve conducted dozens and dozens of qualitative sessions and been guilty of every sin, not unlike in the quantitative sphere, so I at least come by these points of view honestly.

      Qualitative research can be useful, though I’d argue only in combination with quantitative work (done well; most isn’t).  The qual piece can come before the quant (hypothesis formation, hearing the audience talk about the product/issue in their own words, pre-testing the validity of the survey questions themselves, etc.) or after the quant, to get more understanding of a very specific quantitative finding.  Or, ideally, the qual-quant-qual sandwich, though that is exceedingly rare for time/cost reasons (it happens routinely in the corporate world).

      The problem comes in when qualitative is used as the only source of insight.  This seems to be commonplace with creative shops that look to “test” (read: validate) their own assumptions or creative concepts.  There is far too much risk that qual-only insight work is misconstrued, tortured or otherwise contrived, because the ways qualitative work can mislead are numerous (same as quantitative).

      Lousy questions, group think, lousy answers that flow from lousy questions, or simply asking folks to attempt a level of self-introspection that is cognitively impossible (e.g. any “why” question about their behavior).  If I had a nickel for every time I heard that all of this can be overcome as long as the moderator is trained and good at their job, I’d have, well, a lot of nickels.

      Rarely, if ever, is the qualitative data collected from these sessions analyzed with an ounce of rigor: for example, coding of transcripts by multiple parties and checking inter-coder reliability.  I’d wager that 99% of the time in non-academic settings the qualitative data is “analyzed” by a single person offering their interpretation of what they heard and cherry-picking verbatim quotes to support an inevitably biased point of view.
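      The inter-coder reliability check mentioned above is often quantified with Cohen’s kappa, which measures how much two coders agree on transcript codes after correcting for the agreement you’d expect by chance.  A minimal sketch follows; the theme labels and codes are hypothetical, and a real project would code far more excerpts.

```python
# Minimal sketch: Cohen's kappa for two coders assigning theme codes
# to the same transcript excerpts. All codes here are hypothetical.
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    n = len(codes_a)
    # Observed proportion of excerpts where the coders agree.
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement, from each coder's marginal code frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two coders labeling the same eight excerpts from a focus-group transcript.
coder1 = ["cost", "impact", "trust", "impact", "cost", "trust", "impact", "cost"]
coder2 = ["cost", "impact", "impact", "impact", "cost", "trust", "impact", "trust"]
print(round(cohens_kappa(coder1, coder2), 2))  # -> 0.62
```

      Kappa near 1 means strong agreement beyond chance, near 0 means the coders agree no more than chance; a low value suggests the coding scheme, not just the transcripts, needs rework before anyone draws conclusions.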

  2. Zach Shefska says:

    Kevin, this is great material, and I look forward to part two!  I wonder, if you have a chance, whether you could take a look at the donor survey guide I put together (https://imarketsmart.com/donor-surveys/)?  Surveys seem so simple, yet as you’ve shown, there are quite a few pitfalls you can fall into.  I’d love your feedback on what I’ve produced.

  3. Jason Chmura says:

    I’ve always been curious about incentivizing surveys.  I’ve heard that it can help encourage more people to take the survey, but I’ve also heard that it can encourage false answers.  Sort of a “just get it done to get my prize entry…” mentality.

    I’d be curious to know where incentivizing behavior falls in the fact/myth spectrum.