Facts and Myths of Donor Surveys

December 30, 2020      Kevin Schulman, Founder, DonorVoice and DVCanvass

[ Editor’s Note:  During what is a holiday recess for many we’ll re-run some of the most popular posts from 2020.  Leading off this parade is the first of Kevin’s 3-Part series on Donor Surveys.  You can find Part 2 here and Part 3 here. We’ll resume posting new content on Monday, January 11.  Roger]

——————————————————————————————-

 

“The only research you can trust is actual response to real fundraising.  Surveys can reveal interesting and useful information. But they are no good at uncovering donors’ real motivations. The only way to know that is watching what they do.”

That quote comes from a well-known blogger in the fundraising space.  What it lacks in accuracy it certainly makes up for in certitude.

We argue the exact opposite is true.  You cannot understand the ‘why’ of an individual’s action (i.e. motivation) relying only on behavior data.  Is time-on-page a good indicator of quality, effective content that held someone on the web page, or is it evidence of someone struggling to understand what the hell the page is talking about?  Is the non-response to your appeal all bad?  Do we assume the person who gave to the appeal that you internally called “Program X” gave because of Program X or because of some other, deeper reason?  And should we assume that Program X is of greater interest to this person than Program Y or Z?

No matter how much you torture behavior data (gift transactions, web activity, digital engagement, etc.), it will never tell you why someone took an action or gave money.

On the flipside, and in defense of the anti-survey crowd, it’s likely they’ve come across some really crappy survey research.  “Likely” because that is the version of survey research most likely to predominate, particularly in the non-profit sector, where spend on it is proportionately a drop in the bucket compared to the for-profit world.

Generally, increased spend for a given service (e.g. survey research) results in a more mature marketplace and, typically, increased sophistication on the part of both buyer and seller.

Unfortunately, the broad-brush approach to surveys that usually ends up getting applied is fatally flawed at best and, at worst, dangerous to any charity that believes and acts on the results.

Facts and Myths

I’ve listed a few facts and myths below to set the record straight.  Then I’ll follow up with two Agitator posts that get very specific about how to ask (and not ask) questions.  Finally, I’ll tackle one of the most common ways this sector uses surveys to determine what donors find ‘important’: a simple but equally wrong way to arrive at key insights.

Facts:

 

  1. There are right and wrong ways to ask survey questions. We actually know, via data, how people read, process and answer survey questions.  Many books have been written on the subject by academics.  If the person designing your survey does not understand the psychology of survey response, you will get crappy data.

 

  2. Question formulation is science, not art.  It is as science-based as the statistical analysis that (should) get applied to the resultant data set.

 

  3. You cannot measure loyalty without attitudinal data.  As a corollary, you cannot measure loyalty well using just transactional data.

 

  4. You cannot determine donor motive, intent or need without attitudinal data.

 

  5. Many things we want to measure (trust, motive, need) must be measured indirectly in surveys. A tell-tale sign of a methodologically flawed survey project that is trying to get at deeper constructs and causal insights is an “analysis” that is nothing more than descriptive reporting of the questions asked and responses provided.  Almost always this means there was no science applied and, by extension, lousy data that prevents digging deeper into donor understanding.

 

  6. If the construct we want to measure is complex (e.g. personal connection to the charity, trust in the charity), it requires more than one survey question. It requires indirect statements identified with rigorous, quantitative scale-development methodologies (see the sketch after this list). This is why the much-touted Net Promoter Score (NPS) is of little practical use.

 

 

  7. The only way to gain understanding of the value/impact of charity touchpoints (i.e. the enormous spend put against house file communications in the name of stewardship and continued giving) is to build a model using attitudinal AND transactional data.

 

  8. Donor behavior is not a perfect predictor of anything. Is it an indicator? Sure.  But what is often overlooked is the opportunity for substantial improvement that comes from better targeting and, more importantly, from understanding donor needs, motives, intent and preferences, an understanding that only comes from collecting the right kind of attitudinal data and analyzing it the right way.

  

  9. The analytical plan dictates survey design. If a vendor you hired looks at you with a blank stare when you ask about the analytical plan as they set off to “write” the survey, then cancel the project immediately.
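
To make Fact 6 concrete, here is a minimal sketch, assuming a hypothetical four-item “trust in the charity” scale scored 1 to 7, of how a multi-item construct is checked for internal consistency before anyone treats it as a single score.  The item names and simulated data (built with pandas and numpy) are illustrative assumptions, not DonorVoice’s actual methodology; real scale development adds steps such as factor analysis and validation against behavior.

    # Minimal sketch: score a hypothetical 4-item "trust in the charity" scale
    # and check internal consistency with Cronbach's alpha.  All names and data
    # here are invented for illustration only.
    import numpy as np
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Internal-consistency reliability of a multi-item scale."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Simulated responses: four 1-7 agreement items tapping trust indirectly.
    rng = np.random.default_rng(1)
    latent_trust = rng.normal(0, 1, 500)            # the unobserved construct
    items = pd.DataFrame({
        f"trust_item_{i}": np.clip(
            np.round(4 + 1.2 * latent_trust + rng.normal(0, 0.8, 500)), 1, 7)
        for i in range(1, 5)
    })

    alpha = cronbach_alpha(items)
    print(f"Cronbach's alpha: {alpha:.2f}")     # roughly 0.8+ in this simulation
    items["trust_score"] = items.mean(axis=1)   # composite score, not any single question

The point of the composite is the same one Fact 6 makes: no single question carries the construct, and a low alpha is a warning that the “scale” is not really measuring one thing.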

 

Myths:

 

  1. We can’t understand donor motivation from surveys. It is true there are right and wrong ways to get the answer.  As a result, many surveys fail to correctly measure this.  But this is an indictment of the product provider, not the product.

 

  2. You can understand donor motivation, intent and needs by analyzing their behaviors. You can guess, sometimes correctly, sometimes not, but you’ll never truly know that donor’s motivation.

 

 

  3. If the survey data supports our existing opinion it is right; if it doesn’t, it is wrong. This is occasionally what the often-wrong-never-in-doubt crowd resorts to when commenting on survey data.

 

  4. We can only use surveys to explore very simple topics. In fact, we can understand very complex topics like loyalty.  Doing so requires simple, well-designed questions and analysis on the back end.

 

 

  5. You can’t believe what people tell you in surveys.  This is the poster child for the extremely flawed logic of the “often wrong, never in doubt” crowd.  The fact that this is true when the survey design and methodology are crap does not mean it is always true.

 

  6. It’s NOT what donors say in a survey, it’s what they do that you should pay attention to.  This myth is closely tied to Myth 5.  What if the donor survey matches the donor behavior?  Is the survey answer “right” then?  What about non-responders?  Are they “saying” they don’t want to give to you?  Ever?  Is there a chance there is a segment of non-responders today who will actually give when the timing and circumstances are better?  History says yes.

 

The next time you hear this (and you will hear it again and again), ask yourself whether there might be lots of instances where knowing how a donor feels or thinks about you is a better indicator of future behavior than the fact that they did not give on your last appeal.
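
That closing claim is testable on any file that holds both survey scores and giving history: fit one model on behavior alone, fit another that also sees an attitudinal score, and compare how well each predicts the next gift.  The sketch below, offered only as an illustration of the comparison Fact 7 calls for, runs on simulated data using scikit-learn, so every variable (recency_months, gift_count, commitment, gave_next_year) is a hypothetical stand-in rather than anything from a real donor file.

    # Minimal sketch: does adding an attitudinal score to transactional features
    # improve prediction of a future gift?  Data are simulated for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    n = 5000

    # Transactional features (behavior) plus an attitudinal "commitment" score,
    # e.g. a multi-item survey composite like the trust score sketched earlier.
    recency_months = rng.integers(1, 36, n)
    gift_count = rng.poisson(3, n)
    commitment = rng.normal(0, 1, n)

    # In this simulation, future giving depends on behavior AND attitude.
    logit = -1.0 - 0.05 * recency_months + 0.2 * gift_count + 0.9 * commitment
    gave_next_year = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X_behavior = np.column_stack([recency_months, gift_count])
    X_combined = np.column_stack([recency_months, gift_count, commitment])

    for name, X in [("behavior only", X_behavior), ("behavior + attitude", X_combined)]:
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, gave_next_year, test_size=0.3, random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{name:>20}: AUC = {auc:.3f}")

On data simulated this way the combined model scores a noticeably higher AUC; on a real file the size of that gap is an empirical question, which is exactly why the attitudinal data have to be collected and modeled rather than assumed away.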

What’s your experience with surveys?  Have you encountered some of these same myths and facts?    Please share your experience and also any questions in the Comments Section.

 

Kevin

 

3 responses to “Facts and Myths of Donor Surveys”

  1. Chuck Sheketoff says:

    Repeating myths reinforces them – see https://www.dropbox.com/s/sxfs71mm7unhrtt/20070904WASHPOSTPerisistenceOfMyths.pdf?dl=0

    Just tell the facts and dispel myths with just facts

  2. Jay Werth says:

    Tried to access Part 3 in this series and received the error message:

    “Sorry, you are not allowed to preview drafts.”

    Received the same message using three browsers’ latest versions: Microsoft Edge, Firefox, and Chrome