Donor Surveys are Crap. Or Are They?

November 4, 2013      Kevin Schulman, Founder, DonorVoice and DVCanvass

We hear it all the time from people with, as best we can tell, absolutely no background in survey research or quantitative analysis whatsoever. Many of these people mask this lack of knowledge with an often wrong, never in doubt tone.

On the flipside, and in their defense, they have likely come across some really crappy survey research. That version is likely the predominant one, particularly in the non-profit sector, where spending on survey research is, proportionately, a drop in the bucket compared to the for-profit world. Greater spending makes for a more mature marketplace and, typically, more sophistication on the part of both buyer and seller.

The broad brush that gets applied, however (see here and here), is fatally flawed at best and dangerous to any charity that believes it.

A few facts and myths to set the record straight.

Facts:

  1. There are right and wrong ways to ask survey questions. We actually know, via data, how people read, process and answer survey questions. Many books have been written on the subject by academics. If the person designing your survey does not understand the psychology of survey response, you will get crappy data.
  2. Question formulation is science, not art. It is as science-based as the statistical analysis that (should) get applied to the resulting data set.
  3. You cannot measure loyalty without attitudinal data.   As a corollary, you cannot measure loyalty well using just transactional data.
  4. You cannot determine donor motive or intent or need without attitudinal data.
  5. Many things we want to measure (trust, motive, need) must be measured indirectly in surveys. A tell-tale sign of a methodologically flawed survey design and project is if the “analysis” is nothing more than descriptive reporting of the questions asked and the responses provided. Almost always this means no (social) science was applied and, by extension, the data are lousy.
  6. If the construct we want to measure is complex (e.g. personal connection to charity, trust in charity), it requires more than one survey question. It requires an index, which typically includes indirect statements identified with rigorous, quantitative scale-development methodologies.
  7. The only way to gain understanding of the value/impact of charity touchpoints – i.e. the enormous spend put against the house file in the name of continued giving – is to build a model using attitudinal AND transactional data (see the sketch after this list).
  8. Donor behavior is not a perfect predictor of anything. Is it an indicator? Sure. Is there massive room for improvement in targeting and, more importantly, in understanding donor needs, motives, intent and preferences by collecting the right kind of attitudinal data and analyzing it the right way? Yep.
  9. The analytical plan dictates survey design. If a vendor you hired looks at you with a blank stare when you ask about the analytical plan – as they set off to “write” the survey – then cancel the project, immediately.
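
To make Facts 6 and 7 concrete, here is a minimal sketch in Python using invented item names (q_identify, q_trust, q_miss_if_gone) and synthetic data: it builds a multi-item commitment index, checks its internal consistency with Cronbach's alpha, and then fits a simple model that combines the attitudinal index with transactional history. This is not anyone's actual methodology, only an illustration of the index-plus-model idea.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Synthetic 1-7 Likert responses driven by one latent trait, so the three
# hypothetical items actually hang together the way a real scale should.
latent = rng.normal(0, 1, n)
def likert(x):
    return np.clip(np.round(4 + 1.5 * x), 1, 7)

items = pd.DataFrame({
    "q_identify":     likert(latent + rng.normal(0, 0.8, n)),
    "q_trust":        likert(latent + rng.normal(0, 0.8, n)),
    "q_miss_if_gone": likert(latent + rng.normal(0, 0.8, n)),
})

# Fact 6: a complex construct is measured with an index of several items,
# not a single question. Here the index is a simple mean of the items.
commitment = items.mean(axis=1)

# Quick internal-consistency check (Cronbach's alpha); roughly 0.7+ is the
# usual rule of thumb before trusting the index.
k = items.shape[1]
alpha = (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))
print(f"Cronbach's alpha: {alpha:.2f}")

# Fact 7: model behavior using attitudinal AND transactional predictors together.
transactions = pd.DataFrame({
    "gifts_last_24mo": rng.poisson(2, n),
    "avg_gift":        rng.gamma(2.0, 25.0, n),
})
X = pd.concat([commitment.rename("commitment"), transactions], axis=1)

# Synthetic "gave again next year" outcome so the example runs end to end.
logit = 0.6 * (commitment - 4) + 0.3 * transactions["gifts_last_24mo"] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
print(dict(zip(X.columns, model.coef_[0].round(3))))
```

In practice the synthetic data would be replaced with real survey responses and giving history, and the simple mean would give way to a properly developed scale, but the structure is the same: multiple items feed an index, and the index sits alongside transactional variables in one model.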

Myths:

  1. We can’t understand donor motivation from surveys. It is true there are right and wrong ways to get the answer and, as a result, many surveys fail to measure this correctly. But this is an indictment of the product provider, not the product.
  2. You can understand donor motivation, intent and needs by analyzing their behaviors. You can guess: sometimes correctly, often not, and with no way of knowing which.
  3. If the survey data supports our existing opinion it is right, if it doesn’t it is wrong.  This is occasionally what the often wrong, never in doubt crowd will do when commenting on survey data.
  4. We can only use surveys to explore very simple topics. We can understand very complex topics like loyalty. It does require simple, well-designed questions and proper analysis on the back end.
  5. You can’t believe what people tell you in surveys. This is the poster child for the extremely flawed logic of the “often wrong, never in doubt” crowd. That this is true when the survey design and methodology are crap does not mean it is always true.
  6. It’s NOT what donors say in a survey, it’s what they do that you should pay attention to. This myth is closely tied to Myth 5. What if the donor survey matches the donor behavior? Is the survey answer “right” then? What about non-responders? Are they “saying” they don’t want to give to you? Ever? Is there a chance there is a segment of non-responders today who actually will give when the timing and circumstances are right? History says yes. The next time you hear this, and you will hear it again (and again), ask yourself whether there might be lots of instances where knowing how a donor feels or thinks about you is a better indicator of future behavior than the fact that they did not give on your last appeal.