Surveying Your Donors – Tips and How (not) To’s…

January 27, 2012      Kevin Schulman, Founder, DonorVoice and DVCanvass

Up until this point, we’ve not blogged much on survey research design and best practices – the 101 or “how to” kind of post.  We are going to start now and hopefully it will be useful.  If not, speak up.

A caveat upfront: our view is that survey research, especially questionnaire design and analysis, is not art but science.  This means it is not a matter of subjective interpretation of what is and is not good design and analysis.  There are rules from the social sciences and the statistical sciences.  Violations are sometimes subtle, sometimes egregious.  The garbage in, garbage out adage is apt.  That said, it can be non-garbage in and still garbage out if the design and sampling are fine but the analysis stinks.

We’ll likely meander from critique to proactive tips depending on mood and available fodder. We’ll probably also name names on some occasions and withhold them on others to protect the innocent (or not so innocent).

Here is one naming-names example.  This is a slide from a Dini Partners deck on Moves Management, which you can find here if you’d like.

The critique, which may border on blistering, is multi-faceted.

First off, the survey question itself (from Penelope Burk).  Seriously?  Let’s set aside the leading structure, by which we mean subtly suggesting there is an obvious or correct answer.  Let us also set aside the notoriously unreliable data one often gets from asking questions about future intent or behavior.

Let’s suppose these three actions (or “moves” in moves management vernacular),

1) Personal thank you

2) Acknowledgement letter

3) Update on program

are important to successful stewardship and cultivation.  If so, then we really need separate, discrete measures for each to understand their relative impact on giving.  In other words, the question should be 3 different items, not one.

Also, the three questions should collect a more reliable rating than “would you give again…” as the context.  One example is asking whether each of these actions occurred – a simple yes/no on whether the donor experienced it or not.

Analysis can then be done using actual giving (if the respondents were sampled from the house file) or self-reported giving to determine, statistically and in a derived fashion, whether any of the three actions impacts giving and, if so, to what extent.
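For readers who want to see what that derived analysis might look like in practice, here is a minimal sketch in Python (statsmodels), assuming each of the three actions was asked as its own yes/no item and joined to a gave-again flag from the house file.  The variable names and the random placeholder data are ours for illustration, not the post’s (or Burk’s) actual question or results.

```python
# Sketch of a derived-importance analysis: regress giving on three yes/no
# "did this happen to you?" items to estimate each action's relative impact.
# Column names are hypothetical; the data below are random placeholders.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500  # placeholder sample size

# In practice these columns come from the survey (the three yes/no items)
# joined to the house file (a gave-again flag or gift amount).
df = pd.DataFrame({
    "personal_thank_you":     rng.integers(0, 2, n),
    "acknowledgement_letter": rng.integers(0, 2, n),
    "program_update":         rng.integers(0, 2, n),
    "gave_again":             rng.integers(0, 2, n),
})

X = sm.add_constant(df[["personal_thank_you",
                        "acknowledgement_letter",
                        "program_update"]])
y = df["gave_again"]

# Logistic regression: each coefficient (or odds ratio) indicates whether
# that action is associated with giving again, and roughly how strongly.
model = sm.Logit(y, X).fit(disp=False)
print(model.summary())
print(np.exp(model.params))  # odds ratios per action
```

If actual gift amounts are available instead of a yes/no flag, an ordinary least squares regression on the same three items would do the equivalent job.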

We also need to take the reporting of the (meaningless) results to task.  If “Definitely” and “Probably” are, in fact, different expressions of likelihood of giving, then it makes absolutely no sense to report them as a collapsed category.  If “Definitely” is 1% and “Probably” is 92%, we probably (or is it definitely?) have a different take on these findings.  It is well documented that the end-points on survey questions tend to represent well-formed opinions and solid judgments (how those end-points are labeled matters a lot, however).

Conversely, the mid-points are the mushy middle, with very little separating “Probably Would” and “Probably Would Not,” for example.
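To make the collapsing problem concrete, here is a tiny illustration (the percentages are hypothetical, not the survey’s actual results) showing how two very different answer distributions produce the same top-two-box number.

```python
# Two hypothetical distributions that both report "93% Definitely/Probably"
# when collapsed, yet tell very different stories about donor commitment.

from collections import Counter

scenario_a = Counter({"Definitely": 92, "Probably": 1,
                      "Probably Not": 5, "Definitely Not": 2})
scenario_b = Counter({"Definitely": 1, "Probably": 92,
                      "Probably Not": 5, "Definitely Not": 2})

for name, dist in [("A", scenario_a), ("B", scenario_b)]:
    top_two = dist["Definitely"] + dist["Probably"]
    print(f"Scenario {name}: top-two box = {top_two}%, "
          f"but Definitely alone = {dist['Definitely']}%")
```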

 

2 responses to “Surveying Your Donors – Tips and How (not) To’s…”

  1. David Cumming says:

    Good article. I think your critique shows the difference between a true commitment to evidence-based decision-making, and research to confirm existing belief. The former requires ongoing rigour and resourcing, which most organisations are unwilling or unable to sustain.

    • Kevin Schulman says:

      David,

      Thanks for the comment, and I very much agree: oftentimes research is done to affirm existing beliefs. Or, more insidiously, the mental filters we apply to the results (even if the research is well constructed) lead us to see only what we want to see. A pretty powerful human bias we all possess, but destructive nevertheless.