What Is Important to Your Donors? How Do You Know?- Part 3 of 3 on Donor Surveys

February 7, 2020      Kevin Schulman, Founder, DonorVoice and DVCanvass

Consider this hypothetical but quite illustrative example of what many organizations (and the vendors conducting the work) might do to measure importance.

Survey Question: Non-Profit X engages in the following activities.  Please rate each based on its importance to you, with “0” being not at all important and “10” being extremely important.

  1. Providing clean water to needy families
  2. Providing shelter to needy families
  3. Providing education opportunities to children
  4. Improving the job opportunities for needy families
  5. Helping families be more self-sufficient
  6. Improving the quality of life in poor communities

In survey-geek language, this is asking for “stated” importance.  What tends to happen is that importance is over-stated or inflated: many activities receive very high importance scores, and there is no valid way to discriminate among the highly rated ones to discern what truly matters.

The more damning fact, proven over 35 years ago in a seminal study, is that importance scores gathered this way actually harm predictive validity on a key outcome like giving.  Said another way, factoring in importance scores alongside, say, performance ratings on these same activities makes it less likely the organization can identify what to focus on that truly impacts donors’ choices.  The organization is better off ignoring the importance data and just using the performance ratings (e.g. how well Non-Profit X does on activities 1, 2, etc.).
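
If you want to sanity-check this on your own data, a rough comparison looks like the sketch below: join survey responses to actual giving from the CRM, then see whether importance-weighting the performance ratings predicts giving any better than the performance ratings alone.  The file, the column names, and the simple OLS / R-squared comparison are illustrative assumptions on my part, not the method used in the study referenced above.

```python
# Toy comparison: do stated-importance weights add any predictive validity?
# Assumes a hypothetical CSV where survey responses are already joined to CRM giving.
import pandas as pd
import statsmodels.api as sm

def r_squared(X: pd.DataFrame, y: pd.Series) -> float:
    """Fit an OLS model with an intercept and return its R-squared."""
    return sm.OLS(y, sm.add_constant(X)).fit().rsquared

df = pd.read_csv("survey_with_giving.csv")  # hypothetical joined survey + CRM file
activities = ["water", "shelter", "education", "jobs", "self_sufficiency", "quality_of_life"]

perf = df[[f"perf_{a}" for a in activities]]   # 0-10 performance ratings
imp = df[[f"imp_{a}" for a in activities]]     # 0-10 stated importance ratings
weighted = pd.DataFrame(perf.values * imp.values, columns=activities, index=df.index)

print("Performance only:    ", r_squared(perf, df["gave_amount"]))
print("Importance-weighted: ", r_squared(weighted, df["gave_amount"]))
# If the stated-importance ratings carry no real signal, the weighted model
# will do no better (and often worse) than the unweighted one.
```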

What happened?  Psychologically, the respondent was not forced to make mental trade-offs and, as a result, took the path of least resistance.  This doesn’t make them bad, or intent on misleading; just human.  Furthermore, the question provided no context.  “Important to what?  To whether I’ll give again?  To whether I’ll recommend you to a friend?  To whether your doing well on this makes me more committed long-term?”

Logically, one might simply restructure the question a bit to provide this context.

Survey Question: Non-Profit X engages in the following activities.  Please rate each based on its importance to you when considering making a donation, with “0” being not at all important and “10” being extremely important.

Problem solved… except this version is just as wrong and more insidious.  It gives the illusion of providing context, and therefore of making the answers more useful or believable.  However, this context is unlikely to produce answers very different from the no-context version.  The reason?  This context takes a simple cognitive task (one that produces worthless data) and makes it quite difficult, nearly impossible in fact, for the respondent to answer faithfully.  To answer this version faithfully the respondent needs to mentally determine a) whether the activity is important and then b) whether it is important in the context of a particular outcome (e.g. giving).  As a practical matter, this is a bridge too far.

Enter the concept of “derived importance”.  This should really be named “determinant importance”, since the approach identifies the activities that affect outcomes – i.e. which ones matter to giving, taking action, or becoming more committed.  This, in my experience, is the question most organizations want answered.

(As a quick sidebar, “stated importance” can be structured in a way that makes it valid and forces trade-offs.  That methodology may even be desirable if an organization is limited in the number of survey completions it can get or, more importantly, is including a lot of activities it isn’t currently doing but could be.  The analysis for a properly structured “stated importance” survey is pretty involved, though; a deliberately simplified illustration follows.)
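
The post doesn’t prescribe a specific trade-off format, so treat this as an illustrative assumption on my part: one common way to force trade-offs is best-worst scaling (MaxDiff), where each respondent repeatedly sees a small subset of activities and must pick the single most and single least important one.  The sketch below shows only the crudest “counting” analysis of such data; the more involved analysis alluded to above would use a proper choice model instead.

```python
# Minimal best-worst (MaxDiff) counting analysis on hypothetical responses.
# Each tuple: (activities shown, the one picked as MOST important, the one picked as LEAST).
from collections import Counter

responses = [
    (["clean water", "shelter", "education"], "clean water", "education"),
    (["shelter", "jobs", "self-sufficiency"], "self-sufficiency", "shelter"),
    (["clean water", "quality of life", "jobs"], "clean water", "quality of life"),
]

best, worst, shown = Counter(), Counter(), Counter()
for items, most, least in responses:
    shown.update(items)   # how often each activity appeared
    best[most] += 1       # times chosen as most important
    worst[least] += 1     # times chosen as least important

# Best-minus-worst score, normalized by exposure; higher means more important.
scores = {a: (best[a] - worst[a]) / shown[a] for a in shown}
for activity, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{activity:16s} {score:+.2f}")
```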

Back to ‘derived importance’ or identifying the activities (or messages or touchpoints) that impact outcomes.  As referenced earlier, getting performance ratings on activities engaged in by a non-profit is a good idea.  But high performance does not translate to high importance, necessarily. 

What the organization really needs to know is two-fold:

  • What matters, i.e. which activities impact key donor behavior, and
  • How does our organization do on those dimensions?

Using statistical modeling (the model choice and details matter a lot here), one can derive the importance of activities without having to ask the respondent.  This is done by modeling the surveyed performance ratings against a key outcome like giving (actual giving from the CRM is ideal, rather than giving as reported in a survey) or Commitment (see more about the commitment framework here) to identify the activities that have a math-based link to that outcome.  This “link” simply signifies that if the organization improves its performance on an activity found to matter then, all other things being equal, the behavior will increase.
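
As a rough sketch of what that modeling can look like, assuming survey performance ratings joined to CRM giving: regress the outcome on standardized ratings and read the coefficients as derived importance.  The column names, the plain OLS model, and the use of standardized coefficients are my own simplifying assumptions; as noted above, the real model choice and details matter a lot.

```python
# Minimal "derived importance" sketch: which activities' performance ratings
# are statistically linked to actual giving?
import pandas as pd
import statsmodels.api as sm

def derived_importance(df: pd.DataFrame, activity_cols: list, outcome_col: str) -> pd.Series:
    """Regress the (standardized) outcome on standardized performance ratings and
    return the coefficients sorted from strongest to weakest link."""
    X = (df[activity_cols] - df[activity_cols].mean()) / df[activity_cols].std()
    y = (df[outcome_col] - df[outcome_col].mean()) / df[outcome_col].std()
    model = sm.OLS(y, sm.add_constant(X)).fit()
    return model.params.drop("const").sort_values(ascending=False)

# Hypothetical usage with survey responses joined to CRM giving:
# survey_crm = pd.read_csv("survey_with_giving.csv")
# importance = derived_importance(survey_crm, ["q1", "q2", "q3", "q4", "q5", "q6"], "gave_amount")
# print(importance)  # larger coefficients = activities more strongly linked to giving
```

Pair those derived-importance scores with the raw performance ratings and you have both halves of the two-fold question above: what matters, and how well you do on it.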

Another huge benefit of this approach is a much shorter survey – half as long, in fact, since we want and need both importance and performance but only have to ask directly, via the survey, for the performance data.

A few key comparative take-aways and summary thoughts on best determining what’s important to your donors:

  • The list of what matters (Q. 1 above) will be different when using stated importance versus derived importance.
  • The list of stated-importance activities (identified in a way similar to what is described in this post) will be wrong. In fact, the organization is better off throwing it away than trying to use it at all.  This was definitively proven and demonstrated in a seminal piece of work you can find here if interested (warning: it gets very math-heavy very quickly).
  • When it comes to contracting for donor surveys aimed at identifying what’s important to donors (or any survey, for that matter), Buyer Beware is a wise maxim. Too often we don’t know what we are not getting.  This point can’t be emphasized enough, as evidenced by the direct, “stated” approach to asking about importance and its wide use by the outside “experts” you will be paying.  The evidence is really strong and really clear: you are better off paying for this data and throwing it away than using it to make decisions.  But your expert won’t know this, and it seems perfectly reasonable on the surface that if you want to know something, you should just ask it.
  • You can get tremendous insight from surveys, including, as we’ve argued, insight into the “why” that simply can’t be learned from behavioral data. But how you ask matters, and it often requires a less direct route.

 So be sure you understand the methodology and why it’s being used.

Kevin

 

3 responses to “What Is Important to Your Donors? How Do You Know?- Part 3 of 3 on Donor Surveys”

  1. Jason says:

    Great series with lots of insight. Thank you.

  2. I’m grateful for this series, but also frustrated. Some of the most valuable work being done by nonprofits today is work by small, community-based organizations. They will never have the budget to do the research and analysis you recommend. So, what do they do instead? Guess? Give up?