What is Important To Your Donors? Is This Even the Right Question?

February 1, 2012      Kevin Schulman, Founder, DonorVoice and DVCanvass

What is important to your donors?  What keeps them as donors?

Is the answer to these two questions the same?  Our answer is a definitive “it depends”.

What is important can look very different depending on whether the question is asked in a vacuum or with context provided.  We’d argue, however, that while asking in a vacuum is bad, providing context is not much better.

First, a vacuum example.  Consider this hypothetical but very illustrative example of what many organizations (and the vendors doing the work) might do to measure importance.

Survey Question:  Non-Profit X engages in the following activities.  Please rate each based on their importance to you, with “0” being not at all important and “10” being extremely important.

  1. Providing clean water to needy families
  2. Providing shelter to needy families
  3. Providing education opportunities to children
  4. Improving the job opportunities for needy families
  5. Helping families be more self-sufficient
  6. Improving the quality of life in poor communities

In survey-geek language this is asking for “stated” importance.  What tends to happen is that importance is over-stated or inflated: many activities receive very high scores, leaving no valid way to discriminate among the highly rated items and truly discern what matters.  The more damning fact, demonstrated over 35 years ago in a seminal study, is that importance scores from this method actually HARM predictive validity on a key outcome like giving.  Said another way, factoring importance scores in alongside, say, performance ratings on these same activities makes it less likely the organization can identify what to focus on that truly impacts choice.  It is better off ignoring the importance data and just using the performance ratings (e.g., how well does Non-Profit X perform on activities 1, 2, and so on).

What happened?  Psychologically, the respondent was not forced to make mental trade-offs, and as a result they took the path of least resistance.  That doesn’t make them bad actors intent on misleading; it just makes them human.  Furthermore, the question provided no context.  Important to what?  To whether I’ll give again?  To whether I’ll recommend you to a friend?  To whether you doing well on this makes me more committed long-term?

Logically, one might simply restructure the question a bit to provide this context.

Survey Question:  Non-Profit X engages in the following activities.  Please rate each based on their importance to you when considering making a donation, with “0” being not at all important and “10” being extremely important.

Problem solved… except this version is just as wrong and more insidious.  It gives the illusion of providing context, making the answers seem more useful or believable.  However, this context is unlikely to produce answers very different from the vacuum variety.  The reason?  It takes a simple cognitive task (one that produces garbage data) and makes it quite difficult, nearly impossible in fact.  To answer this version faithfully, the respondent needs to mentally determine a) whether the activity is important and then b) whether it is important in the context of a particular outcome (e.g., giving).  As a practical matter, this is a bridge too far.

Enter “derived importance”.  This should really be named “determinant importance”, since this approach identifies the activities that affect outcomes – i.e., which ones matter to giving, doing, or being more committed?  This, we’d argue, is the question most organizations want answered.

As a quick sidebar, “stated importance” can be structured in a way that makes it valid and forces trade-offs.  That methodology may even be desirable if an organization is limited in the number of survey completes it can get or, more importantly, is including many activities it does not currently perform but could.

Now, back to “derived importance”, or identifying the activities (or messages or brand attributes) that impact outcomes.  As referenced earlier, getting performance ratings on the activities a non-profit engages in is a good idea.  But high performance does not necessarily translate to high importance.  What the organization really needs to know is two-fold:

1)      What matters in driving key donor behavior, and

2)      How do we perform on those dimensions?

Using statistical modeling (the model choice and details matter a lot here), one can derive the importance of activities without having to ask the respondent.  This is done by modeling performance ratings against a key outcome like giving or Commitment (see more about this framework here) to identify the activities that have a math-based link.  This “link” simply means that if the organization improves its performance on an activity found to matter, then, all other things being equal, the behavior will increase.
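As a minimal sketch of the idea (the activity names and numbers are hypothetical, and a real analysis would demand a carefully chosen model, not the plain regression shown here), one can regress a giving outcome on standardized performance ratings and read the standardized coefficients as derived importance:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical number of survey respondents

# Simulated 0-10 performance ratings on three activities
# (illustrative names, not real survey data).
clean_water = rng.integers(0, 11, n).astype(float)
shelter     = rng.integers(0, 11, n).astype(float)
education   = rng.integers(0, 11, n).astype(float)

# Simulated outcome: in this toy world, giving is driven mostly by
# clean_water, a little by education, and not at all by shelter.
giving = 2.0 * clean_water + 0.5 * education + rng.normal(0, 3, n)

# Standardize predictors so coefficients are comparable across
# activities, then fit ordinary least squares with an intercept.
X = np.column_stack([clean_water, shelter, education])
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
Xz = np.column_stack([np.ones(n), Xz])
beta, *_ = np.linalg.lstsq(Xz, giving, rcond=None)

# The standardized coefficients serve as "derived importance":
# respondents were never asked what matters, yet the model
# recovers which activities actually move the outcome.
for name, b in zip(["clean water", "shelter", "education"], beta[1:]):
    print(f"{name:12s} derived importance: {b:.2f}")
```

In this simulation the coefficient for clean water comes out largest, shelter near zero, and education in between, mirroring how the outcome was generated without ever asking a stated-importance question.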

Another huge benefit of this approach is a much shorter survey.  Roughly half as long, in fact, since we want and need both importance and performance but only need to ask directly for the performance data.

A few key comparative takeaways and summary thoughts:

1)      The list of what matters (Q. 1 above) will differ when using stated importance versus derived importance.

2)      The list of stated-importance activities (identified in a way similar to what is described in this post) will be WRONG.  In fact, the organization is better off throwing this list away than trying to use it at all.  This was definitively demonstrated in a seminal piece of work you can find here if interested (warning: it gets very math-heavy very quickly).

3)      Unfortunately, your organization is highly unlikely to know this.

4)      Buyer beware is the only maxim, since it is often the case that we don’t know what we aren’t getting.