Top 5 Reasons Non-Profits Should Avoid Net Promoter Score (NPS)
Before launching into the Top 5, allow us a brief trip down memory lane. Remember the good old days when intent or willingness to do something, like recommend a product, brand or service, was a simple question included in many customer satisfaction surveys?
If the willingness to recommend question has parents (weird thought, we know), they’d be awfully proud, because that simple concept and question has been transformed into a symbol (like Prince), a philosophy, even an entire ecosystem for measuring and managing the customer relationship.
What a lofty status for such a simple, singular question.
Now, permit us a bit of background for those who are less familiar with the question and its newly minted status as a “system”.
The Net Promoter Score®, popularized by Fred Reichheld in his book The Ultimate Question: Driving Good Profits and True Growth, is one of the simplest loyalty measures. Customers are asked “How likely is it that you would recommend us to a friend or colleague?” and then provide a rating from 0 (“Not at all likely”) to 10 (“Extremely likely”).
The measure is called the “net promoter” score because detractors are subtracted from promoters. Detractors are respondents who rate their likelihood to recommend 6 or lower; promoters are only those who rate it 9 or 10. Respondents who select 7 or 8 are considered neutral. The NPS measure can run from -100% (0% promoters, 100% detractors) to +100% (100% promoters, 0% detractors), with typical scores in the 30-40% range.
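For readers who like to see the arithmetic spelled out, here is a minimal sketch in Python of the calculation described above; the function name and sample ratings are our own illustration, not part of any official NPS tooling.

```python
# Minimal sketch of the standard NPS calculation: % promoters minus % detractors.
# The sample data below is made up purely for illustration.

def net_promoter_score(ratings):
    """Return NPS as a percentage, given a list of 0-10 ratings."""
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)   # 9s and 10s
    detractors = sum(1 for r in ratings if r <= 6)  # 0s through 6s
    return 100.0 * (promoters - detractors) / len(ratings)

sample = [10, 9, 9, 10, 8, 7, 7, 6, 3, 0]
print(net_promoter_score(sample))  # 40% promoters - 30% detractors = 10.0
```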
Enough with the digressions, on to the Top 5.
Number Five Reason for Non-Profits to Avoid NPS: Assumes low scores are active “Detractors” of the brand
In geeky research terms, Reichheld and proponents of NPS have taken what is clearly a unipolar question about willingness to do something or not (i.e., will or will not recommend) and turned it into a bipolar one, with willingness to recommend on one end and willingness to detract on the other. In other words, with ZERO evidence, we are to accept that those who give a low score on the “willingness to recommend” question are not only not going to recommend your brand but will actively say bad things about it; hence the “Detractor” label.
From a management standpoint, if non-profits treat low NPS scores as mission-critical, at-risk detractors, it likely means putting more, and wrong-headed, resources against this segment; wrong-headed because a low score may signal a lack of anything active on the donor’s part rather than active trashing of your brand. More importantly, a low score may not even reflect negative sentiment; it may simply be a donor who does not engage in recommending behavior but who still likes or even loves you.
Number Four Reason: It throws away data
Throwing away data is an odd description, but in essence that is what is being done: 9s and 10s are collapsed into one bucket, 0s through 6s into another, and the 7s and 8s are quite literally ignored or thrown away. There is ample statistical and empirical evidence that this is wrong-headed; behavior-wise, a 0 is nothing like a 6. And that says nothing of the 7s and 8s, who get trashed entirely. Measurement does not come for free; don’t carelessly ignore certain responses and, by extension, group donors into segments (e.g., Detractors) when there is no basis for doing so.
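To make the information loss concrete, here is a small, made-up illustration: two donor bases with very different low-end responses produce the exact same score, because everything from 0 to 6 is lumped into a single “Detractor” bucket.

```python
# Made-up example of how collapsing 0-10 responses into three buckets
# discards information: very different donor bases, identical NPS.

def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

lukewarm = [6, 6, 6, 6, 6, 9, 9, 9, 9, 9]  # every "detractor" is a mild 6
hostile  = [0, 0, 0, 0, 0, 9, 9, 9, 9, 9]  # every "detractor" is an angry 0

print(net_promoter_score(lukewarm))  # 0.0
print(net_promoter_score(hostile))   # 0.0 -- same score, very different donors
```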
Number Three Reason: It is not actionable
The ‘system’ of NPS is a single question. That is it. Simplicity is an important goal, but this is extreme. Employing the ‘system’ provides zero indication of why people score the way they do. There is no guidance, specific, general or otherwise, on how to do root-cause analysis, understand the “why” behind responses, and determine the specific levers under the organization’s control that would move NPS.
Number Two Reason: Not as predictive of giving as other measures
Remember the purpose of attitudinal frameworks like NPS: to help you increase donor loyalty by nurturing those who love you (to gain greater share of wallet and actual recommending behavior) and by properly identifying those who don’t so that, where financially worthwhile, you can repair what is broken and grow the relationship.
The fact is, NPS does not do a very good job of discriminating on key behaviors like giving. Said another way, the “Promoters” are not that different from the “Detractors” when you look at how they behave. The chart from our recent Donor Commitment study affirms what many, many others have found: NPS (the last column) is not as good as Donor Satisfaction (or our Commitment Model) at identifying differences in behavior, as evidenced by the last row showing the percentage difference in giving between those “high” and “low” on the various frameworks. Perhaps the ultimate indictment of “willingness to recommend” (aka Net Promoter) comes from a study by Jon Krosnick of Stanford, who found that willingness to recommend is not as good as satisfaction at predicting ACTUAL recommending behavior.
And the number one reason to steer clear of NPS: Fred says so.
In June 2006 Fred Reichheld, the creator of NPS, wrote: “The reason that so many researchers hate NPS is that so many senior line executives love it.” He went on to defend NPS by saying that while it was less accurate than other measures at predicting individual customer behavior, it was better at predicting business growth. But a few weeks later he wrote that predicting individual behavior, rather than a correlation to growth, was the basis of NPS.
These recent responses to criticism are characterized by caution, caveats, and more than a bit of confusion – a long way from the grandstanding that initially accompanied NPS.
It’s been said that you can’t manage what you don’t measure. But a metric that doesn’t help you manage isn’t worth measuring. And the Net Promoter Score is a measure that doesn’t help you manage.