Is Your Quality Data So 3008 or So 2000 and Late?
The Black Eyed Peas may not dominate my playlists, but they were right about one thing. Some ideas age badly, and you tend not to notice until you’re already behind.
Retention data is one of those ideas. Not because it is useless, but because it tells you what already happened, long after you had a chance to change it. It’s your rearview-mirror quality measure: clear enough, accurate enough, and completely unhelpful when you are trying to avoid what is directly ahead of you.
Quality-over-quantity statements tend to get a lot of head nodding, but acting on that belief requires grappling with its operational implications. If retention is your primary quality signal, you’re committing to learning slowly and expensively. You wait a year to discover a problem, another year to test a fix, and all the while you are scaling decisions based on incomplete information.
Look at the chart on the left. The top two bars show 12-month retention by campaign. Death Penalty significantly outperforms Child Rights. With the benefit of hindsight, the conclusion is obvious: more budget should have gone to Death Penalty, and Child Rights should have been fixed or killed.
The bottom two bars tell a similar story for telefundraising. Fundraiser A and Fundraiser F look identical on the metric they are paid to optimize: number of sign-ups. They earn the same commissions too, but one of them is quietly and unknowingly destroying value. Fundraiser F is aptly named, if we’re grading.
But you only see this after a year has passed. Then you intervene, retrain, or reallocate, and wait another year to see whether it worked. If that feels inefficient, it is. You are paying real money to learn very slowly.
Now look at the chart on the right.
This is the same underlying reality, but viewed through a forward-looking DonorVoice True Quality Score. It is real-time, multifaceted, and diagnostic. In the first week of a campaign you know to shift spend to Death Penalty and to intervene with Fundraiser F, who is too aggressive on the close but, in rearview mode, gets rewarded for it.
The True Quality Score isn’t just earlier, it’s actionable. Retention obviously matters, but it’s a lousy way to drive a business while looking in the rearview.
Kevin



What sort of data allows you to evaluate the predictive accuracy of retention ‘while it is happening’? This must be critical in determining how to re-allocate resources using your True Quality Score.
Hi Graham, your question gets to the heart of it.
If you restrict yourself to passive 1st party data, you’re right to be skeptical. Using behavioral exhaust alone to identify “good” vs “at-risk” donors tends to get you slightly better than a coin flip. We see that too.
The step change comes when you stop treating the donor as something to be inferred and instead bring in zero-party data.
Our True Quality Score is built on passive 1st-party data plus three classes of zero-party input.
Commitment
A direct measure of mental attachment to the organization or cause. This can be captured at census level, asked on the form as part of the interaction.
Identity
The degree to which the donor incorporates the mission into their self-concept. For example, whether they see themselves as an “activist” or not. This is a powerful separator because identity alignment is one of the strongest predictors of durability across almost every domain we’ve studied.
Experience quality
A structured measure of whether interactions meet the basic psychological requirements of autonomy, competence, and relatedness. This is partially sampled, since not everyone answers, but it does two things at once. It gives you direct signal where you have it, and it trains the interpretation of downstream first-party behavior where you don’t.
Those three together are what move the model from “slightly better than guessing” to something operationally reliable.
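To make that concrete, here is a minimal sketch of how a composite score could blend those inputs. This is an illustration, not DonorVoice’s actual model: the field names, 0–1 scaling, and weights are all assumptions, and a real score would be fit to data rather than hand-weighted.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DonorSignals:
    # Zero-party inputs, each scaled to 0-1 (scaling is an assumption)
    commitment: float            # stated mental attachment to the org/cause
    identity: float              # degree the mission is part of self-concept
    experience: Optional[float]  # autonomy/competence/relatedness; sampled, may be missing
    # Passive 1st-party behavior, also scaled to 0-1
    behavior: float

# Hypothetical weights, for illustration only
WEIGHTS = {"commitment": 0.35, "identity": 0.30, "experience": 0.20, "behavior": 0.15}

def true_quality_score(s: DonorSignals) -> float:
    """Weighted blend of zero-party and passive 1st-party signals.

    When the sampled experience measure is missing, its weight is
    redistributed across the signals we do have -- a crude stand-in for
    the idea that experience data trains the interpretation of behavior.
    """
    parts = {"commitment": s.commitment, "identity": s.identity, "behavior": s.behavior}
    if s.experience is not None:
        parts["experience"] = s.experience
    total_weight = sum(WEIGHTS[k] for k in parts)
    return sum(WEIGHTS[k] * v for k, v in parts.items()) / total_weight
```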
And importantly, this is not just a measuring stick. We use the same signals in two ways:
Individually, to tailor the next message and interaction automatically, using business rules tied to the overall True Quality Score plus the diagnostics that make it up.
Systemically, to diagnose breakdowns in process, script, or fundraiser behavior. When a particular canvass, call script, or individual consistently produces low commitment or autonomy scores, you can intervene immediately instead of discovering the problem a year later in retention.
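As a hedged sketch of those two uses, the snippet below pairs a rule lookup for the next interaction with an aggregation that flags sources producing consistently low scores. The thresholds, message names, and grouping key are illustrative assumptions, not the actual rule set:

```python
from collections import defaultdict
from statistics import mean

# Individually: business rules on the overall score plus its diagnostics.
# Thresholds and message choices are purely illustrative.
def next_message(score: float, diagnostics: dict) -> str:
    if score < 0.4 and diagnostics.get("commitment", 1.0) < 0.3:
        return "reinforce_mission_impact"    # low attachment: lead with impact
    if diagnostics.get("experience", 1.0) < 0.3:
        return "service_recovery_outreach"   # poor experience: repair it first
    return "standard_cultivation"

# Systemically: aggregate diagnostics by script or fundraiser so a
# breakdown surfaces in week one, not in next year's retention report.
def flag_underperformers(records: list, min_n: int = 30, floor: float = 0.35) -> list:
    by_source = defaultdict(list)
    for r in records:  # each record: {"fundraiser": ..., "commitment": ...}
        by_source[r["fundraiser"]].append(r["commitment"])
    return [src for src, scores in by_source.items()
            if len(scores) >= min_n and mean(scores) < floor]
```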
So the validation loop is not “wait twelve months and see.” Retention still matters. It just stops being the only moment when learning is allowed.
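And on Graham’s validation question, one plain way to close the loop, assuming you log scores at sign-up and later observe 12-month retention, is to check how well the early score rank-orders eventual outcomes. The values below are illustrative placeholders, not real results:

```python
from sklearn.metrics import roc_auc_score  # any AUC implementation works

# Week-one True Quality Scores and eventual 12-month retention (1 = retained).
# Behavior-only models tend to sit near 0.5, the coin flip; the claim is
# that zero-party inputs move this meaningfully higher.
week_one_scores = [0.82, 0.31, 0.67, 0.45, 0.90, 0.22]  # illustrative
retained        = [1, 0, 1, 0, 1, 0]                    # illustrative

print(f"AUC, week-one score vs 12-month retention: "
      f"{roc_auc_score(retained, week_one_scores):.2f}")
```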