18 Months’ Worth of Testing in a Day

January 18, 2012      Roger Craver

Kevin Schulman, the CEO of DonorVoice, explains the flawed, but fixable, state of conventional direct mail testing and describes a remarkable new tool DonorVoice developed and tested: a tool designed to expand the creative process and eliminate the massive inefficiencies and severe limitations of most direct mail testing.

See a 2-minute screencast about this proven tool. If you can’t see the video below, click here.

Then read the detailed description below to find out how and why this may be an Innovator’s Tool your acquisition or housefile programs have been waiting for.

IMPORTANT NOTE: You can also listen to our webcast from February 9th on pre-testing here (9MB), and link to our PowerPoint presentation here.

Feel free to contact Kevin Schulman, CEO of DonorVoice, who designed and tested this innovation, at kschulman@thedonorvoice.com

__________________________________

 Does this sound like your organization?

  • The direct mail test ideas that actually make it into the mail stream are almost all incremental and rarely (mostly never) beat the control.
  • You spend enormous amounts of time and money coming up with test ideas, producing them and managing the logistics, only to have the same poor results year over year.
  • The more creative or innovative ideas tend to get discarded out of fear and the need to stick with the mostly known, if lousy, results of the control and the incremental changes to it.
  • Even if you do manage to test big ideas, the results are muddled at best since you changed EVERYTHING, instead of one element, and while there may be many good ideas in the creative package, they are forever lost amid the bad ones.

There are three interconnected BIG PROBLEMS with how most nonprofit direct mail testing is done.

  1.  Incrementalism to nowhere. Incremental changes produce, well … incremental results. Whether up or down, these tiny changes hardly matter. While it’s certainly true that small changes in response can yield meaningful changes on the top or bottom revenue line, it’s equally true that the vast, vast majority of these tests do not beat the control. Sadly, most testing becomes more habitual than strategic or purposeful.
  2. The A/B road to infinity. The bread and butter of testing methodology is the A/B test. And while the logic is sound, it is incredibly inefficient. Even with a ridiculously over-simplified example of a direct mail package with 3 components – outer envelope, letter and reply form – and 6 choices for each component, there are 216 possible combinations. If a nonprofit does 15 tests a year, it will take more than 14 years to test all the possibilities! (See the quick calculation after this list.) When you consider a more realistic example that also includes a front-end or back-end premium or both, the possible combinations quickly go to, for all intents and purposes, infinity. What does this mean? Does anybody believe that, with a nearly infinite number of choices to make on a direct mail package, your control, which is really hard to beat, is the proverbial needle in the haystack – the winning combination among countless possibilities?
  3.  Lack of wisdom in conventional wisdom. Most organizations will acknowledge that the process to determine what gets tested is not empirical, rigorous or efficient. It more typically borders on the haphazard, with an abundance of caution and conventional wisdom thrown in.
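To make the arithmetic in point 2 concrete, here is a quick back-of-envelope sketch of the combinatorial explosion. The component counts, option counts, and testing pace are illustrative assumptions, not DonorVoice figures.

```python
# Back-of-envelope math for the combinatorial explosion described above.
# Component counts, option counts, and testing pace are illustrative
# assumptions, not DonorVoice figures.

components = {
    "outer envelope": 6,
    "letter": 6,
    "reply form": 6,
}

combinations = 1
for options in components.values():
    combinations *= options

tests_per_year = 15
print(f"{combinations} packages -> about {combinations / tests_per_year:.0f} years of A/B testing")
# 216 packages -> about 14 years of A/B testing

# Add a front-end and a back-end premium (6 options each) and the space explodes:
combinations *= 6 * 6
print(f"{combinations} packages -> about {combinations / tests_per_year:.0f} years of A/B testing")
# 7776 packages -> about 518 years of A/B testing
```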

It doesn’t have to be this way. Fortunately, our commercial product development brethren can point to the solution. Using a multivariate, survey-based methodology, nonprofits can pre-identify the best test ideas … those most likely to compete with and beat the control.

By taking this scientific, disciplined route, nonprofits can greatly reduce costs by NOT mailing test packages likely to perform poorly, and increase net revenue by increasing volume on likely winners.

HOW IT WORKS

The pre-identification of likely winners and losers is done in two parts:

1) First, surveying donors who are representative of those who will receive the actual mailing, showing them visuals of the direct mail package and measuring preferences using a very specific and battle-tested methodology.

2) Next, using the survey data to build a statistical model to assign a score to every single element that was evaluated.

This methodology is well established and used by large consumer companies (e.g., Coca-Cola, General Mills, Procter & Gamble) to guide product development for many of the sodas, cereals and detergents on grocery store shelves.
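The article doesn’t disclose the exact model, but the two-step process above reads like a conjoint-style approach: survey respondents evaluate mocked-up packages, and a statistical model assigns a score to every element. Here is a minimal, hypothetical sketch of that scoring step, assuming a simple main-effects regression; the element names, ratings, and choice of ordinary least squares are illustrative assumptions, not DonorVoice’s actual method.

```python
# A sketch of the survey-scoring step, assuming a simple main-effects model
# (one common flavor of conjoint-style analysis). All data here is hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: each row is one respondent's 1-10 rating of a
# mocked-up package, described by the elements it contains.
surveys = pd.DataFrame({
    "envelope": ["teaser", "plain",   "teaser", "window",  "plain",  "window",  "teaser",  "window"],
    "letter":   ["story",  "story",   "stats",  "stats",   "urgent", "urgent",  "urgent",  "story"],
    "reply":    ["simple", "premium", "simple", "premium", "simple", "premium", "premium", "simple"],
    "rating":   [8,        6,         5,        4,         7,        6,         7,         9],
})

# Fit a main-effects model: every element level gets its own coefficient.
model = smf.ols("rating ~ C(envelope) + C(letter) + C(reply)", data=surveys).fit()

# The coefficients act as the "score assigned to every single element":
# the higher the score, the more likely that element is to help beat the control.
print(model.params.sort_values(ascending=False))
```

In a real study the survey design, sample size and model form matter a great deal; the point of the sketch is simply that per-element scores can be estimated from survey data before anything is mailed.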

The time has come to drastically change the way direct mail testing is done; the solution is as close as that tube of toothpaste in your bathroom.

For case examples of how this pre-test tool has worked with some nonprofits, click here.

Why It’s Important to Avoid the “Baby and Bath Water” Problem in Direct Mail Testing

One particularly aggravating problem in most direct mail testing is throwing out the baby with the bath water. This happens when the organization mails a test package with numerous test elements – i.e., a whole bunch of stuff is changed.

The mail results for the package are a very crude measuring stick for performance, only giving thumbs up or thumbs down for the entire package, with zero guidance as to whether individual components were well received (i.e., the “baby”) even if the bath water needs to be changed.

This happens all the time. The only alternative, which, as a general rule, NEVER happens, is to deconstruct the totally new package into a series of A/B tests, with each test panel including only a single change. Even if this were done, it would take forever and a day to execute. Certainly some groups may try to read the tea leaves and infer or guess, based on years of experience and past testing, why a package did poorly and what might be salvageable. But clearly, this is a flawed process fraught with layers of personal bias.

There is a better, empirical way. For the past 30 years product developers have used a survey-based methodology to identify and separate the baby from the bath water. This process can be done in days versus months, costs a fraction of what traditional testing costs, and, as a recent client told us, is “like doing 18 months of testing in a day”. (To learn more about the methodology, click here.)

Here is a recent example of the old-fashioned way. Client X mails a totally new package – different OE, letter format, letter copy, inserts – against the control. New package performs poorly. Money is lost. Time is lost. New package is thrown out.

The better way. The New Package is never mailed, because the client pre-tested it (and hundreds of other package combinations) and determined it would not perform well, as constructed, against the control. Thousands of dollars are saved, many insights are gained that would have taken years to accumulate with conventional A/B Testing, and the cost of the pre-test is more than covered.

AND … most importantly, a new ‘baby’ or package element is potentially discovered when two components of the new package test quite well (as determined by an actual score assigned to every single element). These New Elements are now live-tested in the control package, replacing elements of the control that are identified as weak.
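As a simple illustration of that last step (with made-up scores, not client data), once every element carries a score, the strong test elements can be combined with the strong control elements into a new challenger package:

```python
# Illustrative only: hypothetical element scores (e.g., coefficients from a
# model like the one sketched earlier), used to assemble a challenger package
# that keeps the strong "baby" elements and drops the weak "bath water."
element_scores = {
    "envelope": {"control: plain": 0.0, "test: teaser": 1.4, "test: window": -0.3},
    "letter":   {"control: story": 0.8, "test: stats": -1.1, "test: urgent": 0.2},
    "reply":    {"control: simple": 0.5, "test: premium": -0.6},
}

# For each component, keep whichever option scored highest.
challenger = {
    component: max(options, key=options.get)
    for component, options in element_scores.items()
}
print(challenger)
# {'envelope': 'test: teaser', 'letter': 'control: story', 'reply': 'control: simple'}
```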

New “bath water” (i.e., poor-performing test elements) is unfortunately easy to create, but “baby elements” (i.e., winning package elements) are much more difficult to identify.

Isn’t it time to start identifying the winning “babies”?

Kevin Schulman, CEO

DonorVoice

Contact: KSchulman@thedonorvoice.com

One response to “18 Months’ Worth of Testing in a Day”

  1. Mike Cowart says:

    I would like to learn more re your new way of testing & associated costs.
    Thanks
