Acquisition: Direct Mail Testing – Part 1

December 4, 2012      Roger Craver

Tom’s post on the Obama campaign’s email testing prompts me to weigh in on one of the least understood and woefully mis-practiced skills in the direct response fundraiser’s repertoire – direct mail testing.

 

Recognize any of these symptoms in your organization or among your clients?

  • The test ideas that make it into the mail stream are almost all incremental and seldom (perhaps even never) beat the control;
  • Lots of time and money are spent coming up with test ideas, executing them and managing the logistics, only to be confronted with the same poor results year after year;
  • Truly creative or innovative ideas tend to get discarded out of fear and the risk-averse need to stick mostly with the known;
  • Even if you do manage to test big ideas, the results are muddled at best since EVERYTHING changed, instead of one element. Consequently, while there may be many good ideas in the new package, they are forever lost amid the bad ones.

You’re not alone. Almost universally, there are three interconnected and very BIG PROBLEMS with how most nonprofit direct mail testing is done.

1. Incrementalism to nowhere.
Denny Hatch, one of the best copywriters in the biz and Editor of the must-read Business Common Sense, reminded Tom and me yesterday of the late Ed Mayer’s admonition: “Don’t test whispers.”

Ed’s advice should be tattooed somewhere on every direct response fundraiser. Why? Because small, incremental changes (“whispers”) produce, well … incremental results usually not even worth whispering about. Whether up or down, these tiny changes hardly matter.

While it’s certainly true that small changes in the response can yield meaningful changes on the top or bottom revenue line of large-volume mailers, it’s equally true that the vast, vast majority of these tests do not beat the control.

Sadly, most testing becomes more habitual than strategic or purposeful.

2. The A/B road to infinity.
The bread and butter of testing methodology has long been the A/B split test. And while the logic is sound, it is incredibly inefficient.

In fact, even with a ridiculously over-simplified example of a direct mail package with 3 components – outer envelope, letter and reply form – and 6 choices for each component, there are 216 possible combinations (6 × 6 × 6). If a nonprofit does 15 tests a year, it will take more than 14 years to test all the possibilities!

Then, if you’re into front-end or back-end premiums (or both) and want to add those to the testing plan, the possible combinations quickly go, for all intents and purposes, to infinity.
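The arithmetic is easy to check. Here is a minimal sketch; the component counts and the extra premium slots are illustrative assumptions, not a real testing plan:

```python
from math import prod

def total_combinations(choices_per_component):
    """Multiply the number of choices available for each package component."""
    return prod(choices_per_component)

# Three components (outer envelope, letter, reply form), six choices each.
combos = total_combinations([6, 6, 6])
print(combos)                    # → 216
print(round(combos / 15, 1))     # at 15 tests a year → 14.4 (years)

# Bolt on hypothetical front-end and back-end premium slots with, say,
# four options each, and the search space balloons.
print(total_combinations([6, 6, 6, 4, 4]))   # → 3456
```

Every new element multiplies, rather than adds to, the number of packages an A/B program would need to mail.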

What does this mean? With a nearly infinite number of choices to make on a direct mail package, does anybody really believe that A/B testing will stumble on the proverbial needle in the haystack: the winning combination that beats a control which is already very hard to beat?

3. Lack of wisdom in conventional wisdom.
Most organizations and their consultants will or should acknowledge that the process to determine what gets tested is anything but empirical, rigorous or efficient. More typically the process borders on the haphazard, with an abundance of caution and conventional wisdom thrown in.

It doesn’t have to be this way. Fortunately, our commercial product development brethren can point to the solution. Using a multivariate, survey-based methodology, nonprofits can pre-identify the best test ideas … those most likely to compete with and beat the control.

By taking this scientific, disciplined route, nonprofits can greatly reduce cost by NOT mailing test packages likely to perform poorly and increase net revenue by increasing volume on likely winners.

 

HOW PRE-IDENTIFICATION WORKS

The pre-identification of likely winners and losers is done in two parts:

1) First, surveying donors who are representative of those who will receive the actual mailing, showing them visuals of the direct mail package and measuring preferences using a very specific and battle-tested methodology.

2) Next, using the survey data to build a statistical model to assign a score to every single element that was evaluated.
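The actual DonorVoice model is proprietary, but the general shape of step 2 can be sketched. In the toy example below, the element names, the dummy coding, the donor ratings, and the ordinary-least-squares model are all invented for illustration:

```python
import numpy as np

# Hypothetical survey data: each row is one package variant shown to a
# donor, dummy-coded by which option each element used. Baselines are
# teaser A, a plain envelope, and no premium. y is the donor's stated
# preference on a 1-10 scale. All names and numbers are invented.
X = np.array([
    # teaser_B, teaser_C, window_env, premium
    [0, 0, 0, 0],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 1],
], dtype=float)
y = np.array([4.0, 6.0, 5.0, 4.5, 7.0, 6.5, 8.0, 9.5])

# Step 2: fit a part-worth score for every evaluated element at once
# via least squares (intercept column added for the baseline package).
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def score(features):
    """Predicted preference for any candidate package (dummy-coded)."""
    return coef[0] + features @ coef[1:]

# Rank combinations that were never mailed, before printing anything.
print(round(score(np.array([1.0, 0.0, 1.0, 1.0])), 2))  # teaser B + window + premium → 9.5
```

The payoff is the last line: once every element has a score, any combination, including ones never shown to a single donor, can be ranked against the control before a test package is printed.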

This methodology is well established and used by large consumer companies (e.g., Coca-Cola, General Mills, Procter & Gamble) to guide product development for many of the sodas, cereals and detergents on grocery store shelves.

I know it works in the nonprofit sector as well, because our sister company DonorVoice has successfully used it for a number of large and small nonprofits. You can see a short video of how the process works and download some case histories by clicking here.

Given the ever-worsening problems with acquisition and the long-term stakes involved in dwindling donor files, the time has come to drastically change the way direct mail testing is done.

Please share your experiences and solutions.

Roger

P.S. In Acquisition: Direct Mail Testing – Part 2, we’ll look at the Importance of Avoiding the Baby and the Bath Water.

P.P.S. As for our own subject line testing, in one day, our ‘Hey’ post from Monday has been opened by more readers than all but three posts entered in the last 30 days! Does that speak to the power of “Hey”… or do Tom and I just write lousy headlines?! More detail to follow.

2 responses to “Acquisition: Direct Mail Testing – Part 1”

  1. Kate Mathews says:

    Roger, I’ll bite. This method looks like focus group testing with storyboards. Except perhaps it is some kind of on-line process now where participants are sent a presentation electronically, or perhaps come in to a center and use a computer that tracks eye-movement and reaction time???

    TWS had success with this methodology when it developed a “cut tree” control package that revitalized an acquisition control package on below-cost timber sales. TWS used storyboards and old-fashioned focus groups. Several elements were tested, but it was a new carrier design and incremental changes elsewhere that won, were mailed, and bought new life for an aging control. Ann Monnig spearheaded this effort.

    But questions that your post spark are:

    How is the audience selected and qualified?

    Is this testing online or in person? Is it moderated? Is it timed?

    If it is an old-fashioned focus-group setting, how is focus-group bias controlled for? I.e., “professional” participants who in theory bias the results because they go to so many focus groups and “play the system” … OR a group where one strong personality dominates the discussion and the opinions expressed?

    And, the question that bottom-line oriented people will ask: what does it cost?

    Thanks. Kate

  2. Roger Craver says:

    Good questions, Kate.

    This patented conjoint survey process was developed by DonorVoice to enable organizations to measure the effect of literally dozens and dozens of packages and package elements at one time and to score them against each other and against the control. All without having to wait for ink to dry or the postal service to deliver.

    This is not a focus group. The process doesn’t ‘care’ why someone likes or dislikes a package or package element; only that he/she prefers one over another. All the variables are scientifically exposed to an online audience of donors chosen to match the demographics and behavioral patterns of the mailing lists the organization will ultimately use.

    The process is not moderated, but there are various ‘timing’ and other protocols used to guard against any gaming of the system. In test after test, and backtest after backtest, the system has accurately scored and projected the winning packages and package elements.

    The cost varies, but ranges upward from $20,000 depending on the complexity of the test in terms of the variety and volume of audiences involved. When you consider the amount of money (and, perhaps even more importantly, the amount of time and opportunity) spent on conventional, incremental testing, this approach is inexpensive. As one large mailer told us: “it’s like doing one year’s testing in a week.”

    What is key here is the process of “pre-identifying” those packages and package elements that are likely – or not likely – to beat the control … the ability to test virtually any idea, graphic, offer, signer, premium, teaser, size (and combinations of them all) in a risk-free, empirical environment.

    If you’d like to learn more, I recommend you contact Kevin Schulman over at DonorVoice (kschulman@thedonorvoice.com).

    Roger