A “Winning” Test
Would you consider sending the same ask to the same person in the same time period but through two different channels?
This technique is sometimes employed with emails that match the direct mail message and are timed to coincide with arrival in the physical mailbox.
We tested the same notion, but with a request for feedback about the sustainer signup process. We sent the request within 24 hours of signup via both SMS and email; the control condition was email only.
For one client the percentage of people giving feedback jumped from 4% (email-only ask) to 36% for the group asked via both SMS and email. For another, the increase was less dramatic but still significant, from 9% to 14%. Both tests used large samples, so it’s unlikely that random noise is causing the effect. In both cases, about 60% of those giving feedback elected to do so via email.
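For those who want to check the “not random noise” claim, a standard two-proportion z-test is the relevant arithmetic. The sample sizes in the sketch below are hypothetical placeholders (the actual counts aren’t reported here), so treat it as a way to run the same check on your own numbers rather than a reproduction of our data.

```python
# Minimal sketch of a two-proportion z-test; sample sizes are hypothetical.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return the z statistic and two-sided p-value for a difference in proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)          # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error under H0
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical: 1,000 signups per arm, 36% vs. 4% response (first client)
print(two_proportion_z(360, 1000, 40, 1000))
# Hypothetical: 2,000 signups per arm, 14% vs. 9% response (second client)
print(two_proportion_z(280, 2000, 180, 2000))
```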
HOWEVER… this test demands and deserves more exploration. Did we increase response because message repetition increased mental attention? Or did offering the convenience and choice of multiple response channels create greater compliance because respondents had more, and higher-quality, motivation?
Knowing Why Something Happened Is Much More Valuable than Knowing What Happened
Testing can be set up to include surveys that measure need satisfaction and motivation quality across the various testing scenarios. The reason to do this is that knowing why something happened delivers greater ROI than observing what happened and declaring victory. If we know why, we can use that learning to improve on what happened, or apply the insight to other interactions or parts of the journey.
If the only learning is what happened, then we may think sending the same message through three channels is a good idea. And the only thing better than three channels is four, or five… or the same message sent four times instead of twice through the same two channels, or any other permutation of “more is better” thinking. More always has diminishing returns. Always.
The other test wrinkle: what if we send SMS, email, or both based on the preference expressed at signup instead of trying to infer it from behavior? And what if the ask tells the respondent the message is being sent through the channel that matches their preference? The human desire for consistency and reciprocity may interact with that channel preference to supercharge response well beyond what we observed with the “more is better” approach.
Where does all this leave us?
It leaves us with more questions than answers. But they are good questions to have and to get answered. So take the “winning” test as is and use it if you must, but you and your donors will likely be better served if it’s treated as a starting point, not an end.
Kevin
Interesting study. I had a question that popped up as I was reading. Privacy rules regarding SMS are pretty strict in some places. I’m curious to know if the same “SMS permission” existed in both the test and control groups. If all donors in the test group had given explicit permission to receive SMS messaging at some point in the relationship, that would indicate an elevated intimacy with the organization. If the control group had even a smattering of non-SMS-permission donors in the mix, that could very well introduce a bias. Was SMS permission level normalized across both groups?
Tom, it was controlled for across the two conditions because otherwise the exact issue you describe would confound things. Thanks for the read and comment.
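For anyone curious what “controlled for” can look like in practice, here is a minimal, hypothetical sketch of stratified random assignment by SMS-permission status. It is illustrative only, not the actual assignment procedure used in these tests.

```python
# Illustrative only: stratify random assignment so each arm gets the same
# mix of SMS-permissioned and non-permissioned donors.
import random
from collections import defaultdict

def stratified_assign(donors, strata_key, arms=("control", "test"), seed=42):
    """Randomly assign donors to arms within each stratum (e.g., SMS opt-in)."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for donor in donors:
        groups[strata_key(donor)].append(donor)
    assignment = {}
    for members in groups.values():
        rng.shuffle(members)
        for i, donor in enumerate(members):
            assignment[donor["id"]] = arms[i % len(arms)]
    return assignment

# Hypothetical donor records with an 'sms_opt_in' flag
donors = [{"id": i, "sms_opt_in": i % 3 == 0} for i in range(12)]
print(stratified_assign(donors, strata_key=lambda d: d["sms_opt_in"]))
```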