1:1 Success Through Measurement
When we think about success with 1:1 personalized print, we often think about data.
How much data do we have? How clean is it? How is it used? Rarely do we ask one of the most important questions: How do we measure results? If you don’t measure results, you don’t know to what extent those results are due to the campaign or to something else. You don’t know which elements of the campaign work and which don’t. If you don’t know what is most effective, you don’t know how to improve the campaign down the road. In other words, you could just be wasting your money.
This is the kind of critical intelligence that will help you refine your programs for maximum effectiveness.
What should you measure?
Start with your costs.
This means campaign development, graphic design, list acquisition, data manipulation, production, mailing—measure it all.
This is the only way to analyze your true ROI. Otherwise, you’re just guessing.
On a 1,000-piece campaign selling high-end housewares, for example, you might get an 18% response rate and an average per-order sale of $125, but by the time you add in the costs to develop the program, build and clean your list, and print and mail the pieces, you might barely break even. On the other hand, if you are a Lexus dealer, perhaps all you need to do is sell one vehicle and you’ve knocked it out of the park.
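The break-even arithmetic in the housewares example can be sketched in a few lines. The response rate and average order value come from the example above; the individual cost figures are hypothetical placeholders, invented purely to show how the calculation works:

```python
# ROI sketch for the 1,000-piece housewares example.
# Response rate and average order value are from the example;
# the cost line items are hypothetical placeholders.
pieces = 1_000
response_rate = 0.18          # 18% response
avg_order = 125.00            # $125 average per-order sale

revenue = pieces * response_rate * avg_order  # $22,500

costs = {
    "campaign_development": 6_000,
    "graphic_design": 3_500,
    "list_acquisition_and_cleanup": 4_000,
    "production_and_mailing": 8_500,
}
total_cost = sum(costs.values())              # $22,000

roi = (revenue - total_cost) / total_cost
print(f"Revenue: ${revenue:,.0f}  Cost: ${total_cost:,.0f}  ROI: {roi:.1%}")
```

With those assumed costs, a healthy-looking 18% response nets only a 2.3% return, which is exactly the "barely break even" scenario: the point is that you cannot see this without totaling every cost.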
Not only is measurement necessary to gauge ROI, but it will also give you important intelligence about future campaigns. This intelligence will help you design programs and adjust incentives, not based on your gut feeling, but on real data.
Say you give respondents a chance to win a sweepstakes for $500 if they log into a Web site and fill out a survey. You know that this campaign generates a 5% response rate, with 28% of those responses converting to sales of $200 each. Now start asking questions. What happens if you increase the incentive to $2,500? Does the response rate go up? If so, does the dollar per sale increase, as well? Does it generate a 2:1 return? A 3:1 return? Or does it not affect the response rate or value per sale much at all? If you test and measure these things, you know how much an additional $2,000 investment is worth to you.
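One way to frame the $2,000 question is to compare incremental revenue against incremental incentive cost. The baseline figures below are from the example above; the lifted response rate for the $2,500 test cell is an assumed number, included only to show the arithmetic you would run on your real test results:

```python
def campaign_revenue(pieces, response_rate, conversion_rate, avg_sale):
    """Expected revenue from a mail campaign with a two-step funnel."""
    return pieces * response_rate * conversion_rate * avg_sale

pieces = 1_000

# Baseline from the example: 5% response, 28% convert, $200 per sale.
baseline = campaign_revenue(pieces, 0.05, 0.28, 200)   # $2,800

# Hypothetical test cell: suppose the $2,500 sweepstakes lifts
# response to 9% (an assumed figure, not from the example).
test_cell = campaign_revenue(pieces, 0.09, 0.28, 200)  # $5,040

extra_incentive_cost = 2_500 - 500                     # the additional $2,000
incremental_revenue = test_cell - baseline             # $2,240
return_ratio = incremental_revenue / extra_incentive_cost
print(f"Incremental revenue: ${incremental_revenue:,.0f} "
      f"per ${extra_incentive_cost:,} of added incentive "
      f"({return_ratio:.2f}:1)")
```

Under these assumed numbers the bigger prize returns only about 1.12:1 on the extra $2,000, well short of a 2:1 or 3:1 target; a measured test tells you whether your real lift clears that bar.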
Don’t stop at one or even two tests. Continue to analyze over time. Break each campaign into multiple test groups, if necessary. For example, if you continue to increase the incentive, does the response rate continue to go up? Or does it flatten out? Does the effectiveness of the incentive change based on the audience you are targeting? Does a sweepstakes to win a free mountain bike motivate one audience, while a Nintendo Wii motivates another?
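Breaking a campaign into audience-by-incentive test cells, as described above, amounts to a simple comparison grid. The figures below are invented solely to illustrate the analysis; your own test results would take their place:

```python
# Hypothetical results from a multi-cell test: each cell pairs an
# audience segment with an incentive. All numbers are invented for
# illustration, not real campaign data.
results = [
    # (audience, incentive, response_rate)
    ("outdoor enthusiasts", "mountain bike", 0.07),
    ("outdoor enthusiasts", "Nintendo Wii",  0.03),
    ("young families",      "mountain bike", 0.02),
    ("young families",      "Nintendo Wii",  0.06),
]

# Find the best-performing incentive for each audience segment.
best = {}
for audience, incentive, rate in results:
    if audience not in best or rate > best[audience][1]:
        best[audience] = (incentive, rate)

for audience, (incentive, rate) in best.items():
    print(f"{audience}: best incentive is {incentive} ({rate:.0%} response)")
```

In this made-up grid, the mountain bike wins with one audience and the Wii with the other, which is precisely the kind of segment-level insight that only emerges when each cell is measured separately.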