Say there’s a big chocolate cake you bring to all your social gatherings. It’s a classic, a hit. And of course you don’t want to stray from what seems to be a winner. But what if, dare I ask, you could make it better?
After the shock wears off (how could anything be better than your delicious masterpiece!) consider that the same could be said for the mail you send off into the world. What’s worked in the past for you—and, more importantly, for your recipients—may only work to some extent. And you’ll probably never know what else could work best if you don’t put your original piece through some rigorous testing.
Enter A/B testing: pitting your original “winning” piece against an altered version to see what garners a better response. You’ve heard of this before; it’s as basic as it gets. But, simple as it may seem, this experimentation requires a lot of thought and strategy and can be done in such a way that ends with misleading or uninformative results. So sit back, take a piece of your chocolate cake, and enjoy the following guide on how best to execute A/B testing—and how to avoid the pitfalls that may burn you.
Ambitious is Not Always Delicious
Let’s go back to your cake for a minute and take a slice. It’s got some nice milk chocolate frosting, blue icing along the top edges, and a springy yellow cake with some chocolate marbling going on. Just elegant. What could be wrong with it?
Well, that’s actually a pretty decent question to start with. Even though no one has explicitly told you there’s anything wrong with it, you might be able to imagine the half-eaten cake still on some people’s plates, or the guests who held up a hand and politely said “no, thank you.” Could it be the flavor of the chocolate? The color of the icing? The consistency of the cake? Or maybe it’s not even the cake, but the time of day you served it. Were they not hungry enough for it? Once you come up with a strong hypothesis to test (“If I present the people with green icing instead of blue, the response will increase”), you’ll be on the right track.
Stepping back into the mail world, you might look at a letter package and ask yourself similar questions. Is the envelope working? The headline? The offer, the paper, the color? It’s easy to get carried away and start retooling multiple factors at once to see if people respond better to something totally different. But that’s where the first major problems with your testing may arise.
What you’re looking for in A/B testing is targeted insight. Changing one major aspect and testing it against the control will allow you to know exactly what your recipients respond best to. It will be the difference between seeing they also like lemon meringue pie, and knowing that when you serve chocolate cake, they’d much prefer dark chocolate. With this insight you will not only help better develop the present control, but future packages as well.
Serving Size: 1,000 to 10,000
So after you pick the element you want to change and revise accordingly, you’ll of course need to think about sending your experiment out into the world. For some, it may be tempting to use a small sample size for the purposes of time and cost. However, if the quantity you send out is too small, your results won’t reach statistical significance, and you won’t be able to tell a real improvement from random noise.
The general rule of thumb is to test with a quantity equal to 20% of your total volume. Of course, if these numbers are unrealistic for your budget, a smaller sample size as low as 1,000 can still yield meaningful results, especially if the test is repeated over time.
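For those who like to check the math before licking the spoon, here’s one rough way to size a test. This is a sketch using the standard normal-approximation formula at 95% confidence and 80% power; the response rates plugged in at the bottom are purely illustrative, not figures from any real campaign:

```python
import math

def sample_size_per_variant(p_control, p_variant, z_alpha=1.96, z_beta=0.84):
    """Approximate mailings needed per variant to reliably detect the
    given lift in response rate (normal approximation; the default z
    values correspond to 95% confidence and 80% power)."""
    p_bar = (p_control + p_variant) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_control * (1 - p_control)
                                      + p_variant * (1 - p_variant))) ** 2
    return math.ceil(numerator / (p_control - p_variant) ** 2)

# Illustrative numbers: a 2% control response rate vs. a hoped-for 3%.
print(sample_size_per_variant(0.02, 0.03))  # prints 3821
```

Notice how the math lines up with the rule of thumb above: detecting a one-point lift on a 2% response rate takes close to 4,000 pieces per variant, while a bigger expected lift needs far fewer.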
Trust Your Taste Test, Not Your Taste Buds
Once you analyze the numbers, you may find the results go against your gut feelings. (Why would more people go for plain yellow cake as opposed to beautiful marbling?!) Don’t let that old adage “Trust your gut” get the best of you. If your findings run counter to your initial thoughts about what works best, throw your uninformed gut out the door and let the numbers do the talking. And if your findings do happen to reflect your instincts, don’t expect that every time.
Remember, we’re in the business of providing information in a way our audience wants to hear it, not the way we think we would want to hear it. This doesn’t mean our own feelings or intuitions are worthless; it just means our personal insights can only take us so far. This is why we test in the first place—to discover those elements that make our consumers want to continue the conversation by responding back in some way, whether it’s by signing up for a promotional offer, or partaking in a variation we mistakenly thought for sure wouldn’t win. If it’s what the people want, let them eat (plain yellow) cake!
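If you want the numbers to do the talking quite literally, a simple two-proportion z-test tells you whether a variant’s lift is likely real or just luck. The counts below are hypothetical, chosen only to make the example concrete:

```python
import math

def z_test_response_rates(resp_a, sent_a, resp_b, sent_b):
    """Two-proportion z-test: is the difference between two response
    rates real, or just noise? Returns the z-score and the two-sided
    p-value (small p means the difference is unlikely to be chance)."""
    p_a, p_b = resp_a / sent_a, resp_b / sent_b
    p_pool = (resp_a + resp_b) / (sent_a + sent_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return z, p_value

# Hypothetical results: the control pulled 110 responses from 5,000
# pieces; the plain-yellow variant pulled 152 from 5,000.
z, p = z_test_response_rates(110, 5000, 152, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05: the lift is probably real
```

With a p-value under the conventional 0.05 cutoff, you can set your gut aside with a clear conscience: the plain yellow cake really did win.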
Last Bite
It’s no accident that customers are also called “consumers.” They consume what they like and put aside what they don’t; they eat up some deals and spit out others. And it’s hard to discern what it is that appeals to them best if we aren’t constantly testing and tweaking. So we do it to see what works and what doesn’t. We do it to satisfy our curiosity. And most importantly, we do it to push for maximum response—that sweet spot of success.