
How To Screw Up Your A/B Test Like A Rookie

A/B testing software has granted modern marketers superpowers.

Using tools like Unbounce to create landing pages on the fly or marketing automation suites to test CTAs across channels, marketers have turned into scientists. These tools are great because they take care of all the magic, like splitting the traffic and picking a winning variation.

But once in a while, you have to take matters into your own hands. You'll want to test something that’s out of reach of your A/B testing software, like offline marketing activities or changes inside of your product or application. There are many ways to screw up your test and you don’t want to be that person.

Here are five common mistakes you want to avoid.

1. You don't know what you are optimizing for

How can you measure the success of your experiment if you don’t know what you’re optimizing for? Is your goal to increase the click-through rate, or is it something else, like improving social sharing? Having a specific goal and a clear idea of what a conclusive or an inconclusive test will look like is the basis of an A/B test. Don’t start building a variation without that goal in mind, and make sure all the tracking is in place to measure it.

2. You’re changing more than one variable

Sure, multivariate testing is a thing. But it doesn’t involve the same processes and resources as a straight A/B test. If you’re looking to test only one variable, keep everything else unchanged. Every extra tweak you add to the mix requires running several variations simultaneously, plus a hell of a lot more traffic to determine the winning variation.

3. Your samples are too small

Speaking of traffic, another mistake that rookies make is to test with very small samples. Not only might you end up with an inconclusive test, but even worse, you might pick the wrong winner and make bad decisions. But how big of a sample is big enough? It depends on the current success rate of your metric and on the minimum improvement that you would like to detect. The bigger the improvement you’re looking for, the smaller your sample needs to be. Use a sample size calculator and you will be covered!
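If you'd rather see what a calculator does under the hood, here is a minimal sketch using the standard normal-approximation formula for comparing two proportions (the z-constants below correspond to the common defaults of a two-sided 95% confidence level and 80% power; the function name and example rates are just illustrative):

```python
from math import sqrt, ceil

def sample_size_per_group(baseline, lift):
    """Approximate visitors needed per variation to detect `lift`.

    baseline: current conversion rate, e.g. 0.05 for 5%
    lift:     minimum absolute improvement to detect, e.g. 0.01

    Uses the classic two-proportion z-test formula with
    alpha = 0.05 (two-sided) and 80% power baked in.
    """
    z_alpha = 1.959964  # two-sided 95% confidence
    z_beta = 0.841621   # 80% power
    p1, p2 = baseline, baseline + lift
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / lift ** 2)

# Detecting a jump from a 5% to a 6% click-through rate
# takes roughly 8,000 visitors in EACH variation:
print(sample_size_per_group(0.05, 0.01))
```

Notice how the required sample shrinks as the lift you're hunting for grows: detecting a two-point jump instead of a one-point jump needs only about a quarter of the traffic.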

4. Your groups are not random

There are numerous ways to come up with random groups but there are also many ways to do it wrong. If you are using a number that’s been assigned chronologically (like a customer ID), don’t make the mistake of splitting the group right in the middle. Older accounts will all end up in the same sample. You should use even and odd numbers instead. Same goes for names sorted in alphabetical order because they may hide demographic bias. Rely on variables that are 100% random to build your samples.
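One simple way to get assignment that ignores chronological or alphabetical ordering is to hash the customer ID and split on the result. This is a sketch, not a prescription; the experiment name used as a salt here is a made-up example:

```python
import hashlib

def assign_group(customer_id, experiment="homepage-cta-test"):
    """Deterministically assign a customer to group 'A' or 'B'.

    Hashing the ID (salted with the experiment name so different
    tests get different splits) scrambles any ordering in the IDs,
    so old and new accounts spread evenly across both groups.
    """
    digest = hashlib.md5(f"{experiment}:{customer_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same customer always lands in the same group,
# which keeps their experience consistent across visits:
print(assign_group(10001))
```

A nice side effect of this approach is that you don't need to store the assignment anywhere; you can recompute it on the fly whenever the customer shows up.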

5. You’re not keeping a record of your A/B test

Whenever you run a test you need to keep track of the results, otherwise you tested in vain. Your memory is not as good as you may believe, especially when it comes to very small tweaks that you make on a daily basis. Your records must include a description of the experiment, the dates and the conclusion, if any. You will save a lot of time by not running the same A/B tests twice.

About the Author

Francois Mathieu is a Marketing Consultant and Entrepreneur based in Toronto, Canada. He is the Co-Founder of <a href="https://hojicha.co/">Hōjicha Co.</a>, a specialty tea distributor and retailer.
