So you are a great fan of A/B testing.
Your A/B test results have come in, but your team finds them inconclusive and is unsure what to do next. We all love clear results, since they are expected to bring more revenue, yet inconclusive results are a reality you have to accept. That makes the art of post-test analysis worth learning.
The four tips below will help you do exactly that:
- Check whether winners are indeed winners
Joel Harvey, a highly reputed expert in conversion science, says, “Post-test analysis is sort of a misnomer. A lot of analytics happens in the initial setup and throughout the full A/B testing process.”
This is largely true. You should verify that winners are actually winners by evaluating the core parameters: statistical significance, p-value, test length, delta size, and so on. Only if the result holds up can you expect the conversion boost to translate into traffic or leads in the real world.
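One common way to sanity-check a "winner" is a two-proportion z-test on the control and variant conversion rates. Below is a minimal sketch using only the standard library; the conversion counts are hypothetical, purely for illustration.

```python
from math import sqrt, erf

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, p_value

# Hypothetical results: control converted 500/10,000, variant 570/10,000
p_a, p_b, p = two_proportion_p_value(500, 10_000, 570, 10_000)
print(f"control={p_a:.2%} variant={p_b:.2%} p-value={p:.4f}")
```

If the p-value comes out above your chosen threshold (commonly 0.05), the "winner" may just be noise, and declaring it a success would be premature.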
- Avoid testing with small segment size
Even with the best process, errors can creep in when the sample size is inadequate or the A/B testing framework is not used properly. Peep Laja, a conversion rate expert, says, "First of all check whether there's enough sample size and that we can trust the outcome of the test. Now see the results of the test across key segments, provided that the results in the segments are valid."
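You can estimate the required sample size before (or after) the test with a standard power calculation for two proportions. The sketch below assumes 95% confidence and 80% power; the baseline rate and target lift are hypothetical.

```python
from math import sqrt, ceil

def min_sample_size(baseline, relative_lift, z_alpha=1.96, z_beta=0.84):
    """Per-variant sample size for a two-proportion test.

    z_alpha=1.96 corresponds to 95% confidence (two-sided),
    z_beta=0.84 corresponds to 80% power.
    """
    p1 = baseline
    p2 = baseline * (1 + relative_lift)        # expected variant rate
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# e.g. a 5% baseline conversion rate, hoping to detect a 10% relative lift
print(min_sample_size(0.05, 0.10))
```

Note how quickly the requirement grows as the lift you want to detect shrinks: subtle improvements demand far more traffic per variant than most teams expect.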
- Avoid analysis paralysis
Many marketers suffer from excessive analysis, which serves no constructive purpose. Slicing the results across too many segments or across different analytics tools is a mistake, as it can produce conflicting results. Revenue is usually the best metric for judging conversion changes. It is also important to understand how the tested changes affected each segment and the role those segments play in the customer's journey. Experts suggest recording results by segmenting data by industry, website type, geographic location, and other relevant parameters.
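One concrete reason over-slicing backfires: the more segments you test, the more likely one looks "significant" by pure chance. A minimal sketch of a Bonferroni correction, which tightens the significance threshold as segments multiply; the segment p-values below are hypothetical.

```python
# Hypothetical per-segment p-values from one A/B test
segment_p_values = {
    "mobile": 0.012,
    "desktop": 0.30,
    "returning": 0.04,
    "new": 0.18,
}

alpha = 0.05
# Bonferroni: divide the threshold by the number of comparisons
adjusted_alpha = alpha / len(segment_p_values)   # 0.05 / 4 = 0.0125

for segment, p in segment_p_values.items():
    verdict = "significant" if p < adjusted_alpha else "not significant"
    print(f"{segment}: p={p} -> {verdict} (threshold {adjusted_alpha})")
```

With the naive 0.05 threshold, two segments would look like winners; after correction, only one survives. That is the quantitative face of "analysis paralysis."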
- Look for unexpected effects
Results aren’t produced in a vacuum. Any change creates ripple effects throughout a website, and some of them are easy to miss. You may believe pop-ups drive conversions, but at what cost? How engaged are those visitors with the brand?
The purpose of the whole exercise is to answer one question: why did one variation win, and what does it tell us about our visitors? This is a collaborative process that demands deeper hypotheses and further testing.