Your website's conversion rate keeps falling despite everything you have tried to stop the slide. You are out of ideas. Could split testing itself be the culprit?
It can be. Split testing promises to boost revenue and lift conversions, yet it often fails to deliver in the real world. The idea itself is sound: it lets you learn your customers' preferences and offer the best possible user experience. Most split tests, however, fail to give the whole picture. They don't tell you what is causing the problem or how to fix it.
The myth of split testing
The basic idea behind split testing is simple: create two or more versions of the same page and divide the traffic among them. But this is also where the flaws of the system are exposed. It has often been found that two split testing tools give starkly different results from the same set of data. Sherice Jacob, a well-known conversion specialist, writes on the Kissmetrics blog: “This illustrates a major issue in split testing – which is that you can predict large, grandiose upticks in performance all you want – but if those don’t translate into actionable steps that have a direct effect on revenues, they might as well be useless."
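The traffic-splitting mechanic itself can be sketched in a few lines: hash each visitor's ID so the same person always lands in the same variant. The function name, experiment name, and variant labels below are illustrative, not any particular tool's API.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically assign a visitor to a variant by hashing.

    Hashing the experiment name together with the user ID keeps the
    split stable per experiment: the same visitor always sees the
    same version of the page.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Hypothetical IDs; with a good hash, roughly half of all visitors
# land in each of two variants.
print(assign_variant("user-42", "homepage-headline"))
```

Deterministic hashing (rather than random assignment on each visit) matters because a visitor who flips between versions mid-test contaminates both samples.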
Many conversion executives like the one-tailed test, which is a big mistake. A one-tailed test looks for an effect in only one direction, usually an improvement, so it often reports big wins. That sounds great for the site, and it also helps the job security of the conversion specialist. But there is a problem: it ignores statistically significant effects in the other direction, and that makes the conclusion flawed.
Why two-tailed tests are better
A two-tailed test, on the other hand, accounts for both positive and negative outcomes of a testing process. Certainly, you need more data to reach a final verdict, but it works more reliably in the real world. This does not mean that the two-tailed test is the only way to go, but it generally gives more trustworthy results than a one-tailed test.
Importance of statistical significance
The problem with most split tests is that they tilt toward a winner even before data collection is complete, because teams peek at interim results and stop as soon as a variation looks ahead. When a split test declares a winner at 95% statistical significance, that number sounds convincing. But to sustain the result, the test needs a much larger traffic sample than the one that triggered the early call.
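A quick way to see how much traffic a trustworthy test actually needs is the standard sample-size formula for comparing two proportions. The baseline rate, lift, and power values below are hypothetical illustrations, not recommendations.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p_base, lift, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect a relative lift.

    Uses the textbook formula for a two-sided two-proportion test:
    n = (z_{alpha/2} + z_{beta})^2 * (var_A + var_B) / (p_B - p_A)^2
    """
    p_var = p_base * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided threshold
    z_beta = NormalDist().inv_cdf(power)            # desired power
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    n = (z_alpha + z_beta) ** 2 * variance / (p_var - p_base) ** 2
    return ceil(n)

# Detecting a 10% relative lift on a 2% baseline conversion rate
# takes tens of thousands of visitors per variant.
print(sample_size_per_variant(0.02, 0.10))
```

Small lifts on low baseline rates demand surprisingly large samples, which is why a test stopped after a few thousand visitors so often fails to hold up.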
What is the way out?
All of this may sound discouraging, but there are ways to make split tests work for you and deliver genuine results. Use a split testing tool that integrates well with your analytics platform, so that test outcomes can be tied back to actual revenue. Then create a new report, select your experiment, and run the test.
Many split tests that promise a big lift in conversions turn out to be illusory, but running a test for longer helps you get more reliable results. Understanding the limitations of split testing matters, because those limitations skew the results you end up with. Avoiding pitfalls such as choosing a one-tailed test or running a test on too small a sample will help you get the outcome you are after.