In our day job we deal with a lot of data, and with people, organisations, and companies who want to turn that data into actionable insights. Inevitably this leads to a debate about how to run tests and prove results.
There is a right way and a wrong way to do this. This post explains how you can run tests and ensure they are statistically significant, and what to do when your volume of data is too low to prove significance.
Or in plain English:
“Statistical significance helps quantify whether a result is likely due to chance or to some factor of interest.” (Thomas Redman, Data Driven: Profiting from Your Most Important Business Asset)
If you are making critical changes to your website, you need to know that the impact you are having is real and not just the result of chance. A/B testing has become the mot du jour, and whilst it can be incredibly powerful, it does come with some risks. To be clear, we are huge proponents of A/B and multivariate testing when done properly.
When running A/B tests, or any other kind of test, two key variables determine statistical significance: sample size and effect size.
Sample size refers to how large the sample for your experiment is. The larger your sample size, the more confident you can be in the result of the experiment (assuming that it is a randomised sample). If you are running tests on a website, the more traffic your site receives, the sooner you will have enough data to determine if there is a statistically significant result.
The second factor is effect size. If the effect size is small (say, a 0.1% increase in conversion rate), you will need a very large sample size to determine whether that difference is significant or just due to chance. If you observe a very large effect, however, you will be able to validate it with a smaller sample size to a higher degree of confidence. (Hat tip to Optimizely’s Optimization Glossary for the great explanation.)
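To make the interplay between sample size and effect size concrete, here is a minimal sketch of the standard two-proportion z-test that most A/B test calculators run under the hood. The visitor and conversion numbers are made up for illustration; only the Python standard library is used.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    conv_a, conv_b: conversions in variant A and variant B.
    n_a, n_b: visitors in variant A and variant B.
    Returns (z statistic, p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - normal_cdf(abs(z)))
    return z, p_value

# A large effect (10% vs 13%) is significant at 2,000 visitors per variant...
z, p = two_proportion_z_test(200, 2000, 260, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05

# ...while a tiny effect (10.0% vs 10.1%) is nowhere near, with the same traffic.
z, p = two_proportion_z_test(200, 2000, 202, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p far above 0.05
```

At a conventional threshold of p < 0.05, the first test would be called significant and the second would not, which is exactly the trade-off described above.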
So if you have a large sample size, it is easier to run your experiments and reach statistical significance. But what do you do when your sample is small (i.e. you have tried Evan’s Awesome A/B Tools and come up with no conclusive result)?
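You can estimate in advance whether your traffic is anywhere near sufficient. This is a rough sketch of the standard sample-size formula for comparing two proportions (95% confidence, 80% power); the baseline rates and lifts are illustrative assumptions, not figures from the post.

```python
from math import ceil

def required_sample_size(baseline, mde, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed *per variant* to detect an absolute lift.

    baseline: current conversion rate (e.g. 0.10 for 10%).
    mde: minimum detectable effect, absolute (e.g. 0.001 for +0.1 points).
    Defaults correspond to a 95% confidence level and 80% power.
    """
    p1, p2 = baseline, baseline + mde
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2) * variance / (mde ** 2)
    return ceil(n)

# Detecting a 0.1-point lift on a 10% baseline takes an enormous sample...
print(required_sample_size(0.10, 0.001))  # ~1.4 million visitors per variant

# ...while a 3-point lift is detectable with modest traffic.
print(required_sample_size(0.10, 0.03))   # ~1,800 visitors per variant
```

If the number this spits out dwarfs your monthly traffic, the test cannot reach significance in any reasonable timeframe, which is exactly when the advice below applies.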
When your sample size is too small to prove significance you should not be focusing on things like A/B testing.
You should be:
- talking to your customers to find out what they want,
- focusing on the user experience and sign-up flows from a top-level view, rather than on whether the sign-up button should be blue or green,
- working out how you can acquire and convert more users cost-effectively and repeatably,
- looking for the quick wins, and then, once you have the traffic levels needed to prove significance, getting into the nitty-gritty of A/B testing.
Useful tools you should check out:
- Evan’s Awesome A/B Tools
- VWO A/B test calculator
- Optimizely’s Optimization Glossary
- Loyalty Bay — we provide a wide range of rewards and incentive solutions that reliably increase conversion rates (full disclosure: I founded the company, but I do think we can help 😉).