Designers and developers often face a difficult decision: what is the optimal way for users to carry out a desired action on our website? Wouldn't it be great if we could test all potential solutions to a problem and see which one comes out on top?
A/B testing, also called split testing, allows us to do just that. It enables us to run real-time experiments between multiple versions of a page element and determine, based on statistics, which one performs best.
When talking about websites, that usually means we want a page that can display different versions of the same content. This gives us the opportunity to track the percentage of visitors who perform a desired action (also called the "conversion rate").
The newsletter signup example
For example, let's say we want to increase the number of people signing up for our newsletter. To achieve this, we decide to implement a sexy new signup form which we would like to position at the end of every article on our blog. Our designers make two versions of the form, and we would like to see which one is more effective, i.e. which is more likely to get people to click the "Sign up" button.
The traditional way of doing this would be to:
- publish the first version of the form
- track the conversion rate for a month or two
- publish the second version of the form
- track the conversion rate in the following period
- finally compare the data for the two forms and see which comes out on top
This would consume a lot of time and, even if there were a significant difference between the two versions, it could be due to some other change on our website. For example, after switching to the second form, we might publish a very popular blog post, which could have a big impact on the conversion data.
A/B testing in action
But what if we could do this testing side by side? Imagine a system that automatically and randomly shows the old signup form to 50% of visitors and the shiny new one to the other half. This is exactly what A/B testing is all about.
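The random 50/50 split described above can be sketched in a few lines. This is a minimal illustration, not a production implementation; the function name is our own:

```typescript
// Hypothetical helper: randomly assign a visitor to variant "A" or "B".
// Math.random() returns a float in [0, 1), so each variant is chosen
// for roughly half of the visitors.
function assignVariant(): "A" | "B" {
  return Math.random() < 0.5 ? "A" : "B";
}
```

Over a large number of visitors, the two groups end up roughly the same size, which is what makes their conversion rates directly comparable.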
We used the newsletter signup form example to see split testing in action. Our designers suggested two versions of the subscription form shown at the bottom of every blog post.
After the user visits the page, a cookie is stored. This is important because it makes the whole system invisible from the user's perspective: after every page refresh, the user sees the same version of the form (unless they delete their cookies). At the bottom of this blog post, you can check which version you were served.
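The "sticky" behaviour can be sketched as a small function: reuse the variant stored in the visitor's cookie if one exists, otherwise pick one at random and persist it. The cookie name `ab_variant` and the `CookieStore` interface are assumptions made for the sake of the example:

```typescript
type Variant = "A" | "B";

// Minimal abstraction over cookie storage so the logic is testable
// outside a browser; in a real page this would wrap document.cookie.
interface CookieStore {
  get(name: string): string | undefined;
  set(name: string, value: string): void;
}

function getStickyVariant(cookies: CookieStore): Variant {
  const stored = cookies.get("ab_variant");
  if (stored === "A" || stored === "B") {
    return stored; // same form on every page refresh
  }
  const variant: Variant = Math.random() < 0.5 ? "A" : "B";
  cookies.set("ab_variant", variant); // persist so the choice survives reloads
  return variant;
}
```

Because the assignment is stored client-side, the visitor never notices the experiment: they simply see "their" form every time.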
Analyzing the collected data
In the report, we can see the number of participants and the number of newsletter subscriptions for each version. That's all the data we need to calculate statistically significant results. The results show that version A performs 21% better, so it seems reasonable to stick with that version.
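To show how little data this calculation needs, here is a sketch with made-up numbers (the participant and subscription counts below are hypothetical, chosen so that version A's relative lift works out to 21%). The significance check is the standard two-proportion z-test:

```typescript
function conversionRate(conversions: number, participants: number): number {
  return conversions / participants;
}

// Two-proportion z-score, using the pooled standard error under the null
// hypothesis that both versions convert at the same underlying rate.
function zScore(convA: number, nA: number, convB: number, nB: number): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const pooled = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  return (pA - pB) / se;
}

// Hypothetical report data: 10,000 participants per version.
const rateA = conversionRate(605, 10000); // 6.05%
const rateB = conversionRate(500, 10000); // 5.00%
const lift = (rateA - rateB) / rateB;     // relative lift of A over B: 0.21
const z = zScore(605, 10000, 500, 10000); // |z| > 1.96 means significant at 95%
```

With these sample sizes the z-score comes out well above 1.96, so the 21% lift would be statistically significant at the 95% confidence level; with far fewer participants, the same lift could easily be noise.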
Tools for doing A/B testing
A/B testing tools generally come in two flavours: hosted web services and developer tools (libraries). Both allow us to define an unlimited number of experiments, with multiple versions per experiment, and provide us with detailed reports.
The main difference between the two approaches is that web services are designed for people who don't know how to code. Web services also give us more control over data interpretation, results filtering, and so on.
Developer tools, on the other hand, offer more control over the experiment's implementation and are more customizable. For example, we can define custom conditions for when to show each version and when to start tracking.
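Such custom conditions might look like the following sketch. Everything here is an assumption for illustration: the function names, the `/blog/` path prefix, and the idea of gating tracking on form visibility are our own, not part of any particular tool's API:

```typescript
// Hypothetical page context passed to the experiment's condition checks.
interface PageContext {
  path: string;         // e.g. "/blog/ab-testing-in-action"
  formVisible: boolean; // has the signup form scrolled into view?
}

// Only run the experiment on blog-post pages.
function shouldShowExperiment(ctx: PageContext): boolean {
  return ctx.path.startsWith("/blog/");
}

// Only count a visitor as a participant once the form was actually seen;
// otherwise visitors who never scroll down would dilute the conversion rate.
function shouldStartTracking(ctx: PageContext): boolean {
  return shouldShowExperiment(ctx) && ctx.formVisible;
}
```

Separating "show the experiment" from "start tracking" is exactly the kind of fine-grained control that is hard to get from a point-and-click web service.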
We are all aware of the value of customer insight. A/B testing allows us to avoid annoying surveys and questionnaires about our users' preferences and focus on gathering and analyzing empirical data instead.
While simple A/B testing methods have the drawbacks mentioned above, there are tools that solve these problems and provide us with a reliable method to improve our business, or the business of our clients.
We've been using A/B testing to improve the business of our clients. If you're interested in how we can help you, contact us.