We’ve preached the importance of continuous online testing and optimization. But it really hit home in 2011, when testing produced a 209% improvement in conversion rate for one of our clients.
The first important step was identifying an area for testing where a lift in results would make a real difference. We settled on the donation experience on the website.
Once we knew what area we wanted to test and optimize, we knew we had to get three things right:
- Technology. The best test idea in the world is useless if we don’t have the platform in place to be able to serve multiple versions of a page, dynamically splitting traffic among the test panels and tracking the results.
- Methodology. After identifying an area for optimization, it takes careful planning and expertise to ensure that you are testing the right things, in the right way. In our case, we identified 15 different variables that were eventually grouped into what we call variable clusters, resulting in three test panels tested against the control (the existing donation experience).
- Implementation. You can have a bullet-proof methodology and the best platforms in the world, but it’s still surprisingly easy to ruin everything with even the smallest mistake in implementation. Worst case? One flaw in implementation completely invalidates your results and you unknowingly choose a “winner” that in fact is worse than the alternatives.
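To make the technology piece above concrete, here is a minimal sketch of one common way to split traffic across test panels: deterministic bucketing by visitor ID. The panel names and function are illustrative, not our actual platform; real testing tools handle this (plus tracking) for you.

```python
import hashlib

# Hypothetical panel names: a control plus three test panels,
# mirroring the setup described above.
PANELS = ["control", "panel_a", "panel_b", "panel_c"]

def assign_panel(visitor_id: str) -> str:
    """Deterministically assign a visitor to one of the panels.

    Hashing the visitor ID (e.g. a cookie value) means the same
    visitor always sees the same version of the page, while traffic
    splits roughly evenly across the panels.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(PANELS)
    return PANELS[bucket]
```

The key property is consistency: a returning donor never flips between versions mid-test, which would muddy the results.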
And the important thing is we aren’t stopping at 209%. We know there is more to test, more to improve and so we’re embarking on our next round of optimization.
If I could impress one thing on you here — it’s that you should be consistently testing and optimizing.
If you’re a Masterworks client, talk to your MW team about how to do this — I’d be happy to help. If you work with another agency — talk to them and see if they have the expertise to help you. As I mentioned above, it’s important to make sure you are setting up the right tests, and setting them up in the right way.
I’ll leave you with some final lessons related to testing and optimization online:
- Decide whether you are going for breakthrough or incremental improvement. Either might be called for, depending on the situation, but it does affect how many variables you can reliably test at any given time.
- You can test fewer variables than you might think. Statistical significance is critical, and the more things you test, the more traffic you need. You might find you only have enough traffic to test a single variation against the control.
- Set up a testing platform that makes rapid iteration and testing possible. If you are going to invest in testing, make sure it’s in a way that lets you keep testing. Remember: the goal is consistent testing and optimization.
- Calculate statistical validity. So important! As I mentioned earlier, it is surprisingly easy to declare a “winner” that actually pulls your results down, and the easiest way to make that mistake is to decide before you have enough data for a statistically valid result.
- Watch out for other potential testing errors. We don’t have time to delve into these in this post, but testing flaws include invalid testing methodology, historical effects, instrumentation effects, selection effects and sampling distortion.
- Don’t stop when you get a breakthrough — keep testing!
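As a rough illustration of the statistical-validity point above, here is a minimal sketch of a two-proportion z-test, a standard way to check whether two panels’ conversion rates really differ. The function name and the numbers in the usage note are my own examples, not client data.

```python
from math import sqrt, erf

def conversion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates.

    conv_a / n_a: conversions and visitors for panel A (e.g. the control)
    conv_b / n_b: conversions and visitors for panel B
    A common rule of thumb is to declare a winner only when p < 0.05.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
```

For example, 40 conversions out of 1,000 visitors versus 70 out of 1,000 gives a p-value well under 0.05, while the same rates at 100 visitors per panel (4 vs. 7 conversions) do not. That is the traffic point from earlier in action: identical conversion rates, but only the larger sample supports a decision.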
Do you have any interesting test results you’d like to share, or something you’ve been thinking about testing? Leave a comment and let’s chat!