This post is my response to the article "11 Worst A/B Testing Mistakes According to Experts", published by Usability Tools via Medium.
I have always found the benefits of A/B testing questionable. Even when you manage to frame the right hypothesis for running one, you are still failing to consider many other variables. The same person could feel differently about the same page depending on what they ate for breakfast that day, let alone connection speeds or any of the many other factors that shape whether the experience engages and converts.
The other issue is volume. Making a sensible decision requires enough data, and that relies on your site generating huge traffic; otherwise you just have an "8 out of 10 cats" scenario, and that is not good enough. Think of the number of times you have hit Amazon to find the index radically changed, or the many nuanced tests run daily on eBay, Facebook, or countless other retailers. They run them so they can make decisions on big datasets. And you can't simply rely on the logic of "we made a button green and it got 20% more clicks than the red one". It's not that simple: you have to think about entrance paths, exits, and the other things viewed within the session. There's really no such thing as a simple A/B test; a "simple" one is just one done with little thought for good research.
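To put a rough number on "huge traffic", here is a minimal sketch of the standard two-proportion z-test sample-size calculation. The 3% baseline conversion rate and 20% relative lift are illustrative assumptions, not figures from the article:

```python
from statistics import NormalDist

def sample_size_per_arm(p1: float, p2: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed in EACH variant to detect a change in
    conversion rate from p1 to p2, via the usual two-proportion z-test
    sample-size formula."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z.inv_cdf(power)            # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return round((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# A 20% relative lift on a 3% baseline (3.0% -> 3.6%) needs roughly
# 14,000 visitors per variant before the result means anything.
print(sample_size_per_arm(0.03, 0.036))
```

On those assumptions you need tens of thousands of visitors just to trust a single button-colour result, which is why only the Amazons and eBays of the world can run these tests casually.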
It is also a great way for a lot of UX designers to cream some cash every month, and I see it all the time: those who offer to run an A/B test for a few thousand a month just to tell you whether their idea was worth paying them for in the first place. I say: be right in your primary decision, and base it on research you have done to reach that conclusion. Do in-person user testing before you build, and you will get the same results from far smaller numbers; then launch it, and if it doesn't work, change it.
A/B testing is designed to deliver marginal gains, and unless your marginal gain equates to £100,000s, it is simply not worth it.