A/B Testing for B2B Marketers
In this article we explore A/B testing for B2B marketers and why you might be wasting your time, or worse, believing the wrong outcomes.
What is A/B Testing?
A/B testing, also sometimes referred to as split testing or bucket testing, is a research methodology used to determine which of two variants is more likely to produce the optimal outcome.
In most cases the A version is the control (or often the live) variant and the B version is the test or “new” variant.
Traditionally, A/B testing is a very effective way for marketers to test and optimise all sorts of outcomes, including copy, conversion rates, and usability features or changes.
Quite simply, you put both variants live, split the traffic between them, and the version that performs best wins.
Why it doesn’t add up
The methodology is very compelling for marketers and we love the idea as much as anyone, but the hard truth is that more often than not, it’s simply not workable for B2B marketers.
The reason lies in the maths around statistical significance.
We’ll dig into the details of the mathematics behind this challenge in a future blog article (sign up for our newsletter here if you don’t want to miss it).
But here is a simple illustration of the problem:
If you had a campaign where the A variant resulted in 1200 conversions and the B variant resulted in 800 conversions, you would choose the A version as the winner. You could also implement that version with a high level of confidence that you are doing the right thing for the success of your marketing activity.
But let’s consider another scenario.
If your A version had three conversions and your B version had two conversions, you would have very low confidence in the outcome of the test.
Mathematically speaking, the ratio of the two scenarios is the same! A has performed 50% better than B in both tests. But I think we’d all agree that a 3 vs 2 “victory” doesn’t prove anything.
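The intuition above can be made concrete with a quick significance check. The sketch below is a minimal illustration, not a full testing framework: it assumes traffic was split evenly between the variants, so under "no real difference" the conversions should split roughly 50/50, and it uses a normal approximation (which is rough at tiny sample sizes, but that only reinforces the point).

```python
import math

def two_sided_p_value(conv_a, conv_b):
    """Rough test of whether conversions split 50/50 between two
    variants, assuming traffic was divided evenly between them.
    Uses a normal approximation to the binomial distribution."""
    n = conv_a + conv_b
    # Under "no real difference", expect n/2 conversions per variant,
    # with standard deviation sqrt(n/4).
    z = (conv_a - n / 2) / math.sqrt(n / 4)
    # Two-sided p-value from the standard normal distribution.
    return math.erfc(abs(z) / math.sqrt(2))

print(two_sided_p_value(1200, 800))  # far below 0.05: a real difference
print(two_sided_p_value(3, 2))       # well above 0.05: easily just chance
```

Both scenarios show the same 50% "improvement", but the first is overwhelmingly significant while the second is statistically indistinguishable from a coin toss.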
For your A/B testing to be effective, you need a reasonably large number of conversions to get a meaningful result—otherwise you may as well just toss a coin.
You can calculate the exact numbers with a bit of statistical knowledge, but a good rule of thumb is a minimum of 100 conversions for a meaningful outcome.
How much longer?
So for high-value, low-volume B2B organisations, the reality is that it could take three to four months or even longer to achieve that meaningful threshold of conversions.
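A back-of-envelope estimate shows how this plays out. The figures below are hypothetical B2B numbers chosen purely for illustration—plug in your own traffic and conversion rate:

```python
def months_to_threshold(monthly_visitors, conversion_rate,
                        min_conversions_per_variant=100):
    """Estimate how long an A/B test must run before each variant
    reaches a minimum conversion count, assuming a 50/50 traffic split."""
    monthly_per_variant = (monthly_visitors / 2) * conversion_rate
    return min_conversions_per_variant / monthly_per_variant

# Hypothetical: 4,000 visitors/month at a 1.5% conversion rate
# gives 30 conversions per variant per month.
print(round(months_to_threshold(4000, 0.015), 1))  # ≈ 3.3 months
```

With lower traffic or a lower conversion rate—common in high-value B2B—the wait stretches out even further.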
And during that time that particular marketing activity is locked down, because otherwise it would invalidate the test. So you sacrifice your agility while the test slowly grinds on.
Worse, the world can change around you during that months-long testing period: competitor price changes, feature rollouts, new opportunities or significant economic shifts could all invalidate the whole test. Maybe the A version was the winner BEFORE your competitor rolled out that new feature last month, but now the B version is stronger. Your test is wrecked.
Sadly, the numbers are working against you as a high-value, low-volume B2B marketer.
Will it ever work?
All that said, there can be a place for A/B testing in your marketing mix.
For example, it can prove useful at the top of your funnel where you have higher-volume activity, such as content downloads, webinar sign-ups, or search ad copy.
That said, beware the B2B clickbait trap, whereby you’re making changes to your paid campaigns to flatter your click through rates—all at the expense of relevant, quality leads.
If you want to find out more about how to optimise and test your B2B digital marketing campaigns, or have any questions about B2B digital best practice, we’d love to chat. Get in touch!