A/B testing ads

The pros and cons of live-testing your ads

August 12, 2020
Jack Warren

Live-testing ads is currently the industry standard. Platforms like Facebook and Google offer multiple options to make live testing as simple as possible for marketers. These tests generally take the form of A/B (or ‘split’) testing, in which two versions of an ad that differ in a single variable are shown to a real audience in real time, and marketers draw insights from their relative performance.

For example, if testing the placement of a logo on Facebook, the tester would create two ads: Ad A with the logo in (say) the top left, and Ad B with it in the top right. Facebook would then run these ads concurrently (showing them to different sections of the same audience segment), and present the marketer with a performance analysis of both.

If Ad B performs better than Ad A, the marketer can infer that their audience prefers the logo to be in the top right.
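Platforms handle the maths for you, but as a rough illustration of what’s going on under the hood, here’s a minimal sketch that compares two variants’ click-through rates with a standard two-proportion z-test. The click and impression counts are made up, and this isn’t necessarily the exact method any given platform uses:

```python
# Illustrative only: hypothetical click counts for two ad variants.
from statsmodels.stats.proportion import proportions_ztest

clicks = [300, 380]           # clicks on Ad A and Ad B (made-up figures)
impressions = [20000, 20000]  # impressions served to each variant

z_stat, p_value = proportions_ztest(count=clicks, nobs=impressions)

ctr_a, ctr_b = clicks[0] / impressions[0], clicks[1] / impressions[1]
print(f"CTR A: {ctr_a:.2%}, CTR B: {ctr_b:.2%}, p-value: {p_value:.3f}")

# A small p-value (conventionally below 0.05) suggests the difference in
# click-through rate is unlikely to be down to chance alone.
```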

Sounds simple, right? Well, it is. But it’s also far from perfect.

While it’s good at certain things, this form of live testing also has a lot of drawbacks. Let’s take a look:

The cons of A/B testing

It’s resource-intensive

If you’re paying your staff by the hour, A/B testing is going to take some serious bites out of your budget.

From setting up the testing system to meetings about which variables to include, A/B testing takes up an enormous amount of time before the tests even go live. Then, depending on how much traffic your testing channels experience, the tests themselves can take weeks or even months before they produce actionable results.
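To see why traffic matters so much, here’s a rough back-of-the-envelope sketch using a standard power calculation. The baseline click-through rate, the uplift you want to detect and the daily impression count are all assumed figures, not benchmarks:

```python
# Rough illustration: how long might a split test need to run?
# All figures below are assumptions for the sake of the example.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_ctr = 0.020      # assumed current click-through rate (2.0%)
target_ctr = 0.022        # assumed uplift you want to be able to detect (2.2%)
daily_impressions = 4000  # assumed traffic across both variants per day

effect = proportion_effectsize(baseline_ctr, target_ctr)
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)

days_needed = (2 * n_per_variant) / daily_impressions
print(f"~{n_per_variant:,.0f} impressions per variant, "
      f"roughly {days_needed:.0f} days of traffic")
```

Halve the daily traffic, or shrink the uplift you’re hoping to detect, and the required run time balloons accordingly.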

All that time, remember, you’re not only paying staff and contractors – you’re also paying platforms like Facebook and Google to run your test ads, even though these adverts may not make the final cut.

As with anything that involves a huge number of people (some of whom, remember, aren’t aware that they’re involved!), A/B testing can be a bit of a behemoth: it takes ages to set up, costs a lot of money, and is unwieldy while running.

It risks your reputation if you get it wrong

You may be using your audience as guinea pigs, but they don’t know that. From their point of view, they’re looking at a real, live advert. If you get it wrong, the impact on audience members who’ve seen that advert will be the same as if you got it wrong in the live campaign.

This is why people tend to use A/B testing to refine minor variables (button placement, for example) rather than larger, potentially riskier elements like messaging and imagery. However, it’s still not ideal to show a portion of your audience a lower-performing advert. In this world of instant communications and infinite choice, every little impression counts.

It shows your competitors what you’re up to

If your audience can see the ads you’re testing, so can your competitors. It’s not ideal to leak something that’s still in development. Keeping your innovative tactics under wraps until the campaign is ready is the best way to prevent your competitors from gazumping you – but A/B testing has the potential to spill your most closely guarded secrets.

It doesn’t give good insights

A/B testing is useful for drawing very specific conclusions about very specific variables. If you just want to know whether background A performs better than background B, A/B testing can give you the results you need.

What it won’t do, however, is give you any context or insight around that result. It won’t tell you, for example, that background A performed better because people with impaired vision couldn’t read your text against background B. And it certainly won’t tell you anything about deeper issues which may be affecting your ad performance.

Say, for example, that your audience finds your campaign messaging in general pretty ‘meh’. It’s just not doing anything for them. A/B testing will help you to make minor tweaks here and there to your ads’ formats, but it won’t give you any insight into the actual problem. It can only show you tiny portions of the picture – not the broad view you need to draw major, actionable campaign insights.

The pros of A/B testing

Having said that, there is a place for A/B testing in marketing. Marketers aren’t stupid. We wouldn’t use it if it wasn’t useful. Here are some of the pros of A/B testing.

It lets you make gradual changes

A/B testing is useful for small, ongoing alterations in your ads. For things like altering subject lines, CTAs and so on, A/B testing allows you to pinpoint which way to go and make the change without having to pause the campaign.

It occurs in the ‘real’ environment

Live A/B testing uses your real audience as guinea pigs. The advantage to this is that you can be reasonably sure that the audience will respond in the same way to the final ad as they did to the test ads.

However, in order to get the best ‘live’ results possible, you have to ensure that everything else about the test is as close to authentic as possible – which can be tricky when you’re designing ads specifically to test particular variables.

It gives clear results for specific questions

If you want to know things like whether a red or a green CTA button will get more clicks, A/B testing can give you a clear and definitive answer. For clear, performance-based results concerning very specific variables, live A/B testing is perfect.

Natural testing – the alternative to live testing

At Datasine, we are empowering marketers with an alternative way of testing. Welcome to the world of pre-campaign optimisation.

This involves using high-powered and intelligent AIs to break down and analyse past campaign elements, and cross-reference them with engagement and performance metrics.

Using past campaign data is like studying audience behaviour ‘in the wild’, without disturbing the delicate ecosystem of your advertising ‘habitat’. It gives a more accurate picture of how your audience responds to your ads when they’re doing their proper job – advertising – rather than serving as test pieces.

What’s more, AIs like ours are capable of ‘atomising’ all kinds of data, including creative data. Our AI breaks down ad data into thousands of constituent parts, enabling multiple granular comparisons on pretty much every level.

With an atomised process like this, marketers are able to test and draw deep insights about many, many different aspects of their creative content. They’re also able to run a number of tests quickly and efficiently, without ever having to pay a penny to their advertising platforms.
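As a purely toy illustration of the general idea (not Datasine’s actual pipeline), imagine a table of past ads described by a handful of creative features alongside an engagement metric; you could then look at how each feature relates to performance. The file and column names below are hypothetical:

```python
# Toy sketch of the general idea only; not Datasine's system or method.
# Assumes a hypothetical CSV of past ads with simple creative features
# encoded as numbers (all file and column names are made up).
import pandas as pd

ads = pd.read_csv("past_campaign_ads.csv")  # hypothetical historical-ad export

feature_columns = ["has_logo_top_right", "dominant_colour_warm",
                   "word_count", "has_human_face"]

# How does each creative feature correlate with observed click-through rate?
correlations = ads[feature_columns].corrwith(ads["ctr"]).sort_values(ascending=False)
print(correlations)
```

In practice, of course, the interesting work lies in extracting far richer features from the creative itself, which is exactly what the ‘atomisation’ step is for.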

Intrigued? There’s more information on our content atomisation process and how it helps with testing here.

Or, to book a demo, contact our team here.
