Facebook Ads A/B test guide

November 20, 2019

What is A/B testing?

Facebook A/B testing is when you experiment with different elements of your advertising campaigns to determine what drives higher performance. As you discover things that perform well (and implement them) and things that hinder performance (and eliminate them), your campaign performance should gradually improve. In Facebook Ads, A/B tests can be run through the platform’s split testing functionality.


How to A/B test in Facebook Ads

The first thing you need before you begin A/B testing is a hypothesis. In the example below, we test the hypothesis that photo ads perform better than graphic-only ads.


The results from our Facebook Ads A/B test

The results above seem to support our hypothesis that ads with photos perform better than graphic-only ads.

But one test alone isn’t enough. We need to conduct several ‘photo vs graphic-only’ experiments to verify or falsify our hypothesis with confidence, because many factors could be responsible for one ad’s success over the other.

What to A/B test in Facebook Ads?

Ad visuals 

Most people running campaigns in Facebook Ads have done some research into, or have experience of, the audiences, devices and even copy that tend to perform well. And even if they haven’t, Facebook Ads does an incredibly good job of optimising automatically using machine learning, so long as you have a diverse range of ad sets in your campaigns.

There is, however, one element that is most crucial to a successful Facebook Ads campaign: your ad visual. It is what stops your target audience in their tracks and grabs their attention. All other aspects of your ads (e.g. copy and landing page) only come into play if you have a good visual that gets your target audience to look at your ad in the first place.


But while Facebook’s platform is really good at optimising many aspects you can set at ad set level – such as audience, placements, device, etc. – it’s not as straightforward when it comes to your Facebook Ad visuals. 

Suppose you put two different images in your ad set: a close-shot and a long-shot of a person holding your product. If the long-shot does better, it doesn’t necessarily mean that the long-shot framing was the cause of its success. It may be that the colour blue (featured more prominently in the long-shot) drove the higher performance – and that a close-shot with more blue would actually outperform the long-shot.

While you can test ad visuals using A/B testing to an extent – and you definitely should to get the most out of your Facebook Ads A/B testing – there are infinite variations to test, so this could end up being an endless experiment. Instead, focusing on the features in your ad visuals and conducting semantic content analysis can help cut back time and money spent on experimentation, while increasing ROI.


Content features our AI tool, Connect, found in this image

But, in order to scale this process, a marketer needs an AI or a machine learning model which can conduct semantic content analysis in moments.

This is where Connect can help you. The AI platform analyses your historical performance and gives you insights into what features are resonating with your audience, and which aren’t. Better still, it can use this information to rate and rank your ad visuals according to how effective they are going to be. 

So going back to our example, Connect’s AI will analyse the performance of your various ads and all the aspects within your ads and tell you that it was the colour blue that made the ad appealing. With this tool, therefore, you can supplement your investigation and creativity with data to save you time and money. 

Ad copy

Much like ad visuals, ad copy requires the common sense and creativity marketers possess, rather than being something Facebook Ads’ machine learning can optimise on its own.

But ad copy can still be optimised with Facebook’s machine learning algorithm through A/B testing. For example, exclamation marks, emojis, capitalisation and certain phrases can be tested quite easily with dynamic ads, showing which copy resonates most with your audience.
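To make that concrete, here is a minimal sketch of how you might group copy variants by feature (emoji, exclamation mark, all-caps word) and compare average CTR for variants with and without each one. The variant data and the feature tagger are made up for illustration – this is not Facebook’s API, just a way of structuring the analysis:

```python
import re

# Hypothetical ad copy variants with observed clicks and impressions
# (illustrative numbers, not real campaign data)
variants = [
    {"copy": "Shop the new collection now!", "clicks": 120, "impressions": 10000},
    {"copy": "Shop the new collection now \U0001F525", "clicks": 150, "impressions": 10000},
    {"copy": "SHOP the new collection now", "clicks": 95, "impressions": 10000},
]

def copy_features(text):
    """Tag a piece of ad copy with simple, testable features."""
    return {
        "exclamation": "!" in text,
        "emoji": any(ord(ch) > 0x1F000 for ch in text),
        "all_caps_word": bool(re.search(r"\b[A-Z]{2,}\b", text)),
    }

def ctr_by_feature(variants, feature):
    """Average CTR for variants with and without a given feature."""
    groups = {True: [], False: []}
    for v in variants:
        groups[copy_features(v["copy"])[feature]].append(
            v["clicks"] / v["impressions"]
        )
    return {k: sum(g) / len(g) for k, g in groups.items() if g}

print(ctr_by_feature(variants, "emoji"))
```

With enough variants per group, the same structure lets you scan every feature at once and flag the ones with the biggest CTR gap for a proper split test.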


Other elements

Here are some other elements to test, though where possible we recommend leveraging Facebook’s machine learning instead of manually A/B testing them. For instance, if you want to test different placements, it’s easier to select automatic placements with conversion optimisation and let the platform gradually favour the highest-performing placement for your campaigns.

  • Interest-detailed targeting
  • Demographic-detailed targeting
  • Gender
  • Age
  • Geo-location
  • Landing page


Three Facebook A/B testing rules

  1. Ceteris paribus: The Latin phrase meaning “all other things being equal”. In our context, it means that Facebook A/B tests should only test one element at a time. So if you’re testing blue vs red, absolutely everything else (audience targeting, placements, ad copy, etc.) must be the same.
  2. Statistical significance: Just because an experiment seemingly verifies or falsifies a hypothesis doesn’t mean it actually has. For the test to be valid, you need sufficient volume behind your ads. There are plenty of statistical significance calculators on the web to help you with this. However, sometimes it’s not just about a simple calculation but about using common sense. For example, in our graphic-only vs photo test, there are many elements involved which cannot be controlled, so it makes sense to test a diverse range of photo and graphic-only ads to get an accurate result from your experiment.
  3. Performance compass: Let performance, specifically cost-per-conversion, guide and measure your results. An increase in CTR is great but if it came at the cost of a higher CPA then, although there is some learning that could be derived, you haven’t directly found anything that drives higher performance for your campaigns.


How to set up a split test in Facebook Ads

Step 1: Create a campaign.

Step 2: Turn the split test toggle on. (Note: The campaign budget optimisation toggle has to be turned off to be able to do this.)

Step 3: Choose a variable (creative, delivery optimisation, audience, placement, or more than one of these).

Step 4: Choose the number of ad sets (from two to five).

Step 5: Set the aspects you want to test separately from those you do not. For example, if you wish to test ‘Creative A’ vs ‘Creative B’, select all the ad sets and edit their shared settings at the same time, since those should remain constant. After that, add the creatives to each ad set in turn.

For more detailed instructions for split testing on Facebook Ads, check out Facebook’s own split testing guide.



Let's work together.

Book a demo to see how Datasine empowers marketers with intelligent, data-driven insights to improve the performance of creative content.
