Chances are you've never tested the design of your catalog ads.
So it makes sense that something could go wrong, even if you've been advertising on Facebook for years.
We made plenty of mistakes ourselves across hundreds of tests with our clients.
Fortunately, we learned from those mistakes. We've collected the most common ones in this article, so you don't have to make them again.
Let's get started!
1. Comparing a new campaign with an existing campaign
Is it worth using design in your Facebook shopping ads?
That's probably what you, your client, or your boss want to know when you start with Adflow.
It sounds logical to set up a new campaign and compare it to your ongoing shopping or regular campaigns.
But beware: this is where things often go wrong because your ongoing campaigns probably:
👉 Perform better because they are out of Facebook's learning phase.
👉 Have slightly different settings (see common mistake 2).
In short, your test results won't give you a fair picture. Moreover, shopping campaigns tend to perform better and better over time (see common mistake 3), while manual ads tend to perform worse and worse.
Fortunately, you can do something to avoid this mistake.
If you want to compare shopping campaigns with and without an overlay, make sure you:
👉 Create a 'control' campaign with your original feed (without overlay)
👉 Use the A/B testing features on Facebook
How your setup should look on Facebook 👇
You can also do this if you want to compare a shopping campaign to another type of campaign. Ensure all other settings are the same (common mistake 2) and that you maintain enough ad spend and time (common mistake 3).
2. Not using the same campaign settings
A second common mistake is to compare your campaign with another campaign that differs in more than the one thing you want to test.
A tiny checkmark in your audience settings can make your test results useless.
Is the difference in your results because of the checkmark or because of your design changes?
If you want reliable results, ensure all settings except what you want to test are the same.
I recommend you only test different catalog overlays at the beginning. So make sure these settings are identical 👇
Is your campaign objective the same?
Are you using the same optimization event?
Later in this article, I'll explain why this setting is crucial.
Are you using the same campaign or ad set budget?
The amount of spend affects your performance. If you want to compare campaigns, it is better to use the same budget.
Do you use the same audience?
Pay attention to the 'Advantage detailed targeting' checkbox. As Facebook says, this influences your performance.
Are you using the same placements?
Whether you use automatic or just on-feed placements, ensure they are the same.
Are you using the same catalog settings?
Note the 'Advantage+ creative for catalog' checkbox. If you turn this on, ensure you do it for all your campaigns.
Are you using the same product set?
Are ad headlines, texts, and call-to-action the same?
Now you know where things can go wrong. But how do you avoid differences in these settings?
There is a simple solution for setting up an accurate test: first, fully set up one of your test campaigns. When you're satisfied, duplicate that campaign once for each variant you want to test. In the copies, change nothing except the catalog you want to test.
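If you manage campaign configs programmatically (or just want a mental model), the duplication rule boils down to: every field except the one under test must match. Here is a minimal sketch; the field names are illustrative, not Facebook's actual API fields:

```python
# Sanity check that two campaign configs differ only in the field(s) you
# intend to test. Field names below are illustrative, not Facebook's API.

def diff_settings(control: dict, variant: dict, allowed: set) -> list:
    """Return the fields that differ but are NOT allowed to differ."""
    all_keys = control.keys() | variant.keys()
    return sorted(k for k in all_keys
                  if control.get(k) != variant.get(k) and k not in allowed)

control = {"objective": "purchases", "budget": 50, "audience": "broad",
           "placements": "automatic", "catalog": "original_feed"}
# Oops: the catalog changed (intended) but so did the budget (not intended).
variant = dict(control, catalog="overlay_feed", budget=60)

print(diff_settings(control, variant, allowed={"catalog"}))  # ['budget']
```

An empty result means your test is clean; anything else points at a setting that would muddy your comparison.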
3. Drawing conclusions too quickly or spending too little
Compared to regular campaigns, catalog campaigns need more time to learn. If you spend too little, your test results may not be reliable.
Why is that?
Imagine this: you have a product range of 1,000. If you advertise your entire catalog, Facebook has to learn which of the 1,000 product images it has to show to potential customers. The algorithm needs time and budget to determine this.
Over time, Facebook knows better and better which products to show to which type of people. Catalog campaigns, therefore, usually perform better and better.
You are also less likely to run into ad fatigue with catalog ads. Is your audience done with one product? Facebook has 999 other products to show. You don't have to keep coming up with new creatives, which is a massive advantage.
So it's well worth the effort to test your catalog creatives. But, how much budget and time do you need to test them properly?
It depends on your objective and how many catalog designs you want to test.
Use this calculator to determine your testing budget 👇
We recommend you optimize for purchases if your budget is sufficient. If not, consider choosing an objective lower in your funnel.
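A testing-budget calculation like this comes down to simple arithmetic. Here is a minimal sketch, assuming Facebook's commonly cited rule of thumb that an ad set needs roughly 50 optimization events per week to exit the learning phase; the constant and the example cost are assumptions, not Adflow's actual formula:

```python
# Rough weekly testing-budget estimate. EVENTS_TO_EXIT_LEARNING reflects
# Facebook's rule of thumb (~50 optimization events per ad set per week);
# the example cost per event is an assumption for illustration.

EVENTS_TO_EXIT_LEARNING = 50  # optimization events per ad set per week

def weekly_test_budget(num_variants: int, cost_per_event: float) -> float:
    """Minimum weekly spend so every variant can exit the learning phase."""
    return num_variants * EVENTS_TO_EXIT_LEARNING * cost_per_event

# Example: testing 2 catalog designs, optimizing for purchases at ~$20 each
print(weekly_test_budget(2, 20.0))  # 2 * 50 * 20 = 2000.0
```

If that number is far above your budget, that is exactly when to optimize for a cheaper event lower in your funnel instead of purchases.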
4. Comparing metrics other than your goal
When comparing campaigns, evaluate only the metrics you've asked Facebook to optimize.
For example, you should not judge your campaigns on click-through rates, add-to-carts, or cost-per-click if you ask Facebook to optimize for purchases.
Why is that?
If you optimize for purchases, Facebook's algorithm doesn't care about anything else. Facebook doesn't care whether you get clicks or a high click-through rate. Its only concern is to get you as many purchases as possible.
If Facebook isn't concerned about other metrics, why should you?
People often fall back on other metrics when their test campaigns generated too few purchases to identify a clear winner (see common mistake 3). Falling back on other metrics is still a mistake. If your budget is too low, choose a different optimization objective and evaluate your campaigns accordingly.
5. Testing too small differences
When you start testing your catalog creatives, avoid testing differences that are too small.
Minor changes reduce your chance of finding a clear winner.
Aim to test your creatives for high impact, especially at the beginning.
So don't test:
- Red button vs. blue button.
- Unique Selling Point A vs. B
- Large price vs. small price
- Round price tag vs. square price tag
Instead, do test:
👉 Entirely different layouts.
👉 Unique selling points vs. Reviews in your design
👉 Lifestyle images vs. pack shots
👉 Mentioning or not mentioning the price