My goal with this post is to show you the techniques that I am using right now to scale several clients' Facebook ads so you can test them in your own ad accounts. I have included several screenshots for proof. This is a longer post but I hope you will find that it's well worth the read.
Overview
- Timeframe: June 1, 2024 to August 26, 2024
- Niche: Apparel
- Country: United States
- Ad spend: $74,992.35
- Orders: 3,519
- Revenue Generated: $179,793
- Average ROAS: 2.39x
- Screenshot: Shopify Results
I started working with this client at the end of April 2024. This client is an experienced dropshipper. We spent the month of May testing various products to see if we could find a winner. We kept the tests small and simple (CBOs, 1-3 ad sets, 3-5 static ads in each (same ads in each ad set), $50-150/day) and we would close them after ~2-3 days if we weren’t at least breaking even on them.
The first two products that we tested barely broke even or lost money. The third product instantly hit a 3.23x ROAS, well above their breakeven.
- The winning campaign from our initial tests was a simple $100/day CBO with 1 ad set, the 2 best-performing static ads from previous tests, original audience broad, 18-65+, specific gender and Advantage+ placements. The ads had multi-advertiser turned off, and each ad had only 1 primary text, 1 headline and 1 description. The CTA was Shop Now and it linked directly to the product page.
Once we saw consistent high ROAS for a few days, we knew we had a winner. We ended the month at a slight loss but we were very optimistic for the future.
Static Images -> Rapid Testing
We are only using static image creatives in this ad account.
Although I generally believe Meta is moving towards videos, static images have 2 very important competitive advantages over videos when it comes to ads: they generally take less time and less money to create.
I believe that if you can make static images work in your ad account then you will be able to rapidly test various styles, angles and offers. You can test 20-50 static images in the time it takes you to hire, film and edit 1 UGC-style video. You only need 1 winning ad to completely change the course of your business this year. The more you systematically test, the higher your chances are of finding it.
Scaling Techniques I Used
I horizontally scaled (duplicated campaigns) and used the Crazy Method with the same winning creatives several times with great success. I would launch a new campaign and wait approximately 5-7 days before launching a new one. I tested vertically scaling campaigns (increasing budgets) but found that the results would drop pretty substantially for the next few days when I would do that. I noticed that when I duplicated campaigns, the original campaign sustained its performance and the new campaign worked as well.
Why do I wait a few days in between launching campaigns? I personally think of the algorithm like a snow globe. Every time you make an edit, close something or launch a new campaign, it's like shaking the snow globe. The algorithm takes a little time to 'settle' back to normal. I like to let the algorithm settle for some time before shaking the snow globe again. If you are launching several campaigns at once, or launching one campaign after the next every day, or constantly editing your ads, it's like you are constantly shaking the snow globe and never letting it settle. Facebook advertising is volatile enough. What I want is to create as much stability as possible.
My goal was to find a sweet spot that was at a decent scale but still highly profitable. My client is happy with consistent $800-$1,200 profit days but we continue to try to push the boundaries and scale further.
When should you scale? As mentioned in my other post, I always look for 2 key things before scaling: 1.) ROAS well above breakeven and 2.) Consistent volume of sales. At the beginning of June, I noticed that we had both.
Isolate Winners. When we first began testing the winning product at the end of May, we were using 6 different static creatives. We noticed that all the sales came from 2 creatives. Moving forward we only included those 2 static creatives in the new campaigns that we launched. Simple and effective.
Increase Budgets on New Campaigns. With each new duplicated campaign I would increase the daily budget to push the boundaries and see what it could sustain. Our original campaign started at $100/day. Our first duplicated campaign started at $250/day. I tried $300/day on some but the results were not good. We found that $250/day was our sweet spot so we continued to launch our new campaigns at $250/day.
Scaling with the Crazy Method
What is the Crazy Method? This is a technique created by Konstantinos Doulgeridis that I have successfully used several times across accounts. The premise of this technique is to take something that already works in your ad account and amplify it. This technique works best with broad audiences but I have used it for local lead generation to great effect as well. A typical Crazy Method campaign looks something like this:
- CBO Campaign
- Winning Ad Set 1
- Winning Ad 1
- Winning Ad 2
- Winning Ad Set 1 [duplicate as-is]
- Winning Ad 1
- Winning Ad 2
- Winning Ad Set 1 [duplicate as-is]
- Winning Ad 1
- Winning Ad 2
- Winning Ad Set 1 [duplicate as-is] (and so on, as many times as your budget supports)
As you can see, it's a CBO campaign with the same ad set and ads duplicated several times inside. The theory is as follows:
- When you target a large audience (75 million in our case) each ad will start to optimize in a sub-group of the larger audience. The ad first optimizes off of engagement data (likes, comments) and then conversion data (sales).
- If the first few purchases were made by your ideal customer avatars then the ad should start finding more and more people like them and start to do well. We commonly refer to this as 'hot pockets.' Before I knew this theory, I would see certain ads optimize really well and sustain their performance, and I would describe it as 'hitting a vein,' but 'hot pockets' definitely sounds nicer!
- When we structure a campaign this way, we are 'forcing' the algorithm to try to find several different hot pockets within the broad audience. The more ad sets you have, the more chances you have to find a hot pocket. At the same time, the more ad sets you have, the more difficult the campaign can be to manage and optimize. In my opinion, there is a sweet spot for how many ad sets you use.
- In practice, some of the ad sets will do well and some will do poorly. We optimize the campaign by closing the ad sets that are doing poorly and keep the ones that are doing well.
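The structure above can be sketched in code. This is a minimal illustration of the shape of a Crazy Method campaign, not a real Meta API call; the function name and field names are my own placeholders.

```python
# Hypothetical sketch of a Crazy Method campaign structure.
# Field names and ad names are placeholders, not a real Meta Marketing API.

def build_crazy_method_campaign(winning_ads, num_ad_set_copies, daily_budget):
    """Return a CBO campaign dict: one winning ad set duplicated as-is."""
    ad_set = {"name": "Winning Ad Set 1", "ads": list(winning_ads)}
    return {
        "budget_type": "CBO",
        "daily_budget": daily_budget,
        # The same ad set (same audience, same ads) duplicated several times,
        # forcing the algorithm to search for multiple 'hot pockets'.
        "ad_sets": [dict(ad_set) for _ in range(num_ad_set_copies)],
    }

campaign = build_crazy_method_campaign(
    winning_ads=["Winning Ad 1", "Winning Ad 2"],
    num_ad_set_copies=4,
    daily_budget=250,
)
print(len(campaign["ad_sets"]))  # 4 identical ad sets under one budget
```

The key point the sketch makes explicit: every duplicate is byte-for-byte identical at launch; only the delivery paths diverge.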
Here is a screenshot of the setup of one of the crazy method campaigns that I set up at the beginning of July that is still running today. Here is a screenshot of the results at the ad set level. Take note of the optimization I did as well.
How many ad sets should I do? The rule of thumb that I use is to set a campaign budget in such a way that each ad set should be able to get 1-2 sales per day. For example:
If my average cost per purchase is $50 and I wanted to do 5 ad sets, then I would do a campaign budget of no less than $250/day so that each ad set has a chance of generating at least 1 sale per day.
I would not do 15 ad sets with a $250/day budget because then I am only giving each ad set ~$16.66 to work with (well below the average cost per purchase of $50). Remember, our ads optimize off of conversion data, so if we are not giving them enough 'juice' to get a conversion then they are unlikely to optimize well.
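The rule of thumb above is simple arithmetic, and here is a small sketch of it. The function names are mine; the $50 CPA and $250 budget are the post's own example numbers.

```python
# Rule of thumb from the post: each ad set should be able to land 1-2
# sales per day, so (budget / number of ad sets) should be at least the
# average cost per purchase (CPA).

def min_budget(num_ad_sets, avg_cost_per_purchase):
    """Smallest daily campaign budget that gives every ad set a
    realistic shot at 1 sale per day."""
    return num_ad_sets * avg_cost_per_purchase

def max_ad_sets(daily_budget, avg_cost_per_purchase):
    """Most ad sets you can run while giving each at least one
    purchase's worth of budget per day."""
    return int(daily_budget // avg_cost_per_purchase)

print(min_budget(5, 50))     # 250 -> $250/day minimum for 5 ad sets
print(max_ad_sets(250, 50))  # 5   -> don't run 15 ad sets on $250/day
```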
How do you optimize a Crazy Method campaign? We optimize Crazy Method campaigns just like any other CBO campaign with multiple ad sets. I follow these rules of thumb when optimizing at the ad set level.
- I let the campaign run at least 2 days before starting my optimization. (I wait longer with smaller budgets and less with bigger budgets; this also depends on your average cost per purchase.)
- I always monitor the average results of the ad sets.
- I base optimization decisions on 7 days of data. I also take into account the Maximum date range, last 30 days, last 14 days, and last 3 days to see the trends. I will also walk through the progression of the campaign day by day to see how Facebook is deciding to spend among the ad sets.
- I try to ‘touch’ the campaign as little as possible. If I am going to touch something, first I ask myself – “is this change worth doing? What do I stand to gain? What do I stand to lose?”
- If I see an ad set taking a lot of budget with poor results, I know that I need to intervene and turn it off otherwise the campaign may optimize in a bad way and not recover.
- When I turn an ad set off, I understand that whatever budget that ad set was previously spending will be ‘liberated’ to the remaining ad sets. If I ‘liberate’ too much budget to the remaining ad sets, I could throw off the balance of the whole campaign.
- When I decide to close ad set(s), I can ‘signal’ to Meta that I want the campaign to ‘stay as it is’ by lowering the overall budget by the amount the ad sets that I closed spent the previous day.
- For example: I have a CBO campaign at $100/day with 4 ad sets; 2 of them are doing well and 2 are doing poorly. Let's say the 2 that did poorly spent a combined $60 out of the $100/day. If I wanted to close those two ad sets and 'signal' to Meta that I want the remaining ad sets to 'stay as they are,' then I would close the 2 ad sets and lower the budget by approximately $50-60 so that the remaining ad sets don't scale too much with the budget that was liberated by the ad sets that were closed.
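The budget adjustment in the example above is a single subtraction; this sketch (my own helper, using the post's numbers) just makes it explicit:

```python
# Sketch of the 'signal' adjustment: when closing ad sets, lower the
# campaign budget by roughly what those ad sets spent yesterday, so the
# surviving ad sets keep spending at the same level instead of absorbing
# the liberated budget.

def adjusted_budget(current_budget, spend_of_closed_ad_sets):
    """New daily CBO budget after closing ad sets, chosen so the
    remaining ad sets 'stay as they are'."""
    return current_budget - spend_of_closed_ad_sets

# Example from the post: $100/day CBO, the two closed ad sets spent $60.
print(adjusted_budget(100, 60))  # 40 -> the survivors keep their ~$40/day
```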
- I like to use the search bar > filter by selection to determine if closing an ad set is ‘worth it.’
- For example: Say I have a CBO campaign with 4 ad sets; 2 of them are doing well and 2 are doing poorly. The average ROAS across all of the ad sets is 1.5x. I will select the 2 ad sets that I want to keep, click 'filter by selection' and see what the ROAS could theoretically be if I only had those 2. Let's say the ROAS with just those 2 ad sets is 3.5x. Then I would most likely close the other 2 ad sets because that trade-off is worth it to me. Let's say the ROAS with just those 2 ad sets is 1.9x. Then I might not close the other 2 ad sets, because going from 1.5x to 1.9x is not a big enough improvement to risk ruining the entire campaign.
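What 'filter by selection' is effectively showing you is the blended ROAS of just the ad sets you keep. Here is a small sketch of that calculation; the spend and revenue figures are illustrative numbers I chose to reproduce the 1.5x-vs-3.5x example, not data from the account.

```python
# Blended ROAS of a subset of ad sets: total revenue / total spend.
# The ad set numbers below are made up to mirror the post's example.

def blended_roas(ad_sets):
    """ROAS across a group of ad sets."""
    spend = sum(a["spend"] for a in ad_sets)
    revenue = sum(a["revenue"] for a in ad_sets)
    return revenue / spend

ad_sets = [
    {"name": "A", "spend": 20, "revenue": 70},  # doing well
    {"name": "B", "spend": 20, "revenue": 70},  # doing well
    {"name": "C", "spend": 30, "revenue": 5},   # doing poorly
    {"name": "D", "spend": 30, "revenue": 5},   # doing poorly
]
print(round(blended_roas(ad_sets), 2))      # 1.5  -> all four ad sets
print(round(blended_roas(ad_sets[:2]), 2))  # 3.5  -> keeping only A and B
```

Note that blended ROAS is spend-weighted, which is why closing two heavy-spending, low-return ad sets can move the average so dramatically.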
- Sometimes the campaign will still fail no matter how well you optimize it. If this happens, I close it and try again.
Why I Don’t Care About Audience Overlap (And I Don’t Think You Should Either)
When we target a large population (like 75 million), our ad is shown to a subgroup of people within this massive population. Let's say, hypothetically, 1,000 people in 1 day. When we duplicate a campaign and target the same exact audience of 75 million, it does not necessarily show the ad to the same 1,000 people as before. The ad leverages your pixel data, ad account data, audience settings and creative to choose a 'starting point' and then starts to optimize off of engagement and conversion data. Each ad takes its own path and optimizes in a different way based on who interacts with it.
If it did target the same people over and over, it would surely drive up our cost per result and simply not work – but that is not what I see in practice.
I find many advertisers fall into 'analysis paralysis' when they start to think about audience overlap, and it generally leads to a deterioration of their confidence. I personally would not worry much about audience overlap unless you are working with small audiences or massive daily budgets. You can always keep an eye on your frequency metric to understand, on average, how many times someone is seeing your ad. Even if there is some audience overlap, I personally don't really care as long as the results are good.
Next…
Facebook Page Feedback Score & Why It Matters
I believe a big contributing factor to our success was the fact that my client maintained a high Facebook Page feedback score which kept our CPMs low.
What is the Facebook Page Feedback Score? Your Facebook page receives a Page Feedback Score that ranges from 0 to 5. This score is determined by customer survey feedback and focuses on 5 key areas: Product Quality, Order Accuracy, Shipping, Refunds & Exchanges and Communication. The higher the score the better. If your score dips below 2, your page may get temporarily restricted from advertising. If it falls below 1, your page will lose its ability to advertise.
How does it work? When someone orders a product off of your Facebook ad, they will receive a survey through their Facebook notifications after a certain amount of time (based on your set delivery speed). The customer will receive questions related to the 5 key areas. Your Page Feedback Score is usually updated every Wednesday morning.
You can check your set delivery speed by going to Business Support Home > Pages > Scroll to the bottom > Set Delivery Speed.
Why does this matter? Meta states in this article that it considers ads from Pages with low feedback scores to be low-quality ads. Meta states that low-quality ads may reach fewer people for the same budget (aka higher CPMs). It is logical to assume, then, that the higher your page score, the lower your CPMs.
Thankfully, my client was on top of his game. He carefully monitored his Page Feedback Score and made sure that each of the 5 areas was on point. He focused heavily on customer service and handled any refund requests promptly. As a result, his Page Feedback Score stayed at 4.1 (Good) and we benefited from lower CPMs.
The moral of the story is that if you want to enjoy the benefits of low CPMs and high quality traffic then make sure you are delivering a great experience for your customers in the 5 key areas.
Where are we now?
Overall, we are very happy with the results of the scaling so far. We are continuing to test new creatives, new angles and new products altogether. Our goal is to establish as many evergreen campaigns as possible at a high spend so that we can scale even further during BFCM. I believe we are well positioned to absolutely crush Q4.
Here is a summary of the results:
Screenshot of June Results
Screenshot of July
Screenshot of August
Screenshot of June – August
Thanks for reading! If you found this helpful, please share it with someone so they can learn from it as well. If you have any thoughts or questions, please leave a comment and I will respond. Best of luck out there!