How to audit your Meta Ads account in 30 minutes
A step-by-step framework for auditing any Meta Ads account using 5 health pillars. Identify wasted spend, weak creatives, and structural problems in under 30 minutes.
Why you need an ads audit (and why most people skip it)
Most advertisers set up campaigns, check ROAS once a day, and call it management. That is not management. That is hope with a dashboard.
A proper audit gives you a clear picture of what is working, what is bleeding money, and what needs to change right now. The problem is that most people think audits are long, boring, and complicated. They do not have to be.
This is the exact framework I use to audit Meta Ads accounts. It covers 5 health pillars, takes about 30 minutes, and catches 90% of the issues that eat into your profit margins.
The 5 health pillars of a Meta Ads account
Every Meta Ads account can be evaluated across five areas. If any one of them is weak, the whole account suffers. Here is how to check each one.
Pillar 1: Budget Efficiency
Budget efficiency is about making sure your money goes where it actually produces results. You would be surprised how often ad accounts have campaigns running that nobody remembers turning on.
What to check:
- Active campaign count. If you have more than 8 to 10 active campaigns, you are almost certainly spreading budget too thin. Meta's algorithm needs enough data per campaign to optimize properly. Too many campaigns means none of them get enough volume.
- Cost per result trends. Pull a 7-day and 28-day comparison. If your cost per result has climbed more than 20% in the last month, something is off.
- Budget allocation vs. performance. Sort campaigns by spend, then by cost per result. Your highest-spending campaign should also be one of your best performers. If it is not, you are burning cash on autopilot.
- Minimum daily budget per ad set. Each ad set should spend at least $20 per day to exit the learning phase in a reasonable timeframe. Anything less and Meta cannot optimize effectively.
Tip: Export your campaign data to a spreadsheet and add a column for "cost per result rank" vs. "spend rank." If those two columns do not roughly match, your budget allocation needs work.
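The rank comparison in the tip above can be sketched in a few lines of Python. This is an illustrative script, not a Meta export format: the campaign names and the `spend` / `cost_per_result` field names are made up for the example, and real Ads Manager exports will use different column headers.

```python
# Flag campaigns whose spend rank and cost-per-result rank diverge.
# Sample data and field names are illustrative, not Meta's export headers.
campaigns = [
    {"name": "Prospecting - Broad", "spend": 4200.0, "cost_per_result": 18.0},
    {"name": "Retargeting - 30d",   "spend": 900.0,  "cost_per_result": 9.5},
    {"name": "Prospecting - LAL",   "spend": 2600.0, "cost_per_result": 41.0},
]

# Rank 1 = highest spend; rank 1 = cheapest result.
by_spend = sorted(campaigns, key=lambda c: -c["spend"])
by_cpr = sorted(campaigns, key=lambda c: c["cost_per_result"])
spend_rank = {c["name"]: i + 1 for i, c in enumerate(by_spend)}
cpr_rank = {c["name"]: i + 1 for i, c in enumerate(by_cpr)}

# A gap of 2+ in either direction means budget is not following
# performance: either an overfunded loser or an underfunded winner.
for c in campaigns:
    gap = abs(spend_rank[c["name"]] - cpr_rank[c["name"]])
    if gap >= 2:
        print(f"Check budget on: {c['name']} "
              f"(spend rank {spend_rank[c['name']]}, cost rank {cpr_rank[c['name']]})")
```

In this sample the retargeting campaign is the cheapest per result but third in spend, so it gets flagged as a candidate for more budget.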
What "good" looks like: 3 to 6 active campaigns, budget concentrated on top performers, cost per result stable or declining over 28 days.
Pillar 2: Creative Performance
Creatives are the single biggest lever in your ad account. The algorithm can only work with what you give it. Bad creatives mean bad results, no matter how smart your targeting is.
What to check:
- Number of active ad variations per ad set. You want at least 4 to 6 variations per ad set. Fewer than that and you are not giving Meta enough options to test. More than 10 and you are diluting data.
- Ad format diversity. Are you running a mix of video, static image, and carousel? Accounts that rely on a single format leave performance on the table. Video alone should make up at least 40% of your creatives.
- CTR by creative. A healthy click-through rate is above 1.5% for most industries. If your best ad is below 1%, your creatives need a refresh.
- Creative fatigue signals. Look for ads where frequency is climbing and CTR is dropping simultaneously. That is the textbook sign of creative fatigue.
Tip: Sort your ads by CTR descending and look at the top 3. What do they have in common? That is your creative direction for the next round of ads.
What "good" looks like: 4+ variations per ad set, mixed formats, top creatives above 1.5% CTR, no ads running with frequency above 5 and declining CTR.
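The fatigue signal described above (frequency climbing while CTR drops) is easy to check mechanically. Here is a minimal sketch, assuming you have two weekly snapshots per ad; the ad names and field names are invented for the example.

```python
# Flag creative fatigue: frequency rising while CTR falls between
# two weekly snapshots. Sample data and field names are illustrative.
ads = [
    {"name": "Video A",  "freq_prev": 2.1, "freq_now": 2.3, "ctr_prev": 1.9, "ctr_now": 2.0},
    {"name": "Static B", "freq_prev": 3.8, "freq_now": 5.2, "ctr_prev": 1.6, "ctr_now": 0.9},
]

def is_fatigued(ad):
    rising_freq = ad["freq_now"] > ad["freq_prev"]
    falling_ctr = ad["ctr_now"] < ad["ctr_prev"]
    return rising_freq and falling_ctr

fatigued = [ad["name"] for ad in ads if is_fatigued(ad)]
print(fatigued)  # prints ['Static B']
```

"Static B" shows the textbook pattern: frequency jumped from 3.8 to 5.2 while CTR fell from 1.6% to 0.9%, so it is due for a refresh.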
Pillar 3: Campaign Structure
Structure is the skeleton of your account. A messy structure makes it nearly impossible to read data clearly or scale what works.
What to check:
- Naming conventions. Every campaign, ad set, and ad should follow a consistent naming pattern. Something like [Objective] - [Audience] - [Date] works well. If your campaigns are named "Campaign 1 copy copy," you have a problem.
- Objective alignment. Each campaign should have one clear purpose. Do not run a traffic campaign and expect conversions. Match the campaign objective to the actual goal.
- Funnel stages. At minimum, you want a prospecting layer (cold audiences) and a retargeting layer (warm audiences). More advanced accounts add a retention layer for existing customers.
- Ad set overlap. Check the audience overlap tool in Ads Manager. If two ad sets are competing for the same people, you are bidding against yourself and driving up costs.
What "good" looks like: Clean naming conventions, 2 to 3 funnel stages, no audience overlap above 30%, objectives matched to business goals.
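A naming convention is only useful if it is enforced, and that part can be automated. Here is a sketch that validates the [Objective] - [Audience] - [Date] pattern from this article; the regex assumes a YYYY-MM-DD date and no hyphens inside the objective or audience parts, which is this article's convention, not a Meta requirement.

```python
import re

# Validate the "[Objective] - [Audience] - [Date]" naming pattern.
# Assumes YYYY-MM-DD dates and no hyphens inside the other two parts.
NAME_PATTERN = re.compile(r"^[^-]+ - [^-]+ - \d{4}-\d{2}-\d{2}$")

names = [
    "Conversions - Broad US - 2026-01-15",
    "Campaign 1 copy copy",
]

for name in names:
    status = "ok" if NAME_PATTERN.match(name) else "rename"
    print(f"{status}: {name}")
```

Run this against an exported list of campaign names and anything printed with "rename" goes on the cleanup list.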
Pillar 4: Audience Quality
Targeting has changed a lot since iOS 14.5. Broad targeting works better than it used to, but that does not mean you should ignore audience quality entirely.
What to check:
- Audience size. For prospecting, aim for audiences of 1 million or more in most markets. Too narrow and you will exhaust the pool quickly. Too broad (50 million+) and Meta may struggle to find the right people without enough conversion data.
- Custom audience freshness. If your retargeting audiences are built on 180-day website visitors, you are targeting people who have mostly forgotten about you. Stick to 7 to 30 day windows for retargeting.
- Lookalike source quality. Your lookalike audiences are only as good as the seed list. A lookalike based on "all website visitors" will underperform one based on "purchasers in the last 90 days."
- Audience exclusions. Make sure you are excluding existing customers from prospecting campaigns and recent converters from retargeting. Without exclusions, you pay to reach people who already bought.
What "good" looks like: Prospecting audiences above 1 million, retargeting windows under 30 days, lookalikes built from high-value seed lists, proper exclusions in place.
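The audience thresholds above (1 million+ for prospecting, 30-day windows for retargeting) can be screened programmatically. This is a sketch on invented sample data; the audience dicts and their field names are illustrative, not an Ads Manager export format.

```python
# Screen audiences against the thresholds from this pillar.
# Sample data and field names are illustrative.
audiences = [
    {"name": "Site visitors 180d", "type": "retargeting", "window_days": 180, "size": 45_000},
    {"name": "Purchasers LAL 1%",  "type": "prospecting", "window_days": None, "size": 2_100_000},
]

def issues(aud):
    found = []
    if aud["type"] == "retargeting" and aud["window_days"] and aud["window_days"] > 30:
        found.append("retargeting window over 30 days")
    if aud["type"] == "prospecting" and aud["size"] < 1_000_000:
        found.append("prospecting audience under 1 million")
    return found

for aud in audiences:
    for problem in issues(aud):
        print(f"{aud['name']}: {problem}")
```

Here the stale 180-day visitor audience gets flagged, while the purchase-seeded lookalike passes both checks.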
Pillar 5: Delivery Health
This is the technical layer that most advertisers ignore. Delivery issues can silently kill even the best campaigns.
What to check:
- Learning phase status. If more than half your ad sets are stuck in "Learning Limited," your account structure needs adjustment. Each ad set needs roughly 50 conversions per week to exit learning.
- Conversions API (CAPI) setup. Check Events Manager to confirm CAPI is active and sending events alongside the pixel. Without CAPI, you are losing 20% or more of your conversion data due to browser tracking restrictions.
- Event match quality. In Events Manager, check the match quality score for your key events. You want a score of 6 or higher (out of 10). Below that, Meta cannot match conversions back to ad clicks reliably.
- Delivery insights. For any underperforming campaign, check the Delivery Insights panel. It will tell you if the issue is auction competition, audience saturation, or creative fatigue.
Tip: If CAPI is not set up yet, make it your top priority. It is the single biggest technical improvement most accounts can make in 2026.
What "good" looks like: Most ad sets out of learning phase, CAPI active with match quality above 6, no delivery alerts on active campaigns.
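The two numeric thresholds in this pillar (more than half of ad sets Learning Limited, match quality below 6) lend themselves to a quick summary script. The statuses and event scores below are invented sample data, not pulled from any real account.

```python
# Quick delivery-health summary: share of Learning Limited ad sets
# and event match quality scores. Sample data is illustrative.
ad_set_statuses = ["Active", "Learning", "Learning Limited", "Learning Limited", "Active"]
match_quality = {"Purchase": 6.8, "AddToCart": 4.9}

limited_share = ad_set_statuses.count("Learning Limited") / len(ad_set_statuses)
if limited_share > 0.5:
    print("Structure problem: consolidate ad sets to exit learning")

for event, score in match_quality.items():
    if score < 6:
        print(f"Low match quality on {event}: {score}/10 - review CAPI setup")
```

In this sample, 40% of ad sets are Learning Limited (under the 50% alarm threshold), but the AddToCart event's match quality of 4.9 would trigger a CAPI review.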
Putting it all together
Run through these 5 pillars in order. Budget first, because there is no point optimizing creatives if the money is going to the wrong campaigns. Creative second, because it is the biggest performance lever. Structure third, audience fourth, delivery fifth.
The whole process should take 25 to 35 minutes once you know what to look for. Do it weekly and you will catch problems before they become expensive.
Here is a quick scoring system: give each pillar a pass or fail against its "what good looks like" checklist. Four or five passes means the account is healthy, two or three means it needs work, and fewer than two means drop everything and fix it.
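One way to turn the audit into a number is one point per pillar that passes its "what good looks like" checks. The weighting and verdict thresholds below are an illustrative sketch, not a standard:

```python
# Score the account: one point per healthy pillar (illustrative weighting).
pillars = {
    "budget_efficiency": True,
    "creative_performance": False,
    "campaign_structure": True,
    "audience_quality": True,
    "delivery_health": False,
}

score = sum(pillars.values())
verdict = "healthy" if score >= 4 else "needs work" if score >= 2 else "urgent"
print(f"{score}/5 - {verdict}")  # prints "3/5 - needs work"
```

Track the score week over week: the trend matters more than any single reading.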
Let Campaiyn do this for you
This audit framework is exactly what Campaiyn's Health Score is built on. Instead of manually pulling data and checking each pillar every week, Campaiyn monitors all five pillars continuously and flags issues the moment they appear.
Your AI copilot catches budget waste, creative fatigue, audience overlap, and delivery problems automatically. So you can spend less time auditing and more time building campaigns that actually perform.