Enter campaign signals
Move sliders (or type values) and click Calculate. Want “live” updates? Turn on Auto‑Update.
Are your UTMs clean enough to trust… and is the campaign performing? This calculator combines two things into one simple score: Tracking Quality (UTM hygiene) + Performance (CTR, conversion rate, ROAS, engagement). Use it as a fast “audit” before you scale spend.
Think of this score as a two‑part audit. Part one checks whether your tracking is clean enough to trust. Part two checks whether the campaign has early signals that it’s worth optimizing or scaling. Most marketers fix performance first — but messy UTMs quietly sabotage everything: your dashboards lie, your attribution gets fragmented, and you start debating opinions instead of numbers.
That’s why the calculator returns an overall score (0–100) plus two subscores: Tracking Quality and Performance. We weight them 45% Tracking and 55% Performance. The split is intentional: clean UTMs are a prerequisite, but you still need results. If your UTMs are perfect and the campaign is weak, the score shouldn’t be high. Likewise, a campaign that looks great but has sloppy UTMs is risky to scale because you’re likely to misread what’s working.
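The 45/55 blend is simple enough to sketch in a few lines. This is our own illustration of the weighting described above, not the calculator’s published internals:

```python
def overall_score(tracking_quality: float, performance: float) -> float:
    """Blend the two 0-100 subscores with the 45% / 55% weighting.
    Inputs and output are all on a 0-100 scale."""
    return 0.45 * tracking_quality + 0.55 * performance

# Perfect tracking but a weak campaign still scores low overall:
print(round(overall_score(100, 30), 1))  # 61.5
```

Notice that neither subscore can carry the total on its own, which is exactly the point of the split.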
Tracking Quality has three ingredients: presence, consistency, and taxonomy hygiene.
Presence means your links reliably include the three UTMs that matter most in reporting:
utm_source, utm_medium, and utm_campaign.
Without them, traffic often lands in generic buckets, and you lose the ability to separate campaigns
(especially when you’re running multiple offers at once).
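As a rough illustration of the presence check, here’s how you might scan a link for the three required UTMs using only the standard library (a sketch, not the calculator’s actual logic):

```python
from urllib.parse import urlparse, parse_qs

REQUIRED = ("utm_source", "utm_medium", "utm_campaign")

def missing_utms(url: str) -> list:
    """Return the required UTM parameters absent from a URL's query string."""
    params = parse_qs(urlparse(url).query)
    return [p for p in REQUIRED if p not in params]

url = "https://example.com/?utm_source=facebook&utm_medium=paid_social"
print(missing_utms(url))  # ['utm_campaign']
```

Run something like this over an export of your live links before trusting any dashboard built on them.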
Optional UTMs (utm_content and utm_term) can be powerful when used
consistently. utm_content usually distinguishes creative variations (image A vs image B,
headline A vs headline B). utm_term is most useful in paid search for keyword themes,
but it can also label audiences or placements if you have a naming standard. Because optional UTMs are not
required for every channel, we give them smaller weight.
Consistency and taxonomy are scored on a 1–10 slider. Consistency answers: “If I open a spreadsheet of UTMs,
do I feel calm or do I want to scream?” A 10 means the team uses one naming style (typically
lowercase, underscores, no spaces), avoids duplicates, and doesn’t change names mid‑flight. A 1
means you have endless variations like Facebook, facebook,
fb, FBAds, and paid-social.
Taxonomy hygiene answers: “Do our UTM values clearly map to reporting dimensions?” For example, a clean taxonomy
might define utm_medium=paid_social and utm_source=facebook,
while email might be utm_medium=email and utm_source=beehiiv
(or your ESP). If taxonomy is inconsistent, you’ll end up with mixed channels in your reports and fuzzy decisions.
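A common first cleanup step is collapsing those variants into canonical values before reporting. A minimal sketch, where the alias map is illustrative and you would extend it with your own offenders:

```python
# Map every naming variant you've seen in the wild to one canonical value.
ALIASES = {
    "fb": "facebook",
    "fbads": "facebook",
    "paid-social": "paid_social",
    "paid social": "paid_social",
}

def canonicalize(value: str) -> str:
    """Lowercase, trim, and map known aliases to a single canonical value."""
    cleaned = value.strip().lower()
    return ALIASES.get(cleaned, cleaned)

print(canonicalize("FBAds"))    # facebook
print(canonicalize("Facebook")) # facebook
```

The real win, though, is fixing names at the source so the alias map stays short.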
We compute Tracking Quality like this:
TrackingQuality =
PresencePoints + (Consistency − 1) / 9 × 10 + (Taxonomy − 1) / 9 × 10
Each 1–10 slider maps linearly onto 0–10 points, so a 1 contributes nothing and a 10 contributes the full 10.
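In code, the same formula looks like this. PresencePoints is whatever your presence check yields (the 30 below is a placeholder, since the exact presence weighting isn’t spelled out here); the two sliders are 1–10:

```python
def tracking_quality(presence_points: float, consistency: int, taxonomy: int) -> float:
    """Presence points plus the two 1-10 sliders, each rescaled to 0-10."""
    def slider(s: int) -> float:
        return (s - 1) / 9 * 10
    return presence_points + slider(consistency) + slider(taxonomy)

# Placeholder of 30 presence points, with both sliders maxed out:
print(round(tracking_quality(30, 10, 10), 1))  # 50.0
```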
Performance is intentionally channel‑agnostic: it uses metrics most teams can pull from Ads Manager + Analytics within minutes. We use four parts, each worth 25 points: click‑through rate (CTR), conversion rate, ROAS, and engagement.
We normalize each metric into a 0–25 range using common “good enough” thresholds. This is not universal truth — it’s a practical default that works for many DTC and SaaS funnels. You can mentally adjust expectations based on your category. For example, a newsletter signup funnel might have high CVR but lower ROAS visibility if you monetize later.
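One way to read “normalize into a 0–25 range against good‑enough thresholds” is linear scaling capped at a target value. The thresholds below are our own placeholders (not the calculator’s), which is exactly where you’d apply the category adjustment mentioned above:

```python
def metric_points(value: float, good: float, cap: float = 25.0) -> float:
    """Linear 0-to-cap scaling: reaching the 'good' threshold earns full points."""
    return min(max(value / good, 0.0), 1.0) * cap

# Placeholder thresholds: 1.5% CTR, 3% CVR, 3x ROAS, 60% "engaged" visits.
score = (metric_points(1.2, 1.5)     # CTR
         + metric_points(2.0, 3.0)   # conversion rate
         + metric_points(2.5, 3.0)   # ROAS
         + metric_points(45, 60))    # engagement
print(round(score, 1))
```

Because each metric is capped, one spectacular number can’t mask three weak ones.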
The key idea: you don’t need precision to use this tool. You need a fast, consistent way to compare campaigns. A campaign score jumping from 52 → 73 after cleaning UTMs and improving the landing page is a story you can share with your team — and that “story” often creates virality because it’s relatable.
Imagine you’re running paid social. The ads are working: impressions are high, CTR is solid, and conversions are
coming in. But half your links are missing utm_campaign, and your naming is inconsistent.
Your overall score will land in the “workable but risky” zone.
Now flip the scenario: everything is tagged perfectly, but CTR is low (message mismatch), conversion rate is low (landing page friction), and engagement is weak (high bounce, low time on page). The score stays low even though tracking is clean — which is exactly what you want. You don’t want perfect tracking to “hide” weak performance.
Use utm_content to label creative variants so you can compare winners.
Most teams have one “hidden hero” campaign that jumps into the 70s simply by fixing UTMs and cleaning up naming. That’s why this tool is fun to share: it turns a boring analytics problem into a single number with a clear next step.
If you want this calculator to be more than a one‑off curiosity, use it as a weekly checklist.
Over time, you’ll build a “score history” that reveals which campaigns are becoming healthier. This is useful for founders and solo marketers because it reduces the mental load: you always know what to fix next.
If you don’t already have a naming system, start with a minimal, boring standard. Boring is good. Boring scales.
Use lowercase with underscores (winter_launch, not Winter Launch).
Name campaigns by offer and date (trial_offer_newyear_2026w01).
Describe the creative in utm_content (hook_a_video_15s).
Describe the audience or keyword theme in utm_term (anxiety_keywords, lookalike_3p).
Once you adopt a standard, your reporting becomes clean. That cleanliness often feels like “free growth” because it unlocks faster learning loops — and learning speed is the real compounding advantage in marketing.
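Once the standard exists, link‑building can be automated so nobody hand‑types UTMs. A sketch using only the standard library, with example field values; the normalization rule matches the lowercase/underscore convention above:

```python
from typing import Optional
from urllib.parse import urlencode

def tag_url(base: str, source: str, medium: str, campaign: str,
            content: Optional[str] = None, term: Optional[str] = None) -> str:
    """Append UTM parameters, enforcing the lowercase/underscore convention."""
    def norm(v: str) -> str:
        return v.strip().lower().replace(" ", "_")
    params = {"utm_source": source, "utm_medium": medium, "utm_campaign": campaign}
    if content:
        params["utm_content"] = content
    if term:
        params["utm_term"] = term
    return base + "?" + urlencode({k: norm(v) for k, v in params.items()})

print(tag_url("https://example.com/offer", "facebook", "paid_social",
              "trial_offer_newyear_2026w01", content="hook_a_video_15s"))
```

Even a sloppy input like "Winter Launch" comes out as winter_launch, so the standard holds no matter who builds the link.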
Is the scoring model scientifically rigorous? It’s a practical heuristic — not a research paper. The value is consistency. If you score campaigns using the same method each week, you can compare and improve quickly.
What if you don’t have revenue data yet? Set revenue to $0 and treat ROAS as “unknown.” Your score will lean on CTR/CVR/engagement until you have better attribution. If you’re lead‑gen, you can substitute “estimated value per lead” as revenue for directional ROAS.
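The lead‑gen substitution is simple arithmetic; the estimated value per lead is your own assumption, so treat the result as directional only:

```python
def directional_roas(leads: int, est_value_per_lead: float, spend: float) -> float:
    """Treat leads x estimated value as revenue to get a directional ROAS."""
    return (leads * est_value_per_lead) / spend

# 40 leads at an assumed $50 each, on $1,000 of spend:
print(round(directional_roas(40, 50.0, 1000.0), 2))  # 2.0
```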
Should you always use utm_content and utm_term? Not necessarily. If you can’t keep them consistent, don’t use them. It’s better to have a clean trio (source/medium/campaign) than five fields filled with chaos.
High CTR but low conversion rate? Usually message‑match or landing page friction. People are curious enough to click, but the offer isn’t clear, the page loads slowly, or the next step is too demanding. Improve above‑the‑fold clarity first.
Decent conversion rate but weak ROAS? You’re converting, but the economics may not scale: spend is too high, AOV is too low, or you need upsells/retention. Use your LTV and CAC tools to validate profitability.
Why include engagement at all? Bounce rate and time on page act like “quality filters.” A campaign can look good short‑term but attract mismatched clicks. Engagement helps you spot when you’re paying for the wrong attention.
Does this work for email? Yes. Email is a great use case because UTMs are fully in your control. Use a consistent source (your ESP), medium (email), and campaign naming per newsletter send or automation.
Does this replace your analytics stack? No — it summarizes. Think of it as a dashboard “compression” tool: you feed in high‑level numbers and get a single score and next step. Then you go deeper in your analytics stack.
Here’s a simple format that performs well on X/LinkedIn: “I scored our campaigns and found a hidden problem.” Post your Tracking Quality score before and after you standardize UTMs, plus one screenshot of the naming taxonomy you’re using. People love simple systems that reduce chaos.
If you want to go deeper, pair this with your CAC, LTV, and payback calculators to validate unit economics.
MaximCalculator builds fast, human-friendly tools. Double-check key decisions with your analytics stack and your business context. This calculator is a directional scoring model for learning and communication.