Attribution & Multi-Touch Analysis
A customer sees your Facebook ad, reads your blog, clicks a Google Search ad, then opens an email and buys. Who gets credit? Attribution is the hardest problem in marketing measurement — and the most important.
The retailer that nearly killed its best channel
A home furnishings retailer had been running marketing across six channels for two years. Their analyst reviewed the performance data and recommended cutting display advertising — "It generates almost no direct revenue," she reported. The channel showed 0.2% of attributed revenue in Google Analytics (last-click) while consuming 12% of the marketing budget.
The CMO paused before cutting. He asked a different question: what happens to the other channels' conversion rates when display is running vs. not running?
Three months of historical data told the story. In weeks when display spend was higher — reaching more of their website visitors with retargeting banners — the organic search conversion rate was 34% higher, the email open rate was 22% higher, and paid search ROAS was 41% higher.
Display wasn't converting people directly. It was keeping the brand visible between intent moments, warming up audiences who then converted through other channels.
Last-click attribution had hidden £340,000 in annual contribution. The analyst had nearly eliminated it.
(Illustrative scenario based on patterns common in marketing analytics. Specific figures are representative of real-world outcomes — not a verified account of a specific named company.)
The attribution problem
When a customer takes 30 days and 7 touchpoints to buy, crediting the final click ignores the entire journey that made the final click possible.
Last-click attribution says: Google Ads (branded) caused this purchase.
First-click attribution says: Instagram (where they first saw the brand) caused this purchase.
The truth: every touchpoint contributed. The customer wouldn't have searched your brand name on day 21 without seeing the Instagram ad on day 1, reading the blog on day 4, seeing the retargeting ad on day 8, and reading the newsletter on day 14.
The attribution models
Last-click attribution
- 100% credit to the final touchpoint before conversion
- Available in GA4, but no longer the default model
- Bias: overstates bottom-of-funnel channels (branded search, retargeting, email)
- Understates: awareness channels (display, social, content)
- Use case: understanding which channels close sales
First-click attribution
- 100% credit to the first touchpoint
- Rarely used in modern analytics
- Bias: overstates top-of-funnel channels that introduced the customer
- Understates: channels that nurture and convert
- Use case: understanding which channels introduce new customers
Linear attribution
- Equal credit distributed across all touchpoints
- More honest about multi-touch reality
- Challenge: requires complete journey data (not always available)
- Use case: balanced view of channel contribution
Time-decay attribution
- More credit to touchpoints closer to conversion
- Logic: the ad clicked yesterday contributed more than the ad seen 4 weeks ago
- Reasonable for shorter sales cycles; less appropriate for long consideration
Data-driven attribution (DDA)
- Machine learning identifies which touchpoints statistically contribute to conversion
- GA4's default model for properties with sufficient conversion volume
- Most sophisticated; Google has progressively lowered DDA eligibility requirements — check Google Ads Help for current thresholds
- Bias: still limited by data visibility (can't see touchpoints outside Google's ecosystem)
- Best practice for most businesses with sufficient scale
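The rule-based models above (last-click, first-click, linear, time-decay) can be sketched as credit-splitting functions. This is an illustrative sketch, not GA4's exact implementation; the channel names, the journey, and the 7-day time-decay half-life are assumptions:

```python
# Four rule-based attribution models applied to one customer journey.
# Each returns {channel: share_of_credit}, shares summing to 1.0.

def last_click(touches):
    return {touches[-1][0]: 1.0}

def first_click(touches):
    return {touches[0][0]: 1.0}

def linear(touches):
    # Equal credit to every touchpoint
    share = 1.0 / len(touches)
    credit = {}
    for channel, _day in touches:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

def time_decay(touches, conversion_day, half_life=7.0):
    # Credit halves for every `half_life` days before conversion
    # (7-day half-life is an assumed parameter, not a GA4 constant)
    weights = [(ch, 0.5 ** ((conversion_day - day) / half_life))
               for ch, day in touches]
    total = sum(w for _, w in weights)
    credit = {}
    for channel, w in weights:
        credit[channel] = credit.get(channel, 0.0) + w / total
    return credit

# (channel, day seen) — the illustrative journey from earlier
journey = [("instagram", 1), ("blog", 4), ("display", 8),
           ("email", 14), ("branded_search", 21)]

for model in (last_click, first_click, linear):
    print(model.__name__, model(journey))
print("time_decay", time_decay(journey, conversion_day=21))
```

Running this makes the biases concrete: last-click hands everything to branded search, first-click hands everything to Instagram, and time-decay sits in between.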
The limits of every attribution model
No attribution model is complete. All of them work only with the data they can see — and they can't see:
Dark social: Someone sends a colleague your article via Slack or WhatsApp. The colleague converts through a direct URL. Attribution shows: "Direct." The Slack message is invisible.
Offline influence: A customer sees your billboard, hears your podcast mention, or reads an article about you. None of these appear in any digital tracking.
Cross-device: A customer researches on mobile at work, sees a retargeting ad on their home computer, and converts on their phone. Without identity resolution, this appears as three separate anonymous users.
Ad blocker gaps: Estimates suggest 20–30% of users in some markets block ads (per GlobalWebIndex and Statista estimates), preventing platform pixels from tracking them. Rates vary significantly by geography (higher in Europe) and audience type (much higher among technical audiences). Conversions from these users may be partially attributed or invisible.
The practical implication: Attribution data is a signal, not ground truth. Use it directionally — "this channel appears to contribute significantly" — rather than precisely. Cross-reference with incrementality tests for decisions involving major budget shifts.
Incrementality: the gold standard
Incrementality testing answers the question attribution models cannot: Would these customers have converted anyway, without this channel?
How an incrementality test works:
- Randomly split your audience into two groups: exposed (sees the channel) and holdout (doesn't)
- Run the campaign for the exposed group only
- Compare conversion rates between exposed and holdout
- The difference is the incremental lift — the customers who converted because of the channel
Example:
- Exposed group conversion rate: 8.4%
- Holdout group conversion rate: 6.1%
- Incremental lift: 2.3 percentage points (37.7% incremental)
- Attribution model had claimed 100% credit — but 73% of those conversions would have happened anyway
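The arithmetic behind those bullets can be checked in a few lines. The conversion rates come from the example above; the spend and attributed-revenue figures used for the ROAS comparison are illustrative assumptions:

```python
# Incrementality arithmetic from the example above.
exposed_cr = 0.084   # exposed group conversion rate (8.4%)
holdout_cr = 0.061   # holdout group conversion rate (6.1%)

lift_pp = exposed_cr - holdout_cr        # 2.3 percentage points
relative_lift = lift_pp / holdout_cr     # ~37.7% lift over baseline
baseline_share = holdout_cr / exposed_cr # ~73% would have converted anyway
incremental_share = lift_pp / exposed_cr # ~27% truly caused by the channel

# Incremental ROAS discounts attributed revenue by the share of
# conversions the channel actually caused. Spend and attributed
# revenue below are assumed figures for illustration.
attributed_revenue = 100_000
spend = 25_000
attributed_roas = attributed_revenue / spend            # 4.0
incremental_roas = attributed_roas * incremental_share  # ~1.1
```

The gap between a 4.0 attributed ROAS and a ~1.1 incremental ROAS is exactly the kind of insight a holdout test surfaces and an attribution model cannot.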
Incrementality tests are more complex and expensive than standard A/B tests — they require holdout groups and careful design. But for large budget decisions, they produce insights that attribution models cannot.
Using AI to design an incrementality analysis. Prompt: "I'm planning an incrementality test for [channel]. The channel currently drives [X] conversions/month at [Y] spend. Help me design a holdout test: what % of audience to withhold, how long to run it, what metrics to measure, and how to calculate incremental ROAS vs attributed ROAS."
There Are No Dumb Questions
"Our Google Ads, Meta Ads, and email all show more revenue than our actual revenue. Is there a formula to deduplicate?"
There's no formula, but there is a process. First, establish ground truth: your payment processor's revenue is real; every other number is an attribution model's estimate. Second, calculate blended ROAS = actual revenue ÷ total marketing spend. This is the number that underpins all channel-level comparisons. Third, treat channel-level attribution as a directional guide: if the platforms' combined claims add up to 190% of actual revenue, scale each channel's claim down proportionally, so a channel claiming twice as much revenue as another keeps twice the share of actual revenue. These are still estimates. Fourth, use incrementality testing for major allocation decisions where the stakes justify the investment.
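The proportional scale-down can be sketched in a few lines. All figures are illustrative assumptions; `claimed` stands in for whatever each platform reports:

```python
# Deduplicating platform-claimed revenue against ground truth.
actual_revenue = 100_000   # from your payment processor (ground truth)
total_spend = 40_000

# What each platform claims it drove (illustrative figures; they
# deliberately sum to 190% of actual, as in the Q&A above)
claimed = {"google_ads": 120_000, "meta_ads": 50_000, "email": 20_000}

blended_roas = actual_revenue / total_spend   # 2.5 — the real number

# Scale each claim down so the total matches actual revenue,
# preserving each channel's relative share
total_claimed = sum(claimed.values())         # 190_000
deduped = {ch: actual_revenue * rev / total_claimed
           for ch, rev in claimed.items()}
```

The `deduped` figures sum to actual revenue by construction, but they inherit every bias of the underlying attribution models; they are tidier estimates, not truth.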
"Should I spend time building multi-touch attribution or just use last-click?"
For most small businesses (under £50K/month marketing spend), last-click from GA4 with UTM tagging is sufficient for practical decisions. The marginal insight from sophisticated attribution models is not worth the setup complexity at this scale. For businesses spending £50K+/month, especially across 4+ channels, data-driven attribution in GA4 (if volume allows) or a dedicated attribution tool is worth the investment. The decision: how much are you potentially misallocating, and is the cost of better measurement less than that misallocation?
Map Your Customer Journey (25 XP)
Practical attribution for everyday decisions
For weekly campaign decisions: Use last-click or channel-level data in GA4. It's a rough guide but fast and consistent.
For budget reallocation decisions: Supplement with data-driven attribution (if volume allows) and cross-reference with actual revenue trend when each channel is scaled or paused.
For major strategic decisions (cutting or doubling a channel): Run an incrementality test before acting. The cost of the test is almost always less than the cost of the wrong decision.
For understanding new customer acquisition vs. retention: Use GA4's User Acquisition report (first-click perspective) for new customers; Traffic Acquisition for all sessions.
For cross-channel investment review: Build a blended view — total marketing spend ÷ total customers acquired — and break it by channel using a consistent attribution model. Don't compare channel A's last-click ROAS to channel B's first-click ROAS.
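The blended view in the last bullet can be sketched as follows, assuming illustrative spend figures and customer counts credited under one consistent attribution model:

```python
# Blended and per-channel CAC under a single, consistent model.
# All figures are illustrative assumptions.
spend = {"paid_search": 30_000, "email": 5_000, "display": 15_000}

# Customers credited to each channel under the SAME attribution model
customers = {"paid_search": 400, "email": 250, "display": 100}

blended_cac = sum(spend.values()) / sum(customers.values())
per_channel_cac = {ch: spend[ch] / customers[ch] for ch in spend}
```

The blended number is the anchor; the per-channel split is only comparable because every channel uses the same model.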
Back to the home furnishings retailer
The CMO's instinct to pause before cutting display advertising saved £340,000 in annual contribution — not because he trusted the channel, but because he asked a better question than the attribution model had been built to answer. Last-click attribution had looked at display and seen almost nothing, because display almost never closes a sale; it keeps the brand visible while the customer makes up their mind. Switching to a multi-touch view revealed that display was lifting conversion rates across every other channel by 22–41% — it was driving 40% of their pipeline from upstream, invisibly. The channel that looked like a budget drain was the glue holding the rest of the mix together. Attribution models don't lie, but they do have blind spots — and the most dangerous blind spots are the ones that look like clean data.
Key takeaways
- No attribution model is complete. Every model has biases and blind spots. Treat attribution as directional guidance, not ground truth.
- Last-click biases toward bottom-of-funnel. Channels that close deals (branded search, email, retargeting) are systematically overvalued; channels that build awareness (display, content, social) are systematically undervalued.
- Data-driven attribution is GA4's default. It's more sophisticated than last-click and worth using if your property has sufficient conversion volume — Google has progressively lowered eligibility requirements; check Google Ads Help for current thresholds.
- Incrementality testing is the gold standard. It's the only way to measure what would have happened without a channel. Use it for major budget decisions.
- Dark social and cross-device gaps are real. A significant portion of conversions are partially or fully invisible to digital tracking. Build this uncertainty into your decision-making.
Knowledge Check
1. A brand is considering cutting their blog content programme. GA4 last-click attribution shows blog content generates 3% of conversions despite consuming 20% of the marketing budget. An analyst recommends cutting it. What information is missing from this analysis?
2. A marketing team runs three channels: email (last-click attribution: 45% of conversions), paid search (35%), and display retargeting (20%). Total budget allocation mirrors these percentages. Why might this allocation be suboptimal?
3. A DTC brand conducts an incrementality test for their Meta retargeting campaigns. They expose 70% of retargeting-eligible users to ads (exposed group) and withhold 30% (holdout). Results: exposed group: 12.1% conversion rate; holdout group: 10.4% conversion rate. What is the incremental lift, and what is the true incremental ROAS?
4. A B2B SaaS company has a 60-day average sales cycle. A prospect sees a LinkedIn post on Day 1, downloads a whitepaper on Day 8 (via organic search), attends a webinar on Day 23, receives 4 nurture emails between Day 25–55, and converts on Day 61 via a direct URL. Under last-click attribution, how is this attributed, and what is the problem?