Email Analytics & Optimisation
Open rates are just the beginning. A/B testing, funnel analysis, revenue attribution, and systematic optimisation — how to turn email data into better emails.
The A/B test that changed one business's email strategy permanently
In late 2022, a business coaching company was sending promotional emails for their £995 group coaching programme. Open rate: 24%. Click rate: 1.1%. Revenue per send: erratic.
Their email manager proposed a structured test. Same email content, different subject lines:
Version A: "Our Group Coaching Programme — Enrollment Open"
Version B: "The 8 clients who grew their revenue 40%+ last quarter"
Version B: 51% open rate. Version A: 18%.
The insight wasn't just "Version B won." It was: their audience responded to results and stories, not programme descriptions. They applied this insight to every subsequent email, every landing page headline, every social post.
Revenue from their email list doubled in 90 days. Not because they sent more emails. Because one A/B test told them something true about their audience that they'd been missing.
(Illustrative scenario based on patterns common in email marketing. Specific figures are representative of real-world outcomes — not a verified account of a specific named company.)
The email metrics dashboard: what to track and why
Primary metrics (check after every send):
| Metric | What it means | Healthy benchmark |
|---|---|---|
| Open rate | % of delivered emails opened | 25–45% for engaged lists (note: post-2021 figures are inflated by Apple Mail Privacy Protection — use click-to-open rate as the more reliable benchmark) |
| Click-through rate (CTR) | % of delivered emails with at least one click | 2–5% |
| Click-to-open rate (CTOR) | % of openers who clicked | 10–20% |
| Unsubscribe rate | % of delivered emails that resulted in an unsubscribe | Under 0.3% |
| Spam complaint rate | % who marked as spam | Under 0.10% (Google's February 2024 guidelines set 0.10% as the warning threshold and 0.30% as the enforcement threshold; verify current thresholds at support.google.com/mail/answer/81126) |
| Bounce rate | % that failed to deliver | Under 2% |
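The per-send metrics above are all simple ratios over raw send counts. A minimal sketch, assuming the typical field names from a platform export (definitions vary slightly by platform — some compute CTR against sends rather than delivered, so check your tool's glossary):

```python
# Sketch: computing the primary metrics from raw send counts.
# Field names (sent, bounces, opens, ...) are illustrative, not tied
# to any particular email platform's export format.

def primary_metrics(sent, bounces, opens, unique_clicks,
                    unsubscribes, spam_complaints):
    """Return the per-send metrics from the table above, as percentages."""
    delivered = sent - bounces
    return {
        "open_rate": 100 * opens / delivered,
        "click_through_rate": 100 * unique_clicks / delivered,
        "click_to_open_rate": 100 * unique_clicks / opens if opens else 0.0,
        "unsubscribe_rate": 100 * unsubscribes / delivered,
        "spam_complaint_rate": 100 * spam_complaints / delivered,
        "bounce_rate": 100 * bounces / sent,
    }

metrics = primary_metrics(sent=10_000, bounces=150, opens=3_200,
                          unique_clicks=320, unsubscribes=12,
                          spam_complaints=3)
print(metrics)  # CTOR here is exactly 10%: 320 clicks from 3,200 opens
```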
Secondary metrics (check monthly):
| Metric | What it means |
|---|---|
| List growth rate | Net new subscribers per month (new minus unsubscribes) |
| Revenue per subscriber | Total revenue from email ÷ list size — measures commercial efficiency |
| Revenue per email | Revenue generated from a specific send |
| Conversion rate | % who completed the desired action (purchase, registration, download) |
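The monthly metrics are equally mechanical. A short sketch with illustrative figures (not real account data):

```python
# Sketch: the monthly metrics from the table above as simple ratios.
# All input figures are made up for illustration.

def list_growth_rate(new_subscribers, unsubscribed, list_size_start):
    """Net monthly growth as a % of the list at the start of the month."""
    return 100 * (new_subscribers - unsubscribed) / list_size_start

def revenue_per_subscriber(total_email_revenue, list_size):
    return total_email_revenue / list_size

def conversion_rate(conversions, delivered):
    return 100 * conversions / delivered

print(f"{list_growth_rate(220, 70, 5_000):.1f}% growth")           # 3.0% growth
print(f"£{revenue_per_subscriber(12_500, 5_000):.2f}/subscriber")  # £2.50/subscriber
```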
The diagnostic hierarchy:
Use these diagnostics to identify where the funnel breaks. Low open rate is a subject line problem (or deliverability problem). Good opens with poor clicks is a content or offer problem. Good clicks with poor conversion is a landing page problem.
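The hierarchy can be expressed as a simple decision function. The thresholds below are illustrative cut-offs, not universal benchmarks; tune them against your own list's history:

```python
# Sketch of the diagnostic hierarchy as a decision function.
# Threshold defaults are illustrative, not industry standards.

def diagnose_funnel(open_rate, ctor, conversion_rate,
                    open_floor=25.0, ctor_floor=10.0, conversion_floor=2.0):
    """Return the first stage where the funnel breaks (percent inputs)."""
    if open_rate < open_floor:
        return "subject line or deliverability problem"
    if ctor < ctor_floor:
        return "content or offer problem"
    if conversion_rate < conversion_floor:
        return "landing page problem"
    return "funnel healthy"

print(diagnose_funnel(open_rate=41.0, ctor=2.0, conversion_rate=1.0))
# → "content or offer problem": opens are fine, clicks are not
```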
A/B testing: the systematic path to better emails
A/B testing (also called split testing) sends two versions of an email to two equal portions of your list and measures which performs better. The winner's result is applied to future emails.
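Most platforms handle the split for you, but the principle is worth seeing: the two portions must be assigned randomly. A sketch of a fair split (randomisation matters because splitting by signup date or alphabetical order can bias one variant toward more engaged subscribers):

```python
# Sketch: randomly splitting a subscriber list into two equal test
# groups. A seed makes the split reproducible for auditing.

import random

def split_test_groups(subscribers, seed=None):
    pool = list(subscribers)
    random.Random(seed).shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

group_a, group_b = split_test_groups(range(1000), seed=42)
print(len(group_a), len(group_b))  # 500 500
```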
What to test:
Subject lines (highest impact, easiest to test): Most email platforms let you test subject lines without touching the email body. Test: curiosity gap vs. specific benefit; short vs. long; question vs. statement; with first name vs. without.
Send time: Does your audience engage more at 7am or 12pm? Tuesday or Thursday? Your list may differ from industry benchmarks — test it.
Email opening lines: Test a story opening vs. a problem-statement opening vs. a statistic opening. Open rate doesn't capture this — test click rate and CTOR.
CTA copy: "Get the guide" vs. "Download now" vs. "Send me the template." Often a bigger difference than expected.
Email length: 200-word email vs. 400-word email. Some audiences want concise; others want depth.
Plain text vs. HTML: For newsletters and sequences, the plain-text version often outperforms the designed version on open and click rates.
A/B testing rules:
- Test one variable at a time. If you change both the subject line and the opening paragraph, you won't know which drove the result.
- Statistical significance requires adequate sample size. With fewer than 1,000 subscribers per variant, results may not be reliable. For small lists: run directional tests and look for large differences (not marginal ones).
- Define the success metric before testing. "Which has a higher open rate" or "which generates more revenue" — decide before you look at results.
- Document and apply learnings. An A/B test that sits in a spreadsheet and is never referenced again is wasted effort. Build a testing log that captures hypothesis, result, and implication for future emails.
Using AI for A/B test interpretation: "Here are the results of my email A/B test: [Version A details + results]. [Version B details + results]. What hypothesis does this result suggest about my audience? What should I test next to validate the hypothesis? How should I apply this to future emails in my sequence?"
Design an A/B Testing Programme
25 XP
Revenue attribution: connecting email to business outcomes
Most email marketers know their open rate. Far fewer know how much revenue their email programme generates.
Attribution models:
Last-click attribution: Revenue is attributed to the last marketing touchpoint before purchase. If a subscriber clicked an email link and purchased 10 minutes later, 100% of the credit goes to that email.
First-click attribution: Revenue is attributed to the first marketing touchpoint. If the customer originally discovered the brand through a Google search, 100% of the revenue goes to SEO — even if they were in your email list and a promotional email triggered the purchase.
Linear attribution: Revenue credit is distributed equally across all touchpoints in the customer journey.
For most businesses: Last-click attribution is the easiest to implement with email platforms and gives email its fair due — the email is often the direct trigger for the purchase decision.
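The three models differ only in how they distribute credit across one journey. A sketch with a made-up touchpoint list and the £995 programme price from the opening story:

```python
# Sketch: the three attribution models applied to one customer journey.
# Touchpoint names and the revenue figure are illustrative.

def attribute(revenue, touchpoints, model="last_click"):
    """Return {touchpoint: credited revenue} under the chosen model."""
    if model == "last_click":
        credit = {t: 0.0 for t in touchpoints}
        credit[touchpoints[-1]] = revenue
    elif model == "first_click":
        credit = {t: 0.0 for t in touchpoints}
        credit[touchpoints[0]] = revenue
    elif model == "linear":
        share = revenue / len(touchpoints)
        credit = {t: share for t in touchpoints}
    else:
        raise ValueError(f"unknown model: {model}")
    return credit

journey = ["google_search", "newsletter_signup", "promo_email"]
print(attribute(995.0, journey, "last_click"))  # all £995 to promo_email
print(attribute(995.0, journey, "linear"))      # £331.67 to each touchpoint
```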
Setting up revenue attribution:
E-commerce platforms: Klaviyo, Mailchimp, and most major email platforms integrate directly with Shopify, WooCommerce, and BigCommerce to track which emails generated which purchases automatically.
Other businesses: Use UTM parameters on every link in every email, and track conversions in Google Analytics. The UTM parameters identify the email as the traffic source; a goal conversion in Analytics (form submission, purchase, sign-up) attributes the outcome to the email.
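Tagging links is mechanical enough to automate. A stdlib-only sketch (the URL and parameter values are illustrative conventions, not requirements; keep them lowercase and consistent so Google Analytics groups them cleanly):

```python
# Sketch: adding UTM parameters to an email link with the standard
# library. Existing query parameters on the URL are preserved.

from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

def add_utm(url, campaign, source="newsletter", medium="email", content=None):
    parts = urlparse(url)
    params = dict(parse_qsl(parts.query))
    params.update({"utm_source": source, "utm_medium": medium,
                   "utm_campaign": campaign})
    if content:
        params["utm_content"] = content  # e.g. which link/CTA in the email
    return urlunparse(parts._replace(query=urlencode(params)))

print(add_utm("https://example.com/offer", campaign="spring_launch",
              content="cta_button"))
```

Running this on every link at send time is what makes the email show up as a traffic source in Analytics at all; untagged clicks usually land in "direct".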
Improving the metrics that matter most
Improving open rate:
The five highest-impact levers:
- Better subject lines (the primary driver)
- Send time optimisation (test your audience specifically)
- List hygiene (remove inactives who drag down averages)
- Deliverability improvement (emails in spam can't be opened)
- "From name" test (the sender name, tested alongside the subject line)
Improving click-through rate:
- Single, clear CTA (one direction instead of multiple competing options)
- CTA placement above the fold (don't make mobile readers scroll to find it)
- Button copy that specifies the benefit (not "click here")
- Email-to-landing page alignment (the email promises X; the landing page delivers X)
- Personalisation of the offer to the segment (relevant > generic)
Improving conversion rate (post-click):
- Landing page headline matches email CTA copy (no surprise at the click destination)
- Single, focused landing page (no navigation to distract from conversion)
- Reduce form fields (fewer fields = more completions)
- Social proof above the fold (testimonials, customer counts, press mentions)
- Mobile-optimised landing page (over 50% of email clicks happen on mobile)
Email Performance Review
25 XP
Back to the business coaching company
The business coaching company didn't just find a better subject line — they discovered something true about how their audience thinks. Results and stories outperform programme descriptions because buyers aren't searching for products; they're searching for proof that transformation is possible. That one insight, validated by a single A/B test, changed the template for every future email, every landing page headline, and every social post. Revenue from their email list doubled in 90 days without sending a single extra email. That's the compounding value of systematic testing: one test, correctly interpreted and consistently applied, is worth months of effort optimising the wrong things.
Key takeaways
- The diagnostic hierarchy starts at open rate. Low open rate = subject line or deliverability problem. Good opens with poor clicks = content or offer problem. Good clicks with poor conversion = landing page problem.
- A/B test one variable at a time. Testing multiple variables simultaneously makes it impossible to attribute the result to the right cause.
- Document and apply test results. A/B tests are investments in audience knowledge — the insight is only valuable if it's applied to future emails.
- Attribution requires UTM parameters. Without tagging email links, revenue attribution in your analytics is invisible. Set up UTMs on every link in every email.
- Optimise your own benchmarks, not industry averages. A 20% open rate for a niche B2B list may outperform a competitor with a 35% open rate on a consumer list. Your audience's behaviour is the relevant baseline.
Knowledge Check
1. An email has a 41% open rate but a 0.8% click rate. The list is healthy and deliverability is strong. Where is the funnel breaking and what should the email marketer investigate?
2. A marketer A/B tests two emails: Version A changes the subject line AND the email opening paragraph; Version B keeps both the same as the previous send. Version A has a 12% higher open rate. What is the flaw in this test design?
3. An e-commerce brand's email platform shows: 45,000 emails sent, 38% open rate, 3.2% click rate. Their Google Analytics shows 0 sessions attributed to email this month. What is the most likely explanation?
4. A newsletter's click-through rate is 0.6% despite a 39% open rate. The newsletter contains 8 different linked sections: a main article, 3 quick links, a book recommendation, a tool recommendation, a sponsor mention, and a personal question. What is the most likely cause of the low click rate?