
A UK retailer came to us with a familiar problem: plenty of data, not much joined-up reporting. Loyalty, e-commerce and in-store EPOS data all existed, but answering a straightforward commercial question still took days of manual work. That made campaign optimisation slow and board reporting thinner than it should have been.
This delivery assurance note sets out the before-and-after state, the trade-offs we made, and the measurable result. The aim was simple enough: give marketing and leadership one trusted view of customer behaviour, then move reporting on from opens and clicks to KPIs the board could actually use.
Starting context
Before the work began in January 2025, the marketing team, led by Head of Marketing Sarah Jennings, relied on exports from three separate systems to piece together customer behaviour. A report on loyalty members who also bought online took about three working days to compile. That is not a reporting rhythm; it is a queue.
The operational risk was clear. By the time a report was built, checked and circulated, the trading window had often moved on. One example stood out: analysis of a weekend promotion aimed at first-time online buyers was not ready until the following Thursday. The issue was not simply delay. It meant the team missed the most useful point for follow-up action, so the campaign learning arrived after the decision point had passed.
Board reporting had the same limitation. Directors saw campaign metrics such as open rates and click-throughs, but they did not have a reliable cross-channel view of customer lifetime value, segment profitability or customer acquisition cost. External context existed, but it could not be connected to internal behaviour. The Office for National Statistics quarterly personal well-being series can help frame broad shifts in sentiment across the UK, but it is not customer-level evidence. The retailer could see the weather map, so to speak, but not what it meant for its own highest-value segments.
Intervention design
We started on 15 January 2025 with a tight brief: create a trusted single customer view as the reporting foundation for marketing and commercial teams. The delivery model was phased on purpose. When time is tight, scope discipline matters more, not less. Each phase had a named owner, a date and acceptance criteria. If your plan has no named owners and dates, it is not a plan; fix it.
Phase one ran to 31 March 2025 and focused on loyalty plus e-commerce data. Ben Carter, the client’s Head of Data, owned the workstream. Acceptance criteria were explicit: one dashboard, refreshed every 24 hours, showing combined spend and order history at customer level across both sources. The main risk was duplicate records, so the mitigation was a defined identity hierarchy using the loyalty card number as the primary matching key.
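The identity hierarchy can be sketched roughly as below. This is a minimal illustration of the matching approach, not the client's actual implementation: the field names (`loyalty_id`, `email`, `order_id`) and the e-mail fallback are assumptions for the example; the source only confirms that the loyalty card number was the primary matching key.

```python
def customer_key(record: dict) -> str:
    """Return the best available identity key for a raw record.

    Loyalty card number is the primary matching key; normalised e-mail
    is a hypothetical fallback for records without one.
    """
    if record.get("loyalty_id"):
        return f"loyalty:{record['loyalty_id']}"
    if record.get("email"):
        return f"email:{record['email'].strip().lower()}"
    return f"anon:{record['order_id']}"  # no stable identity available


def merge_records(records: list[dict]) -> dict:
    """Fold raw loyalty and e-commerce rows into one view per customer."""
    merged: dict = {}
    for rec in records:
        key = customer_key(rec)
        entry = merged.setdefault(key, {"orders": 0, "spend": 0.0})
        entry["orders"] += 1
        entry["spend"] += rec.get("spend", 0.0)
    return merged
```

The point of the hierarchy is deterministic precedence: two rows only merge when they share the strongest identifier both of them carry, which is what keeps duplicate records the main risk rather than false merges.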
Phase two covered EPOS integration and had a deadline of 1 June 2025, owned by IT Lead James Fisher. This was the more awkward part because the store estate relied on legacy structures. We planned a two-week cleansing sprint to bring the data onto a path to green before full ingestion. Mid-sprint, ticket DATA-113 was blocked by a legacy API dependency; a quick call with James cleared it the same day and a new end-of-day target was set. That is not glamorous, but it is how delivery stays honest.
Observed outcomes
By July 2025, the unified model was live and being used in routine reporting. The first measurable gain was speed. Cross-channel customer behaviour reporting dropped from roughly three days to under two hours. That gave Sarah’s team the same-day visibility they had been missing.
The second gain was decision quality. With the data joined up, the team identified a segment of high-value loyalty customers who had not purchased online in the previous 90 days. A reactivation campaign for that cohort was planned, approved and launched in a single afternoon. Within two weeks, the targeted segment delivered a 12% sales uplift. Not magic, just better timing and a clearer audience definition.
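Once the data is joined, a cohort definition like the one above becomes a short, testable query. The sketch below is illustrative only: the field names and the £500 spend floor are assumptions; the source specifies only "high-value loyalty customers with no online purchase in the previous 90 days".

```python
from datetime import date, timedelta


def reactivation_segment(customers, today=None, window_days=90,
                         spend_floor=500.0):
    """High-value customers with no online order inside the window.

    `customers` is an iterable of dicts with `total_spend` and
    `last_online_order` (a date or None). The spend floor and field
    names are illustrative assumptions.
    """
    today = today or date.today()
    cutoff = today - timedelta(days=window_days)
    return [
        c for c in customers
        if c["total_spend"] >= spend_floor
        and (c.get("last_online_order") is None
             or c["last_online_order"] < cutoff)
    ]
```

Keeping the definition in one function is what made "planned, approved and launched in a single afternoon" realistic: the audience is a parameterised query, not a bespoke spreadsheet exercise.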
The data also exposed a cross-channel behaviour pattern that had been hidden. Customers buying from one in-store product category were 40% more likely to become high-value online shoppers within six months. That finding changed the merchandising approach: stores promoted the online range more deliberately to customers in that category.

Board reporting improved because the questions improved. Instead of reviewing email metrics in isolation, leadership could see CLV:CAC ratios by channel and cohort retention in one place. The practical shift was from asking, 'How did this campaign perform?' to asking, 'Which customer groups are growing profitably, and where is the risk?' That is the sort of retail analytics insight for UK brands that boards can use without a translator.
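The CLV:CAC metric itself is a simple ratio once both sides are measured per channel. A minimal sketch, with wholly illustrative channel names and figures (the source reports no actual values):

```python
def clv_cac_by_channel(channels: dict) -> dict:
    """Compute the CLV:CAC ratio per acquisition channel.

    `channels` maps channel name -> (average customer lifetime value,
    customer acquisition cost), both in the same currency. Channels
    with a zero CAC are skipped rather than dividing by zero.
    """
    return {
        name: round(clv / cac, 2)
        for name, (clv, cac) in channels.items()
        if cac > 0
    }
```

For example, `clv_cac_by_channel({"email": (300.0, 50.0)})` reports a ratio of 6.0 for that channel. The value of putting this on a board pack is less the arithmetic than the consistency: one agreed definition per channel, refreshed from the same unified model as everything else.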
Operational trade-offs and controls
There were trade-offs. A 24-hour refresh cycle was chosen over near real-time updates in the first release. That was deliberate. Daily data was enough for the board use case and most marketing decisions, while keeping delivery risk and cost in check. We could prove trust first, then decide whether lower latency was genuinely worth it.
We also kept external signals in the right place. ONS well-being datasets can help teams sense broader context, but they are not a substitute for first-party behaviour. Used properly, they support planning hypotheses. Used badly, they create stories the customer data cannot support. The control here was simple: every strategic claim needed a source, an owner and a checkpoint date for review.
What we would change next
The technical delivery landed well, but the adoption model needed more weight from day one. We trained users on the dashboards, and that covered the mechanics. What it did not fully cover was the shift in working style: from pulling reports on request to forming a hypothesis, testing it and acting on the result.
If we ran the same engagement again, we would start an Adoption and Value Realisation workstream alongside the data build, with a commercial owner named before phase one closes. The acceptance criteria would be practical and testable: the number of decisions made using the new reporting, and the time from question to action. In one later session I spent a morning rewriting the acceptance criteria for the board reporting story; the tests passed once edge cases around merged customer records were covered. Slightly less dramatic than a boardroom reveal, but far more useful.
For teams trying to move from campaign reporting to board reporting, the real question is not whether more data exists. It usually does. The question is whether your reporting model can connect customer behaviour and commercial outcomes without three days of spreadsheet archaeology. If you want to see what that could look like in your own estate, have a word with DNA Connect about a joined-up data workshop. We will help you map the owners, risks and acceptance criteria properly, so the next reporting cycle is grounded in evidence rather than guesswork.
If this is on your roadmap, DNA can help you run a controlled pilot, measure the outcome, and scale only when the evidence is clear.