Quill's Thoughts

Turning reports into decisions with named owners, dates and acceptance criteria

DNA Product notes · 17 Feb 2026

Created by Matt Wilson · Edited by Marc Woodhead · Reviewed by Marc Woodhead

Executive summary: PharmaCare’s operational streamlining in ANZ with Manhattan Associates is, on the surface, an operations story. Look a bit closer and it becomes a data story too: fewer workarounds, clearer process ownership, and better-controlled movement of information across channels. That matters because marketing and customer teams can only act on insight when the underlying operational truth is consistent.

The practical takeaway is simple. If you want dependable retail analytics insight in the UK from global or regional programmes, you need a repeatable model for data capture, governance, and activation. Otherwise you end up debating whose numbers are “right” instead of deciding what to do next, by when, and who owns it.

Context: why operational change has become a data issue

Retailers and consumer brands are being pulled in two directions at once. On one side, teams are asked to improve service levels, reduce costs, and keep regulators happy. On the other, they are expected to deliver personalisation, loyalty growth, and sharper measurement, often with the same headcount.

This is where an operational platform decision, such as PharmaCare’s work with Manhattan Associates in ANZ, becomes relevant beyond the warehouse. Operations systems shape what data gets created, how reliably it is captured, and whether it can be joined to customer activity. If inventory updates are late, substitutions are not recorded cleanly, or returns reasons are inconsistent, your customer analytics ends up guessing. That is how insight quietly turns into opinion.

There is also a governance angle. Compliance is not just a legal box to tick; it is a way of making your data trustworthy and auditable. A controlled process creates a trail you can defend. For marketing leaders, that means fewer debates about attribution and more confidence when you decide to suppress an audience, adjust contact frequency, or alter a loyalty offer.

What is changing: the operational move and the data knock-on effects

PharmaCare’s move to streamline ANZ operations and strengthen compliance can be read as a shift towards standardisation. Standardisation is not glamorous, but it is how you reduce “special cases” that only exist in spreadsheets and inboxes.

From a data and insight perspective, a few changes typically follow when an organisation tightens operational discipline:

  • Cleaner event capture: orders, picks, dispatches, returns, substitutions, and exceptions are logged consistently, making downstream analytics less brittle.
  • More reliable master data: products, locations, and pack configurations change less chaotically, improving reporting accuracy and forecasting.
  • Stronger controls: permissioning, audit trails, and defined workflows reduce the risk of unauthorised changes and untracked adjustments.

None of that automatically delivers marketing performance, but it creates the conditions where marketing can trust the data. That is the difference between “we think customers are churning” and “we can see churn by segment, confirm the causes, and test remedies with acceptance criteria”.

Implications: from reporting to decisions with owners and dates

When operations and compliance improve, insight teams often hit a short period of discomfort. Old reports break, definitions need rework, and historical comparisons get messy. That’s normal. The opportunity is to use the disruption to fix the long-running issues that block execution.

Loyalty and CRM analysis becomes more credible when it aligns with what actually happened operationally. If a customer was promised next-day delivery but experienced a delay, your analysis should be able to connect to that operational exception, not just speculate based on web activity.

Measurement discipline is the next knock-on effect. Marketing attribution is often treated as a puzzle to solve with more dashboards. In reality, it is a governance problem: agreeing definitions, data sources, and the decision cadence. If your plan has no named owners and dates, it is not a plan; fix it.

Finally, tighter compliance tends to sharpen consent and preference management. That reduces wasted spend and helps protect trust, but only if you are explicit about the source of truth for preferences and how quickly updates propagate.

Single customer view: the bridge between ANZ operations and UK insight

It’s tempting to treat an ANZ operations programme as geographically “over there”, while UK teams focus on local trading and campaigns. The bridge is the customer record. If the organisation wants consistent insight and activation, it needs a single customer view that can incorporate operational truth from multiple regions and systems.

In practice, a single customer view is not a single table in a database. It is a set of agreed rules that answer questions like:

  • What identifiers are allowed, and which are reliable enough to join?
  • How do we handle households, shared emails, and loyalty cards used by more than one person?
  • Which system is the source of truth for preferences, and how quickly do changes propagate?

This matters because “trend” work is only as good as the entity resolution underneath it. If your “repeat buyer” segment includes duplicates, or your “lapsed” group includes customers who simply moved to another channel, you’ll optimise the wrong thing.
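As a minimal sketch of why entity resolution matters (the records and identifiers here are illustrative, not from the article), counting repeat buyers on raw customer IDs versus a resolved identifier can give different segments for the same orders:

```python
from collections import defaultdict

# Hypothetical order records: (customer_id, email) pairs.
# Two IDs share an email, so naive counting treats one person as two.
orders = [
    ("C1", "pat@example.com"),
    ("C2", "pat@example.com"),
    ("C3", "sam@example.com"),
    ("C3", "sam@example.com"),
]

def repeat_buyers(orders, key):
    """Return the set of entities with two or more orders, keyed as given."""
    counts = defaultdict(int)
    for record in orders:
        counts[key(record)] += 1
    return {entity for entity, n in counts.items() if n >= 2}

# Keyed on raw customer_id, Pat looks like two one-time buyers.
by_id = repeat_buyers(orders, key=lambda r: r[0])
# Keyed on a resolved identifier (email here), Pat is one repeat buyer.
by_email = repeat_buyers(orders, key=lambda r: r[1])
```

Here `by_id` misses Pat entirely, while the resolved view counts both Pat and Sam as repeat buyers; real matching rules are more involved (householding, shared loyalty cards), but the failure mode is the same.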

One external signal worth noting: Allied Market Research (via Yahoo Finance, 16 Feb 2026) projects continued growth in cloud storage through 2033. You don’t need that to justify storing more data, but it reinforces the direction of travel: volumes go up, sources multiply, and the cost of loose governance compounds.

Actions to consider: a pragmatic path to green

If you are looking at this and thinking “fine, but what do we do on Monday?”, here is a practical set of actions that keeps scope tight and decisions clear. Treat this as a path to green, not a wish list.

1) Lock definitions before you build dashboards

Pick 10–15 metrics that actually drive decisions across marketing, trading, and operations. Define them in plain English, agree the data source, and name an owner. If you cannot name an owner, you have found a risk, not a metric.
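One way to keep this honest is to treat the metric register as data, not a slide. A minimal sketch (the metric names, sources, and owners below are hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Metric:
    name: str
    definition: str        # plain-English definition everyone agreed
    source: str            # the agreed system of record
    owner: Optional[str]   # a named person, not a team alias

# Illustrative register entries.
metrics = [
    Metric("repeat_rate", "Customers with 2+ orders in 90 days",
           "orders_mart", "J. Smith"),
    Metric("returns_rate", "Returned units / dispatched units",
           "wms_events", None),
]

def unowned(metrics):
    """Metrics without a named owner are risks, not metrics."""
    return [m.name for m in metrics if not m.owner]
```

Running `unowned(metrics)` surfaces exactly the metrics that cannot yet go on a dashboard, which is the point of locking definitions first.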

2) Map operational events to customer outcomes

For the top journeys (first purchase, replenishment, returns, service issues), document the operational events that can influence customer behaviour. Then decide what you will measure and what you will ignore. This is where operational streamlining pays off, because event capture is more consistent.

3) Set acceptance criteria for the single customer view

Make it testable. For example: match-rate thresholds for email and loyalty ID, maximum propagation time for consent updates, and a defined approach to householding. If a criterion fails, rework it within two reporting cycles.
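"Testable" can be taken literally. A minimal sketch of acceptance checks, with illustrative thresholds (agree your own in the steering session):

```python
# Hypothetical acceptance criteria for the single customer view.
CRITERIA = {
    "email_match_rate_min": 0.90,
    "loyalty_match_rate_min": 0.95,
    "consent_propagation_max_hours": 4,
}

def evaluate_scv(observed):
    """Return the list of criteria the current single customer view fails."""
    failures = []
    if observed["email_match_rate"] < CRITERIA["email_match_rate_min"]:
        failures.append("email_match_rate")
    if observed["loyalty_match_rate"] < CRITERIA["loyalty_match_rate_min"]:
        failures.append("loyalty_match_rate")
    if observed["consent_propagation_hours"] > CRITERIA["consent_propagation_max_hours"]:
        failures.append("consent_propagation")
    return failures

# Illustrative measurements from the current pipeline.
observed = {"email_match_rate": 0.92,
            "loyalty_match_rate": 0.93,
            "consent_propagation_hours": 6}
```

With those numbers, `evaluate_scv(observed)` flags the loyalty match rate and consent propagation, giving the weekly triage a concrete worklist rather than a vague sense that "matching needs improving".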

4) Build a lightweight governance cadence

One weekly 30-minute triage for data issues, plus a monthly steering session to approve definition changes, is often enough. Keep a change log for traceability. When someone asks why a number moved, you want an answer in minutes, not a two-week hunt.

5) Prioritise two experiments that use joined-up data

Choose experiments that depend on both operational and marketing signals, such as:

  • Suppressing promotional contact for customers with unresolved service exceptions, then measuring recovery in repeat rate.
  • Adjusting replenishment reminders based on fulfilment reliability by location, not just purchase interval.

Assign an owner and a date for each experiment, and write acceptance criteria upfront. If the experiment cannot be evaluated cleanly, it is not ready to run.
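The readiness test above can be encoded directly. A sketch, using hypothetical briefs that mirror the two experiment ideas (owners, dates, and criteria are illustrative):

```python
from datetime import date

# Hypothetical experiment briefs.
experiments = [
    {"name": "suppress_during_exceptions",
     "owner": "A. Patel",
     "due": date(2026, 3, 31),
     "acceptance": "Repeat rate in suppressed group >= control + 2pp"},
    {"name": "reliability_weighted_reminders",
     "owner": None,            # no named owner yet
     "due": date(2026, 4, 15),
     "acceptance": ""},        # no testable criteria yet
]

def ready_to_run(exp):
    """An experiment needs a named owner, a date, and written acceptance criteria."""
    return bool(exp["owner"]) and exp["due"] is not None and bool(exp["acceptance"])

runnable = [e["name"] for e in experiments if ready_to_run(e)]
```

Only the first brief passes; the second is a backlog item, not an experiment, until someone owns it and writes criteria it can be evaluated against.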

Risks and dependencies to surface early

  • Identity resolution complexity: shared identifiers and inconsistent customer capture can undermine insight. Dependency: data engineering capacity to implement matching rules and monitoring.
  • Consent and preference latency: if updates do not propagate quickly, you risk messaging errors. Dependency: integration patterns and clear source-of-truth decisions.
  • Operational change management: process changes can create temporary data discontinuities. Dependency: a joint plan with operations on when changes land and how reporting breaks are handled.

Yesterday, after stand-up, a reporting ticket was blocked by a missing returns reason code mapping. A quick call with the operations owner cleared it. New date set. That is the pattern to aim for: surfaced early, fixed quickly, logged properly.

PharmaCare’s ANZ streamlining is a useful reminder that customer insight is only as dependable as the operational reality underneath it. If you want to turn that into reliable retail analytics insight in the UK, get the definitions nailed down, make the single customer view testable, and run a small number of experiments with clear owners and dates. If you want a joined-up data workshop, we’ll map your current state, agree acceptance criteria, and leave you with a path to green.
