
This is a delivery assurance note, not a sales pitch. We helped a major UK retailer move from reactive, high-friction consent handling to an auditable operating model that legal, CRM and delivery teams could actually use. At the start of 2025, a single data subject query could take two people the best part of a day, with evidence spread across three systems and too much guesswork in the middle.
The fix was not a shiny new platform. It was a minimum evidence pack for every consent change, suppression decision and campaign approval event, captured at source and attached to the customer and campaign records already in use. By July 2025, query resolution time had dropped from more than 48 hours to under 15 minutes. That is the bit that matters, and the bit UK data governance teams can test: clear owners, dated actions, acceptance criteria and a visible path to green.
Starting context
Work started in January 2025 after an internal audit near-miss in late 2024. The audit team requested the evidence trail for a cohort collected through a prize draw promotion. It took 48 hours to reconstruct the route from form tick-box, through a third-party agency database, into the client ESP. The evidence existed, but timestamps were inconsistent, approval records sat in email threads, and suppression reasons were not standardised. Risk was logged as significant.
The operational drag was just as important as the audit finding. Consent compliance operations had grown in the usual way: one form tool here, one spreadsheet there, and a sign-off process that lived mostly in inboxes. Marketing needed pace. Legal and data teams needed proof. Neither side was being awkward; the process was simply underspecified. No agreed checklist. No acceptance criteria. No named owner for the end-to-end trail. Sharp opinion, because it is deserved: if your plan has no named owners and dates, it is not a plan.
Our initial checkpoint was simple and testable. By 28 February 2025, the client needed an agreed definition of what constituted a complete evidence pack for three events: consent capture or change, suppression, and campaign approval. Owner: Head of Data, Aisha Khan. Acceptance criteria: one schema, one retention rule, and one routing decision for each event type. Bit tight on time, but doable.
Intervention design
We scoped the intervention in February 2025 and chose the least theatrical route. No replacement stack. No governance wallpaper. We defined the smallest bundle of records that would satisfy an audit query and support a defensible campaign decision. The output was a structured JSON evidence object created via API calls and event hooks at the point the action happened, then stored against the relevant customer or campaign record.
The minimum evidence pack had three parts:
- Consent record: UTC timestamp; capture source such as form ID PZ101-WINTER-COMP; the exact version of the privacy notice and consent language presented; channel; lawful-basis indicator where relevant; and a pseudonymous technical marker such as hashed IP or device/session reference where justified.
- Suppression record: suppression reason using a controlled list such as customer_request, hard_bounce or deceased_flag; source of instruction such as unsubscribe link, service case reference or data feed; timestamp; and the system or user that executed the change.
- Campaign approval record: campaign ID; final creative reference; audience or segment criteria; named approver; approval timestamp; and evidence that suppression rules were applied before deployment.
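To make the consent part of the pack concrete, here is a minimal sketch of how an evidence object might be assembled at the point of capture. The field names (captured_at_utc, notice_version, session_hash and so on) are assumptions for illustration; the article does not publish the actual schema.

```python
from datetime import datetime, timezone
import hashlib
import json

def build_consent_record(form_id, notice_version, channel,
                         lawful_basis=None, session_ref=None):
    """Assemble a consent evidence object at the point of capture.

    Field names are illustrative; the programme's real schema is not published.
    """
    record = {
        "event_type": "consent_capture",
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
        "source_form_id": form_id,         # e.g. "PZ101-WINTER-COMP"
        "notice_version": notice_version,  # exact notice and consent language shown
        "channel": channel,
    }
    if lawful_basis:
        record["lawful_basis"] = lawful_basis  # only where relevant
    if session_ref:
        # pseudonymous technical marker, stored hashed where justified
        record["session_hash"] = hashlib.sha256(session_ref.encode()).hexdigest()
    return record

evidence = build_consent_record("PZ101-WINTER-COMP", "v3.2", "email",
                                lawful_basis="consent", session_ref="dev-1234")
print(json.dumps(evidence, indent=2))
```

The point is that the object is created by the capture event itself, not reconstructed later, so the timestamp and notice version are facts rather than recollections.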
Each part had acceptance criteria. For example, a suppression decision could not be marked complete unless the reason code, source and execution time were present. A campaign approval record failed QA if it did not show both the approver and the segment version used. That sounds fussy. It is. Fussy is useful when someone asks six months later why a message was sent or blocked.
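Acceptance criteria of this kind are easy to express as a completeness check. A sketch, again with hypothetical field names, of the QA rule that a suppression record cannot pass without reason code, source and execution time, and an approval record cannot pass without approver and segment version:

```python
REQUIRED_BY_EVENT = {
    # a suppression decision is incomplete without reason, source and execution time
    "suppression": {"reason_code", "source", "executed_at_utc"},
    # a campaign approval must show both the approver and the segment version used
    "campaign_approval": {"campaign_id", "approver", "segment_version", "approved_at_utc"},
}

def qa_check(record):
    """Return the missing required fields; an empty list means the record passes QA."""
    required = REQUIRED_BY_EVENT.get(record.get("event_type"), set())
    return sorted(f for f in required if not record.get(f))

incomplete = {"event_type": "suppression", "reason_code": "customer_request"}
print(qa_check(incomplete))  # → ['executed_at_utc', 'source']
```

A record that fails the check is simply not marked complete, which is the fussiness the paragraph above defends.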
There was one design change worth admitting. I pushed early for a central dashboard. I was wrong about the effort and, frankly, wrong about the need. The data feed and legacy dependencies were trickier than expected, and a dashboard would have delayed the control we actually needed. At one stand-up, ticket REF-2025-03 turned out to be blocked by a legacy dependency; a quick call with the IT owner cleared it and a new date was set. We kept the architecture boring on purpose: push the evidence into the CDP and deployment tooling already used by operations, and keep a change log for traceability.
The suppression design also forced a harder conversation about sensitive records. The Office for National Statistics publishes weekly deaths data by region and related local authority datasets. We did not use those datasets to make individual suppression decisions, and that distinction matters. What they did do was sharpen the control requirement: where a deceased flag exists in a legitimate operational flow, the suppression action must be fast, attributable and auditable. Respect first, metrics second.
Delivery controls and ownership model
Good governance gets vague very quickly unless someone owns each move. We assigned explicit owners and dates. Aisha Khan, Head of Data, owned schema definition and storage rules by 28 February 2025. The CRM Operations Manager owned ESP and CDP field mapping by 31 March 2025. Legal owned notice-version control and approver rules by 11 April 2025. IT integration owned API event capture and failure logging by 31 May 2025. My role was delivery assurance: sequence the work, keep decisions written down, and stop scope creep pretending to be prudence.
We also set measurable checkpoints. By mid-April 2025, 95% of new form captures needed to produce a valid consent evidence object in test. By end of May, suppression events had to produce complete reason and source records in 99% of test cases. By go-live in July 2025, the business needed to resolve a sample consent query in under 20 minutes using only the operational systems and stored evidence pack, with no email archaeology. That was the practical exam.
Risk and mitigation were tracked openly. Main risks were legacy field mismatch, partner data inconsistency, and over-collection of technical markers. Mitigations were equally plain: controlled vocabularies for reasons, schema validation at ingestion, legal review of fields before release, and failure logging for missing records. Where assumptions existed, we wrote them down. Cheers to the boring change log; it saved arguments later.
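Two of those mitigations, controlled vocabularies and failure logging, can be sketched together as an ingestion gate. This is an illustrative sketch, not the client's implementation; the reason codes come from the article's examples, the other field names are assumed.

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("evidence_ingestion")

# controlled vocabulary for suppression reasons, per the article's examples
SUPPRESSION_REASONS = {"customer_request", "hard_bounce", "deceased_flag"}

def ingest_suppression(record):
    """Validate a suppression record at ingestion; log and reject failures."""
    problems = []
    if record.get("reason_code") not in SUPPRESSION_REASONS:
        problems.append("unknown reason_code: %r" % record.get("reason_code"))
    for field in ("source", "executed_at_utc", "executed_by"):
        if not record.get(field):
            problems.append("missing field: " + field)
    if problems:
        # failure logging for missing or malformed records, per the mitigation list
        log.warning("rejected suppression record %s: %s",
                    record.get("id"), "; ".join(problems))
        return False
    return True
```

Rejected records never enter the evidence store silently; they surface in the log, which is what makes the open risk tracking honest.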
Observed outcomes
Final integration completed in July 2025. The clearest result was the core operational measure: average time to resolve a consent query dropped from more than 48 hours to under 15 minutes. Analysts could retrieve consent history, suppression changes and campaign approval context directly from the CDP and deployment records. No one needed to chase old email threads to prove which version of a notice had been shown or who signed off a segment.
There was a second-order gain in deliverability and list hygiene. Once suppression reasons were standardised and captured at source, hard-bounce handling became more reliable and duplicate overrides reduced. By the end of Q3 2025, inbox placement across key segments improved by 1.5%. I would not oversell that as magic. It is what happens when bad records stop sloshing around the process.
The less visible outcome was confidence. A lot of what gets sold as trust architecture is compliance theatre: polished policy language and no usable proof. This programme replaced broad claims with timestamped records and a defensible approval trail. If legal asked why a campaign went out on a given date, there was an answer. If CRM asked why a customer was suppressed, there was an answer. Different teams, same evidence.
What we would change next
The obvious gap is third-party data. The control works well for first-party capture on the client's own properties. It is weaker when acquisition relies on partner competitions, agency-run forms or external ingestion files. That was a deliberate trade-off. We fixed the 90% we controlled directly before trying to solve the messy edge. I still think that was the right call.
Even so, the next phase needs stricter onboarding controls for partners. Between Q4 2025 and March 2026, I rewrote the acceptance criteria for partner onboarding, and tests passed once edge cases around date formats and notice-version references were covered. The updated plan is straightforward: partners must provide the same minimum evidence fields, ingestion must validate schema on receipt, and exceptions must route to manual review before records are released to campaign audiences.
Owner for that phase is Sarah Jones, Head of Partnerships. Date for first draft onboarding standard: 30 April 2026. Date for pilot with one partner feed: 31 May 2026. Acceptance criteria: 100% of pilot records carry source ID, notice version, consent timestamp and suppression mapping; failed records are quarantined with an exception log; and campaign teams cannot use partner records missing required evidence. That is the path to green. Not elegant, perhaps, but it will hold up.
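The quarantine rule in those acceptance criteria can be sketched as a simple routing step. The required field names mirror the criteria above but are assumptions about how the pilot schema might look; the exception-log structure is likewise hypothetical.

```python
# required evidence fields for partner records, per the pilot acceptance criteria
REQUIRED_PARTNER_FIELDS = ("source_id", "notice_version",
                           "consent_timestamp", "suppression_mapping")

def route_partner_batch(records):
    """Split a partner feed into released records and quarantined exceptions.

    Records missing any required evidence field are quarantined with an
    exception note and never reach campaign audiences until reviewed.
    """
    released, quarantined = [], []
    for rec in records:
        missing = [f for f in REQUIRED_PARTNER_FIELDS if not rec.get(f)]
        if missing:
            quarantined.append({"record": rec,
                                "exception": "missing: " + ", ".join(missing)})
        else:
            released.append(rec)
    return released, quarantined
```

The design choice is the same as in phase one: the gate sits at ingestion, so campaign teams physically cannot select records that lack evidence.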
Building a defensible audit trail is not a one-off compliance exercise. It is an operational discipline. If you want to make consent changes, suppression decisions and campaign approvals easier to evidence without slowing delivery, Holograph can help you map the minimum evidence pack, assign the owners, and put dates against the work. Contact Holograph and we will help you sort the plan properly.