Quill's Thoughts

AdTech and martech budgets may converge, but consent logic still fails in the handover

AdTech and martech budgets may converge, but consent handovers still break. A practical briefing on governance, lineage and safer activation in the UK.

DNA Playbooks · 18 Mar 2026 · 8 min read



A curious thing is happening in 2026: budget lines are moving closer together while operational logic is still split. Teams may buy media and manage customer journeys from neighbouring pots, yet the consent rules that govern those audiences often fracture at the handover. That sounds technical. It is also expensive. When activation teams cannot prove who approved what, for which purpose, and when that status last changed, audience quality drops and release cycles slow.

My view is blunt because it needs to be. A strategy that cannot survive contact with operations is not strategy, it is branding copy. The market is pushing adtech and martech into the same room, but the practical advantage will go to organisations that treat audience activation governance as a live operating discipline, not a policy PDF. The opportunity is real. So is the friction.

Signal baseline

The convergence story is credible. Large platforms have spent the past two years tightening links between media activation, first-party data, and CRM orchestration. You can see the direction in product roadmaps from Google, Salesforce, Adobe and major clean room providers through 2024 and into 2025: more shared identity handling, more connected audience pipelines, and more pressure on owned data to do work across channels. The commercial argument is obvious enough. Media costs stay stubborn, signal loss from browser and device changes has not magically reversed, and first-party audiences remain one of the few levers brands can genuinely shape.

But the market signal gets overstated when people assume budget convergence equals operational convergence. It does not. According to the UK GDPR framework overseen by the Information Commissioner’s Office, purpose limitation and lawful processing still depend on the context in which data was collected and how it will be used. A customer profile enriched for service delivery is not automatically ready for paid media activation. The same person may sit in both systems; the permissions and acceptable use cases may not.

There is a broader context here as well. According to the Office for National Statistics, UK personal well-being estimates continue to track differences in anxiety and life satisfaction by period and place. That matters less as a marketing metric than as a reminder that trust, pressure, and public sensitivity are not static conditions. If your customer communications feel misaligned with the relationship people think they have with you, response can turn quickly. Not every organisation will see that shift in a dashboard before it shows up in complaints.

What is shifting

The real shift is not simply budget alignment. It is the collapse of tolerance for opaque handovers. Platform teams are being asked to move faster while proving more. In a strategy call this week, we tested two paths and dropped one after the first hard metric came in. The tempting route was to unify audience creation first and let channel teams interpret permissions downstream. It looked neat. It failed the moment we mapped one segment into two activation destinations with different policy requirements and suppression rules.

The better route, if less glamorous, was to define activation lineage before scale. By activation lineage, I mean a traceable record of source system, consent state, transformation logic, approval owner, destination mapping and effective dates. That is the practical spine behind consent-aware segmentation. Without it, one audience can splinter into several unofficial versions, each with its own field mappings, exclusions and expiry assumptions.
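As a sketch, the lineage record described above can be captured in a minimal data structure. The field names and example values here are illustrative assumptions, not a standard schema; the point is that every activated audience carries these answers explicitly rather than by folklore.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class ActivationLineage:
    """Traceable record behind one activated audience (illustrative fields)."""
    source_system: str          # e.g. CRM, web analytics, support desk
    consent_state: str          # the consent status as captured, not inferred
    transformation: str         # plain-language summary of the segmentation logic
    approval_owner: str         # the named role that signed off this state
    destination: str            # activation endpoint, e.g. a paid-social match list
    effective_from: date        # when this consent/approval state took effect
    effective_to: Optional[date]  # None while the state remains current

lineage = ActivationLineage(
    source_system="crm",
    consent_state="marketing_opt_in",
    transformation="active customers, last purchase within 180 days",
    approval_owner="data_governance_lead",
    destination="paid_social_match",
    effective_from=date(2026, 3, 1),
    effective_to=None,
)
```

A frozen dataclass is a deliberate choice here: a lineage record should be superseded by a new record with new effective dates, never mutated in place, which is what makes it usable as evidence later.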

There is a reason this handover keeps failing. Martech tools tend to think in customer journeys, service states and lifecycle triggers. Adtech tools tend to think in addressability, refresh cadence and destination-specific identifiers. Both are rational within their own world. The friction starts when teams assume shared labels mean shared meaning. A segment called “recent enquirers” might include support contacts in one system, web leads in another, and a lookalike seed in a third. The label survives. The governance does not.
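To make the shared-label problem concrete, here is a hypothetical comparison of what three systems might hold under the same segment name. The member identifiers are invented for illustration; the pattern, not the data, is the point.

```python
# Hypothetical membership for the same segment label in three systems.
recent_enquirers = {
    "crm":    {"cust_101", "cust_102", "supp_901"},  # includes support contacts
    "web":    {"cust_101", "lead_550", "lead_551"},  # includes raw web leads
    "adtech": {"cust_101", "seed_777"},              # includes lookalike seed members
}

# The label is shared; the intersection shows how little meaning travels with it.
shared = set.intersection(*recent_enquirers.values())

# What each system holds beyond the common core, i.e. its local interpretation.
local_only = {system: members - shared for system, members in recent_enquirers.items()}

print(shared)  # only cust_101 is common to all three systems
```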

I liked the first option, but the evidence favoured the second once the numbers landed. If release delays are already biting, there is a natural urge to standardise names and push on. My judgement is that this is where many programmes waste a quarter. Growth claims without baseline evidence should be parked until the data catches up.

Where the handover fails

Three break points show up repeatedly. The first is purpose drift. Data captured for account servicing, order updates or support resolution often enters segmentation logic by proximity rather than design. In regulated or high-trust sectors, that is where trouble starts. The second is field-level ambiguity. A consent flag may travel, but the definition behind it does not. “Opted in” is meaningless unless teams know the channel, purpose, capture point, timestamp and whether downstream systems preserve revocation at the same granularity. The third is ownership. No one is quite sure whether CRM, media, legal, or data engineering signs off the final audience state.
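As a hedged sketch of the second break point: an opt-in flag only becomes meaningful once its channel, purpose, and capture details travel with it. The record shape and the readiness rule below are invented for illustration and are not a compliance standard.

```python
from datetime import datetime, timezone

# A consent record that carries its own definition, not just a boolean.
consent = {
    "opted_in": True,
    "channel": "email",
    "purpose": "service_updates",
    "capture_point": "account_settings",
    "captured_at": datetime(2025, 11, 4, tzinfo=timezone.utc),
    "revocation_granularity": "per_channel",
}

def ready_for(destination_purpose: str, destination_channel: str, record: dict) -> bool:
    """True only when the flag's scope matches the destination's use (illustrative rule)."""
    return bool(
        record["opted_in"]
        and record["purpose"] == destination_purpose
        and record["channel"] == destination_channel
    )

# The flag is green, but it was captured for service email, not paid media.
print(ready_for("paid_media", "display", consent))       # False
print(ready_for("service_updates", "email", consent))    # True
```

The same green flag answers differently per destination, which is exactly why a bare "opted in" column cannot survive the handover on its own.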

The problem is often visible in paperwork before it is visible in code. A plan looked strong on paper, then one dependency moved, so we re-ordered the sequence and regained momentum. The clue was a simple export file with six columns carrying audience rules and no line for consent provenance. Everyone in the room understood the targeting logic. No one could explain the permission logic in one sentence. That is usually the moment to stop pretending taxonomy alone will rescue the process.

An implied objection tends to come up here: surely customer data platforms and warehouse-native stacks already solve this. They often help, sometimes substantially, but they do not remove the need for a working customer data operating model. Tools can store status and orchestrate events. They cannot settle unresolved policy interpretations or channel-specific exceptions without human ownership. As it stands, the winning pattern is not the stack with the most connectors; it is the one that can evidence decisions at handover speed.

Who is affected

Data leads feel this as rework and audit exposure. CRM managers feel it as segment drift, delayed sends and customer complaints that are hard to reconstruct. Activation specialists feel it most immediately: the audience is technically live, yet commercially shaky. Platform teams end up carrying the burden because every gap becomes an implementation problem once launch is close.

There is also a budget consequence. Convergence tends to promise efficiency, but weak handovers can create the opposite. If the same audience has to be rebuilt for paid social, search customer match, email suppression and on-site personalisation, each team starts maintaining its own local truth. That introduces duplicated QA, duplicated sign-off and duplicated risk. You do not need a dramatic compliance event for this to hurt. Slower campaign release and weaker match quality are enough.

Worth a closer look, too, is how this plays across local teams and business units. The Office for National Statistics publishes local authority and quarterly UK well-being datasets that show material differences by geography and time period. While those datasets are not marketing permissions data, they are a reminder that UK audiences are not one homogeneous block. Regional nuance matters operationally as much as creatively. The same should apply to consent rules, suppression logic and message eligibility where local governance differs.

Actions and watchpoints

The option set is fairly clear. Option one is cosmetic convergence: align reporting, tidy taxonomy, centralise some budget oversight, and leave permissions logic largely where it sits. That can ease planning this quarter, but the trade-off is recurring friction in delivery. Option two is governed convergence: define lineage standards, clarify approval ownership, and expose channel-specific permission logic before audience build reaches execution. This takes longer to stand up, yet it usually pays back sooner than teams expect because it cuts rebuilds.

If I had to defend the plan next week, I would pick the second path and stage it in four moves. Start with the handover artefacts, not the aspirational architecture. For the next 30 days, require every priority audience to carry source, purpose, consent state, refresh rule, destination, and owner. Then map where those definitions mutate across CRM and media workflows. In the following 60 to 90 days, standardise only the fields that repeatedly break release or proof. After that, automate propagation and suppression logic where the evidence is stable.
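The 30-day requirement above can be enforced with a simple gate before any audience build reaches execution. The field list mirrors the six items named in the article; the record shape and field names are assumptions for illustration.

```python
# The six handover fields the article requires every priority audience to carry.
REQUIRED_FIELDS = ("source", "purpose", "consent_state", "refresh_rule", "destination", "owner")

def missing_fields(audience: dict) -> list:
    """Return which required handover fields are absent or empty."""
    return [field for field in REQUIRED_FIELDS if not audience.get(field)]

audience = {
    "name": "lapsed_buyers_q2",
    "source": "crm",
    "purpose": "reactivation",
    "consent_state": "marketing_opt_in",
    "destination": "email_suppression",
    "owner": "crm_manager",
    # refresh_rule is absent: this audience should not reach activation yet
}

print(missing_fields(audience))  # ['refresh_rule']
```

Run as a pre-build check, a non-empty result blocks the release; mapping where these fields mutate across CRM and media workflows is then a matter of diffing the same record at each handover point.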

Two watchpoints matter. One is false certainty. A green consent flag is not proof of activation readiness if the destination interprets scope differently. The other is over-centralisation. Some variation between channels is legitimate because activation contexts differ. The aim is not one blunt rule for everything. It is a governed record of why the difference exists and who approved it. That small tension never fully disappears, and that is fine. Real operating models have edges.

DNA is built for this less glamorous but more valuable layer of work: turning fragmented signals into governed audiences, usable mappings and handovers that stand up under pressure. If your adtech and martech budgets are moving together while your consent logic is still crossing fingers at the point of activation, now is the time to fix the sequence. Contact the DNA team to review your current handover model and identify the next move that improves speed without weakening control.

If this is on your roadmap, DNA can help you run a controlled pilot, measure the outcome, and scale only when the evidence is clear.
