
UK utility firms already hold enough service data to make support-led cross-sell commercially useful. The surprise is that the blocker is rarely model performance. It is permissioning, field provenance and whether anyone can explain why a segment was built in the first place. That sounds mundane. It is also where projects either compound value or quietly stall.
As it stands, the decision is not whether to use AI in service journeys. Many teams already do, from chatbot triage to case-routing and next-best-action prompts. The live question is narrower and more valuable: which support signals can move into cross-sell, under what consent logic, and with what controls. My view is simple. A strategy that cannot survive contact with operations is not strategy; it is branding copy. For UK utilities, that points towards a governed, staged model over a fast but brittle land-grab.
Quick context
The immediate choice is between two operating paths. Path one uses AI support outputs as a broad source of cross-sell propensity, pushing more service events, case notes and behavioural signals into marketing audiences quickly. Path two limits early use to a smaller set of operationally clear, permission-aware signals such as tariff enquiry, home move, payment method change, meter upgrade interest or service plan questions. In a strategy call this week, we tested two paths and dropped one after the first hard metric came in. The wider path looked attractive until the team mapped consent states and found three different definitions across CRM, contact centre tooling and outbound channels.
That is the real decision model. Not “can AI infer intent?” but “can the business prove lawful, accurate and operationally usable intent at the point of activation?” In UK utilities, where billing, complaints, service continuity and vulnerability considerations sit close to the customer record, a weak customer data operating model creates more rework than growth. DNA’s role here is practical. It turns fragmented service and marketing signals into governed audience logic, with traceable rules for segment entry, suppression and destination use.
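To make "traceable rules for segment entry, suppression and destination use" concrete, here is a minimal sketch of what such a rule can look like in code. This is illustrative only: the class, field and flag names are assumptions, not DNA's actual API, and a real implementation would read consent state from governed systems rather than a dictionary.

```python
from dataclasses import dataclass

@dataclass
class SegmentRule:
    """Hypothetical governed segment rule: entry, suppression and
    destination permissions are all explicit and auditable."""
    name: str
    entry_events: set          # service events that qualify a customer
    suppression_flags: set     # flags that always exclude a customer
    allowed_destinations: set  # channels this segment may be sent to

    def evaluate(self, customer: dict, destination: str):
        """Return (eligible, reason) so every decision leaves a trace."""
        if destination not in self.allowed_destinations:
            return False, f"destination '{destination}' not approved"
        if customer["suppression_flags"] & self.suppression_flags:
            return False, "suppression flag present"
        if not customer["consent"].get(destination, False):
            return False, f"no current consent for '{destination}'"
        if not (customer["events"] & self.entry_events):
            return False, "no qualifying service event"
        return True, "eligible"

# Illustrative rule and customer record.
rule = SegmentRule(
    name="tariff_review_cross_sell",
    entry_events={"tariff_enquiry", "home_move"},
    suppression_flags={"open_complaint", "vulnerability_flag"},
    allowed_destinations={"email"},
)
customer = {
    "events": {"tariff_enquiry"},
    "suppression_flags": set(),
    "consent": {"email": True},
}
eligible, reason = rule.evaluate(customer, "email")
```

The point of the shape, not the syntax: because the rule returns a reason alongside the decision, every inclusion or exclusion can be logged and explained later, which is exactly what approval and audit conversations need.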
A comparative view of the options
The useful comparison is not AI versus non-AI. It is broad inference versus constrained activation. Broad inference usually promises more volume. Constrained activation usually delivers cleaner execution in the first 90 days. I liked the first option, but the evidence favoured the second once the numbers landed. When teams cannot reconcile a service event to a current permission state and a destination-specific rule, segment throughput slows and confidence drops.
| Option | Upside | Main constraint | Best fit |
|---|---|---|---|
| Broad AI-led segment creation from many support signals | Higher theoretical reach and more intent hypotheses | Consent ambiguity, weak field lineage, harder QA before activation | Mature organisations with unified governance and audit trails |
| Constrained segment creation from approved service events and declared interests | Faster approvals, lower rework, easier destination mapping | Lower initial volume and fewer lookalike assumptions | Most UK utilities starting support-led cross-sell |
There is a decent precedent in adjacent activation work. Holograph's campaign deployments have shown that measurable uplift tends to come from disciplined system design rather than maximal complexity. For example, a GetPRO Campaigns deployment with Tesco and Co-op reported a 43% uplift in email sign-ups. Different category, to be fair, but the lesson carries over: when logic is explicit and activation pathways are governed, teams move faster with fewer mistakes.
Operational pitfalls
The practical impact shows up in three places: data handling, approval flow and segment performance. If service AI generates labels such as “move home likely” or “EV charger interest”, the team needs a visible chain from source event to derived attribute to outbound platform. That is where activation lineage stops being abstract. It is the record of how a segment came to exist, who approved it, which consent logic applied and where it was sent.
Without that chain, operational friction appears quickly. A plan looked strong on paper, then one dependency moved, so we re-ordered the sequence and regained momentum. In this case, the dependency was simple: the contact centre platform stored interaction outcomes one way, while the CRM stored product eligibilities another. The segment definition passed QA in one system and failed in another. That kind of mismatch is common, and it is expensive in staff time. It also means campaign timing slips, often by days rather than hours, which matters if the intended trigger was a recent service event.
Context can distort intent, too. A cold snap this week, with Sunderland sitting at around -1°C, is the sort of real-world pressure that can spike service contacts about heating or billing. An AI model may detect urgency or product interest, but governance must decide whether the context makes that signal unsuitable for cross-sell. The model can classify; governance decides if acting is sensible.
A checklist for safe activation
To move forward, use declared or clearly inferred service intents only where three conditions hold: the source field is stable, the permission logic is current, and the activation destination has mapped rules for inclusion and suppression. Build the first wave around use cases with obvious commercial timing, like home moves or tariff reviews.
Consider this short checklist:
- Identify approved signals: Start with 3-5 high-value, low-ambiguity events such as tariff reviews or home moves.
- Map consent and exclusions: Document exact consent flags and suppression rules, e.g., for recent complaints or vulnerability flags.
- Define destination logic: Ensure rules are clear for each channel, like email or paid media.
- Test one path end-to-end: Run a controlled pilot to validate data flow, governance, and commercial outcome.
- Establish a baseline: Measure reach, conversion, and exception rates before expanding.
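The baseline in the final step can be as simple as three rates computed from the pilot run. A minimal sketch, assuming each pilot record carries delivery, conversion and exception flags; the field names are assumptions, not a prescribed schema.

```python
def pilot_baseline(records):
    """Compute reach, conversion and exception rates for a pilot:
    conversion is measured against reached customers, exceptions
    against the whole pilot population."""
    total = len(records)
    reached = [r for r in records if r["delivered"]]
    converted = [r for r in reached if r["converted"]]
    exceptions = [r for r in records if r["exception"]]
    return {
        "reach_rate": len(reached) / total,
        "conversion_rate": len(converted) / len(reached) if reached else 0.0,
        "exception_rate": len(exceptions) / total,
    }

# Illustrative pilot of four customers.
records = [
    {"delivered": True,  "converted": True,  "exception": False},
    {"delivered": True,  "converted": False, "exception": False},
    {"delivered": False, "converted": False, "exception": True},
    {"delivered": True,  "converted": False, "exception": False},
]
baseline = pilot_baseline(records)
# reach 3/4, conversion 1/3, exceptions 1/4
```

Measuring the exception rate alongside conversion is deliberate: it is the governance health metric, and a rising exception rate is the early warning that expansion should wait.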
Closing guidance
Commercially, a constrained-first model is not the loudest route, but it is the one most likely to create defendable value first. It reduces approval drag, cuts rework, and gives leadership a baseline within one planning cycle, usually 8 to 12 weeks. The tension is that starting narrow makes expansion tempting before governance catches up. That pressure rarely disappears.
If you are weighing service data, permissioning and segment use, map the option set properly and test one governed segment family end to end. DNA is built to turn fragmented records into traceable, usable activation flows: it can help you run a controlled pilot, measure the outcome, and scale only when the evidence is clear. To see how this could work for your utility, contact the team to design your first proof pack and next deployment step.
Proof and original case study
This interpretation draws on a public Holograph case study. For the original source detail, see kosmos.software, the original Holograph case study and further Holograph case studies.