Quill's Thoughts

Using incident data to measure service failures and prevent repeat issues

Saudi Arabia’s suspension of 1,800 travel agencies is a sharp reminder that service failures leave an evidence trail. See how joined-up data, incident tracking and clear ownership help teams prevent repeat issues with DNA.

DNA Product notes 10 Feb 2026 8 min read

Created by Matt Wilson · Edited by Quill Admin · Reviewed by Marc Woodhead

Executive summary: Saudi Arabia’s reported suspension of 1,800 foreign travel agencies over Umrah service violations is a travel headline, but the underlying signal is wider. Service failures are often logged as isolated incidents when they are really evidence failures: what was sold, what was delivered, what changed, and what the customer was told.

That is the tension. Teams can have no shortage of reports and still lack traceability. The discipline behind UK retail analytics is useful here: one joined record, consistent definitions, named owners, and a feedback loop from incidents into decisions. If you cannot evidence the customer journey with timestamps, you are managing risk on trust. However tight the timeline, trust is not a control.

Signal baseline

The Economic Times reports the suspensions; the full underlying detail is not available in the sources to hand. That rules out any big theory. What the headline does support is a simpler reading: scrutiny appears to be tightening around fulfilment and service quality, and intermediaries with weak operational controls are more exposed when that happens.

The underlying pattern is not unusual. One team sells an offer, another confirms it, a third fulfils it, and the record between them is not clean enough to settle what happened. Customers experience that as confusion. Regulators tend to see it as a failure of evidence and control. Inside the business, it often traces back to a short list of gaps: inconsistent product definitions, duplicated records, incomplete permissions, and communication events that never make it back to the main customer or booking record.

Those are measurable failure points, not abstract data issues. Two checks matter immediately: can your team reconstruct a customer case quickly, and can it do so from one joined record rather than several systems and a layer of memory. If the answer is no, the risk is already operational.

What is shifting

Two changes matter here, and they point the same way.

Enforcement looks more systematic. When authorities or partners can compare declared terms with delivered service, complexity stops covering for poor controls. Gaps between teams become evidence gaps.

Customer expectations have hardened. People expect clear terms, proactive updates, and a fast route to resolution when something changes. The questions are plain: what did I buy, when did you tell me, what changed, and where is the proof. If that chain is missing, contact volumes rise, disputes take longer, and trust falls away before anyone has produced a tidy post-incident summary.

The organisations that hold up best do not treat data as a reporting by-product. They keep a minimum evidence set in order: customer identity, permissions, booking or order record, product terms, fulfilment events, and communications log. It is not glamorous. It is the sort of routine discipline that keeps teams out of trouble.
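To make that minimum evidence set concrete, here is a minimal sketch of what one joined record might hold. The class and field names are illustrative assumptions, not DNA's actual schema; the point is that every dated event lands on the same record and can be replayed in order.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class FulfilmentEvent:
    event_type: str        # e.g. "confirmed", "changed", "delivered"
    occurred_at: datetime
    detail: str

@dataclass
class CommunicationEvent:
    channel: str           # e.g. "email", "sms"
    sent_at: datetime
    summary: str

@dataclass
class EvidenceRecord:
    # Hypothetical minimum evidence set; adapt to your systems of record.
    customer_id: str
    permissions: dict      # consent flags by purpose
    booking_ref: str
    product_terms: dict    # inclusions, exclusions, material conditions
    fulfilment_events: list = field(default_factory=list)
    communications: list = field(default_factory=list)

    def timeline(self):
        """Every dated event in order, so a case can be replayed end to end."""
        events = [(e.occurred_at, e.detail) for e in self.fulfilment_events]
        events += [(c.sent_at, c.summary) for c in self.communications]
        return sorted(events)
```

If a case can be answered by calling one method on one record, the evidence chain exists; if the answer needs three exports and someone's memory, it does not.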

The short answer

What does DNA actually help UK teams do? It gives them a governed operating layer for identity, consent, segmentation, and activation readiness, so they can move from fragmented records to usable audiences and defensible customer histories.

That matters because most teams do not first run out of data. They run into weak lineage, uncertain ownership, and audience logic that changes from one export to the next. DNA is built to reduce that drift. The proof question is straightforward: are lineage, ownership, and activation confidence clear enough to act on now.

Where insight breaks without traceability

Insight is easy to admire right up to the point it needs defending. Then traceability does the real work.

Marketing teams are often asked to explain the customer experience after the event, even when the underlying evidence sits elsewhere. Attribution can tell you which channel drove a booking or order. It cannot, on its own, show whether the promised service was fulfilled as described, whether the customer had to chase for updates, or whether the same issue came back later.

This is where a single customer view is often oversold or misunderstood. It is not a dashboard. It is not a vague aspiration either. It is an operating agreement with owners and acceptance criteria: what counts as a customer record, how matching works, which fields are mandatory, and which system holds the source of truth. When nobody owns those rules, they drift. Once that happens, insight starts leaning on opinion dressed up as reporting.

A practical checkpoint is simple enough: document the source of truth for identity, permissions, and fulfilment status, then review exceptions on a set cadence. If exception volume climbs after a system or process change, treat it as an incident signal rather than background noise.
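That checkpoint can be automated in a basic way. The sketch below flags when daily exception volume climbs after a change date; the seven-day window and 1.5x threshold are illustrative assumptions to tune, not recommended values.

```python
from datetime import date

def exception_spike(counts, change_day, window=7, threshold=1.5):
    """Flag when mean daily exceptions after a change exceed the
    pre-change mean by `threshold` (assumed 1.5x; tune per journey).

    counts: dict mapping date -> number of exceptions logged that day.
    """
    before = [n for d, n in counts.items() if 0 < (change_day - d).days <= window]
    after = [n for d, n in counts.items() if 0 <= (d - change_day).days < window]
    if not before or not after:
        return False  # not enough data either side of the change
    return (sum(after) / len(after)) > threshold * (sum(before) / len(before))
```

A `True` here should open an incident, not a debate about whether the numbers are real.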

What activation problem this really solves

The live comparison is not platform versus platform. It is governed activation versus spreadsheet segmentation.

One-off exports and campaign lists can get a message out the door, but they usually leave awkward questions behind. Which identity rules were used. Which consent status applied at the point of selection. Whether the audience can be rebuilt next week without someone retracing old steps by hand. Reusable audience logic is slower to set up properly, but easier to defend and far easier to repeat.

The same trade-off applies to identity. Reusable identity logic gives teams a stable basis for matching records, checking permissions, and carrying learnings forward. One-off campaign exports may look faster, yet they often create delays later when suppression, duplication, and provenance need checking under pressure.
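One way to picture the difference: a reusable audience is a definition plus provenance, not a throwaway list. The sketch below is a hypothetical shape, not DNA's API; the field names and the consent-only selection rule are assumptions for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class SegmentDefinition:
    name: str
    consent_purpose: str   # consent flag that must be true at selection
    rule_version: str      # version of the identity rules used to match

def build_audience(definition, customers):
    """Rebuildable selection: the same definition over the same data
    yields the same audience, with provenance recorded alongside it."""
    selected = [c["customer_id"] for c in customers
                if c["permissions"].get(definition.consent_purpose)]
    return {
        "definition": definition,               # what was asked for
        "selected_at": datetime.now(timezone.utc).isoformat(),
        "members": sorted(selected),            # deterministic order
    }
```

The export version of this is a CSV with none of the metadata, which is exactly what makes it hard to defend a week later.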

That is where DNA fits. It is a governed customer-data and activation layer, not another pile of disconnected campaign outputs. It joins identity, consent, segmentation, and activation readiness so teams can spend less time reconciling records and more time using them.

Who is affected and what to watch

The first hit usually lands in operations and customer service, but the damage travels. Marketing carries the reputational risk. Compliance carries the evidential burden. Leadership carries the cost of rework, slower decisions, and fixes that arrive late.

The trade-off is not hard to describe. Teams can move faster by adding channels, partners, and offers quickly, or they can protect quality by tightening the evidence chain as they scale. Most try to do both. Fair enough. But the compromise needs managing in the open. If fulfilment relies on third parties, supplier confirmations and communication timestamps stop being optional. If permissions are patchy, activation should slow until acceptance criteria are met. A delayed launch is usually cheaper than a campaign you cannot defend.

Useful measures here include incidents per 1,000 bookings or orders by issue type, repeat-contact rate after a service change or communication update, and resolution time by issue type. None of those metrics fixes the problem on its own. They do show whether the work is reducing failure demand or merely moving it elsewhere.

Where DNA fits best

DNA is most useful where teams need joined-up customer data that can stand up to scrutiny, not just populate a report. That means identity, consent, segmentation, and activation readiness need to sit in one governed layer with clear lineage.

In practice, the fit is strongest when a business has fragmented customer records, changing audience definitions, or recurring arguments over which system should be trusted. It is also a better fit than spreadsheet-led activation where teams need reusable audience logic rather than a fresh export each time. More on the product itself sits in the linked pages: DNA and Holograph solutions.

Actions and path to green

Sharp opinion: if your plan has no named owners and dates, it is not a plan. Fix it.

Start with one priority journey only. “Book to travel”, “order to delivery” or “purchase to refund” is enough. Define the minimum evidence set you must be able to reconstruct end to end:

  • customer identity and permissions
  • product or offer terms, including inclusions and exclusions
  • fulfilment events, confirmations and material changes
  • communications log by channel and timestamp
  • resolution outcomes such as refund, rebooking, complaint and time to resolve

Then set acceptance criteria that can actually be tested. The exact threshold will vary by journey and risk level, but the point is constant: a team should be able to reconstruct what was purchased, what changed, and what was communicated from one joined record rather than stitching an answer together by hand. Give that outcome an owner and a date. Without both, it will remain nearly sorted forever.
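An acceptance criterion along those lines can be written as a test. This sketch assumes the joined record is held as a dict; the required field names are hypothetical and should mirror whatever minimum evidence set you define.

```python
# Hypothetical minimum evidence set for one priority journey.
REQUIRED_FIELDS = [
    "customer_id", "permissions", "product_terms",
    "fulfilment_events", "communications",
]

def missing_evidence(record):
    """Return the missing or empty evidence fields. An empty result means
    the case can be reconstructed from this one record, which is the
    acceptance criterion in testable form."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]
```

Run it across a sample of live cases on a cadence; the list of failing fields is the backlog, with an owner and a date attached.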

Fix identity matching before chasing personalisation. Use explainable rules such as email normalisation, phone normalisation, and reference-based linking. Monitor match rate and duplicate rate after every material system change. If those numbers slip, open an incident and assign it, same as you would for a delivery blocker.
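Those explainable rules can be sketched directly. The logic below is illustrative, not a recommended identity model: the digits-only phone rule assumes a single country, and the key precedence (reference, then email, then phone) is an assumption to adapt.

```python
import re

def normalise_email(email):
    """Lowercase and trim: an explainable rule, not fuzzy matching."""
    return email.strip().lower()

def normalise_phone(phone):
    """Keep digits only (assumes one country; adapt for international)."""
    return re.sub(r"\D", "", phone)

def match_key(record):
    """Deterministic key: booking reference first, then email, then phone,
    so every match can be explained after the event."""
    if record.get("booking_ref"):
        return "ref:" + record["booking_ref"]
    if record.get("email"):
        return "email:" + normalise_email(record["email"])
    return "phone:" + normalise_phone(record.get("phone", ""))

def duplicate_rate(records):
    """Share of records whose match key has already been seen;
    watch this number after every material system change."""
    keys = [match_key(r) for r in records]
    return 1 - len(set(keys)) / len(keys) if keys else 0.0
```

Because every rule is deterministic, a slipping duplicate rate points at the data or the change, not at an opaque model.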

Turn incidents into structured signals. A short taxonomy beats a clever one: supplier confirmation delay, terms misunderstanding, document issue, refund SLA breach. Once those codes are used consistently, teams can track contacts per 1,000 cases, repeat-contact rate, and resolution time by issue type. That is when lessons stop sitting in free text and start changing decisions.
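With consistent codes in place, those measures fall out of the data. The taxonomy codes and field names below are hypothetical stand-ins for whatever short list your team agrees.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical short taxonomy; a fixed set of codes beats free text.
TAXONOMY = {"SUPPLIER_DELAY", "TERMS_MISUNDERSTOOD", "DOC_ISSUE", "REFUND_SLA"}

def incident_metrics(incidents, total_cases):
    """incidents: dicts with 'code', 'customer_id', 'resolution_hours'.
    Returns incidents per 1,000 cases, repeat-contact rate, and mean
    resolution time by issue type."""
    coded = [i for i in incidents if i["code"] in TAXONOMY]
    hours_by_code = defaultdict(list)
    contacts = defaultdict(int)
    for i in coded:
        hours_by_code[i["code"]].append(i["resolution_hours"])
        contacts[i["customer_id"]] += 1
    repeats = sum(1 for n in contacts.values() if n > 1)
    return {
        "per_1000_cases": 1000 * len(coded) / total_cases,
        "repeat_contact_rate": repeats / len(contacts) if contacts else 0.0,
        "mean_resolution_hours": {c: mean(v) for c, v in hours_by_code.items()},
    }
```

Uncoded incidents are deliberately excluded here; in practice their volume is worth tracking too, as a measure of how well the taxonomy is being used.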

Run governance with a cadence, not theatre. A monthly 45-minute review is enough if it covers four things: data quality measures, service outcome measures, change log entries, and active risks with mitigations. Keep the change log current. Traceability always matters when someone asks what changed and when.

Where Holograph is involved in delivery ownership, the work is to make those owners, dates, and acceptance criteria explicit, then keep a visible path to green. No mystery. No padded status slides.

Watchpoint for the next move

The reported suspension of 1,800 agencies is the headline. The more useful watchpoint is narrower: can your team prove what happened across a customer journey without rebuilding the answer manually. If not, the next incident is already on the board, waiting for volume.

If you want to make that risk smaller, ask DNA for a joined-up data workshop. We will map one priority journey, define the minimum evidence set, and leave you with a practical plan covering owners, dates, acceptance criteria, risks, and mitigations. Human, specific, and testable. Cheers.

If this is on your roadmap, DNA can help you run a controlled pilot, measure the outcome, and scale only when the evidence is clear.

Take this into a real brief

If this article mirrors the pressure in your own workflow, bring it straight into a brief. We keep the context attached so the reply starts from what you have just read.
