Overview
Promotion teams rarely lose control all at once. They lose it in small, expensive increments: a barcode that proves product type but not purchase timing, a receipt image that a human can read but a rules engine cannot, a manual review queue that swells every weekend and then quietly becomes normal. By the time finance asks why redemptions look odd, the weak points are already built into the campaign.
The good news is that this is usually a systems problem, not a consumer problem. Treat proof of purchase verification as an operating model rather than a box-tick at claim submission, and control comes back surprisingly quickly. The trade-off is plain enough: too little friction invites weak claims, too much in the wrong place pushes honest customers out. The job is to build the boring middle properly.
Quick context
Last Thursday, in a meeting room just off Farringdon, a stack of receipt samples landed on the table with the soft thud of trouble. Some were crisp supermarket prints, some were blurred mobile photos taken under kitchen lighting, and one looked as if it had been shot through a steamed-up conservatory window. That is when I realised, again, that most promotions do not fail because people are especially crafty. They fail because the claim journey was built around collection, not verification.
In practice, teams lose control of purchase evidence in four places.
First, they confuse identifiers. A product barcode can confirm that a SKU exists, but not who bought it, when they bought it, or whether the same item has already been used in a previous claim. Receipt data can confirm transaction context, but only if the image is captured clearly enough and parsed into usable fields. Different signals, different jobs. Lumping them together creates blind spots.
Second, controls are added late. A campaign gets approved, creative is shipped, retailer terms are agreed, then someone asks how duplicate claims will be stopped. At that point the architecture is fixed, and every extra rule feels like a patch. Bit of a faff, and usually an expensive one.
Third, teams rely on manual intervention for edge cases that are not edge cases at all. Blurry receipts, partial barcodes, mismatched store names and reused transaction numbers are common patterns. If your process treats them as occasional exceptions, the review queue becomes the real workflow. The UK Government Digital Service design principles are useful here: make the hard parts simple for users and staff alike. If reviewers are acting as a translation layer between weak inputs and business rules, the service design is unfinished.
Fourth, nobody agrees what “good evidence” means. Legal wants compliance, marketing wants conversion, operations wants throughput, and customer care wants fewer complaints on a Monday morning. Without a shared evidence standard, the campaign drifts.
A tighter frame is to think in terms of evidence design. Decide before launch what proves eligibility, what checks should run automatically, what needs human review, and what should be rejected at source because it will never become trustworthy later.
Step-by-step approach
The strongest teams build purchase-evidence control in layers. Not glamorous, but then neither is untangling a duplicate-claim issue three days before a prize draw.
1. Define the evidence contract before creative goes live. Write down the minimum viable evidence for a valid claim: one eligible barcode, one readable till receipt, purchase within campaign dates, retailer within scope, and one claim per qualifying purchase event. Be exact about what counts as readable. If date, retailer name and transaction reference are mandatory, say so in the journey and in the rules.
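One way to make the evidence contract concrete is to write it down as code before launch. The sketch below is illustrative only: the field names, dates and retailer list are invented stand-ins for a real campaign's schema.

```python
# A minimal sketch of an evidence contract. All names and values here are
# hypothetical; map them to your own claim schema.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ClaimEvidence:
    barcode: Optional[str]          # eligible product barcode
    retailer: Optional[str]         # retailer name as printed on the receipt
    purchase_date: Optional[date]   # date on the receipt
    transaction_ref: Optional[str]  # till transaction reference

CAMPAIGN_START = date(2026, 3, 1)   # illustrative campaign window
CAMPAIGN_END = date(2026, 4, 30)
ELIGIBLE_RETAILERS = {"Example Mart", "Sample Stores"}  # illustrative scope

def missing_fields(ev: ClaimEvidence) -> list[str]:
    """Return the mandatory fields this claim is missing, per the contract."""
    required = {
        "barcode": ev.barcode,
        "retailer": ev.retailer,
        "purchase_date": ev.purchase_date,
        "transaction_ref": ev.transaction_ref,
    }
    return [name for name, value in required.items() if not value]

def meets_contract(ev: ClaimEvidence) -> bool:
    """Minimum viable evidence: all fields present, retailer and date in scope."""
    return (
        not missing_fields(ev)
        and ev.retailer in ELIGIBLE_RETAILERS
        and CAMPAIGN_START <= ev.purchase_date <= CAMPAIGN_END
    )
```

The value is not the code itself; it is that the contract becomes testable, so the journey copy, the rules engine and the review rubric can all be checked against the same definition.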
Between 09:00 and 11:00 one Tuesday, I reviewed three campaign flows where “upload your receipt” was the whole instruction. In practice, that pushed the interpretation burden downstream to operations. We fixed it with a simple hack: an on-screen capture guide showing the full receipt length, mandatory fields and acceptable crop margins. Completion rates held steady in the next test cycle, while reviewable images improved enough to cut avoidable manual touches. Not magic. Just less ambiguity.
2. Separate product eligibility from transaction evidence. Use barcode and receipt controls for distinct purposes. The barcode confirms the pack is in scope. The receipt confirms the purchase event is in scope. When one is forced to stand in for the other, loopholes appear. A valid product code on an old pack is not evidence of a valid promotional purchase. Equally, a valid receipt without reliable line-item evidence may not prove the promoted product was bought.
This distinction is worth keeping because GS1 barcode standards are built for product identification and supply-chain consistency, not as a complete consumer purchase-proof framework. Too many promotions ask a product code to do a receipt’s job.
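Kept separate, the two checks can be sketched as two distinct questions that must both hold. The GTIN list and line-item matching below are illustrative assumptions, not a recommendation of any particular parser.

```python
# Sketch of keeping the two questions apart: the barcode answers "is this
# product in scope?", the receipt answers "is this purchase event in scope?".
# Neither check substitutes for the other. All data here is made up.
ELIGIBLE_GTINS = {"5012345678900", "5098765432109"}  # illustrative GTINs

def product_in_scope(gtin: str) -> bool:
    """Barcode check: confirms the pack is a promoted SKU, nothing more."""
    return gtin in ELIGIBLE_GTINS

def purchase_in_scope(receipt_lines: list[str], description: str) -> bool:
    """Receipt check: confirms the promoted product appears as a line item."""
    return any(description.lower() in line.lower() for line in receipt_lines)

def claim_supported(gtin: str, receipt_lines: list[str], description: str) -> bool:
    # Both signals must hold; one passing does not imply the other.
    return product_in_scope(gtin) and purchase_in_scope(receipt_lines, description)
```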
3. Score claims, do not just accept or reject them. Binary logic is tidy in a spreadsheet and messy in the real world. Better to assign confidence bands.
- High confidence: eligible barcode matched, receipt fields parsed, date in range, no duplicate transaction indicators.
- Medium confidence: barcode matched, receipt image readable but one field uncertain, claim held for light-touch review.
- Low confidence: mandatory data missing, duplicate pattern detected, or image quality below threshold; request a re-upload or reject.
If a platform cannot explain its decisions, it does not deserve your budget. A confidence-based workflow gives teams a defensible path for every outcome. It also shows where poor capture, rule ambiguity or abuse is actually concentrated. That is useful operationally, and rather better than guessing over a lukewarm cup of tea.
4. Instrument the journey like a product team. Track where evidence quality degrades. Measure image upload failure rate, percentage of claims requiring manual review, duplicate transaction matches, resubmission success and time-to-decision. These are not vanity metrics. They tell you where control is being lost. Automation without measurable uplift is theatre, not strategy.
A practical benchmark many teams use is manual review rate. If a campaign starts with 8 to 12 per cent manual handling and climbs after influencer or paid social bursts, that is a signal that acquisition quality and evidence quality are out of step. Fine if planned. Costly if not.
5. Build privacy-preserving checks first. Promotion evidence often contains names, partial card details, store locations and shopping habits. UK GDPR is clear on data minimisation: collect only what you need for the stated purpose. In practice, that means extracting the fields needed for eligibility checks, limiting retention, and avoiding a sprawling archive of full receipt images unless there is a justified operational or legal basis.
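The confidence banding can be sketched as a small routing function. The field names and the image-quality threshold below are assumptions for illustration, not a product specification.

```python
# A minimal routing sketch for three confidence bands. Field names and the
# 0.4 image-quality threshold are illustrative assumptions.
def route_claim(claim: dict) -> str:
    """Return 'auto_approve', 'light_review', or 'reject_or_retry'."""
    mandatory = ("barcode_matched", "date_in_range")
    # Low confidence: missing mandatory data, a duplicate pattern, or a bad image.
    if (not all(claim.get(k) for k in mandatory)
            or claim.get("duplicate_suspected")
            or claim.get("image_quality", 0.0) < 0.4):
        return "reject_or_retry"
    # High confidence: everything parsed cleanly with no uncertain fields.
    if claim.get("fields_parsed") and not claim.get("uncertain_fields"):
        return "auto_approve"
    # Medium confidence: readable, but at least one field needs a human glance.
    return "light_review"
```

The point of writing it this way is that every outcome has a named reason, which is exactly what an appeals conversation or an audit needs.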
Pitfalls to avoid
The most common failure mode is over-trusting one signal because it feels objective. Barcodes look precise, so teams assume they are strong evidence. Receipt images look human-readable, so teams assume they will survive at scale. Neither assumption holds up consistently.
There is a wider lesson in adjacent evidence systems. On 10 March 2026, The Fifth Skill wrote about protecting businesses from point-of-sale malware attacks, pointing to the risk of tampered transaction environments and the need for layered controls around capture, transmission and interpretation. Different domain, same underlying truth: a transaction record is only as trustworthy as the controls around it.
A second pitfall is designing for best-case claimant behaviour. People upload receipts in dim kitchens, on train platforms and in car parks while balancing a meal deal and a child’s school bag. Fancy that. If the flow depends on perfect framing, perfect focus and perfect patience, your controls are theoretical. Build for ordinary behaviour instead. Prompt for retake when blur is detected. Show the exact missing field. Save progress if the connection drops.
Third, avoid vague duplicate logic. “One claim per person” sounds sensible until households share devices, families use the same Wi-Fi, and multiple valid purchases happen in a week. Duplicate controls should combine signals rather than rely on one blunt identifier. Transaction number, timestamp window, product match, claimant history and household logic together are more useful than an IP address alone.
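A combined-signal duplicate check might look like the following sketch. The field names, the 48-hour window and the three-signal threshold are all illustrative tuning choices, not fixed rules.

```python
# Sketch of combining signals into a duplicate check rather than keying on
# one blunt identifier. Field names and the time window are assumptions.
from datetime import datetime, timedelta

DUPLICATE_WINDOW = timedelta(hours=48)  # illustrative window

def is_likely_duplicate(new_claim: dict, prior_claims: list[dict]) -> bool:
    """Flag a claim when several signals line up, not when one does."""
    for prior in prior_claims:
        signals = 0
        if new_claim["transaction_ref"] == prior["transaction_ref"]:
            signals += 1
        if new_claim["gtin"] == prior["gtin"]:
            signals += 1
        if abs(new_claim["purchase_time"] - prior["purchase_time"]) <= DUPLICATE_WINDOW:
            signals += 1
        if new_claim["claimant_id"] == prior["claimant_id"]:
            signals += 1
        if signals >= 3:  # threshold is a tuning choice, not a rule of law
            return True
    return False
```

Note what this avoids: a shared household device no longer sinks two legitimate claims, because claimant identity is one signal among several rather than the whole test.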
Fourth, be careful with manual review criteria. Reviewers need a decision rubric, not vibes. If one operator accepts a shortened retailer name and another rejects it, outcomes become arbitrary. That drives complaints and weakens auditability. The Information Commissioner’s Office guidance on accountability and automated decision-making is a sensible reference point here, even for relatively low-stakes promotions.
The trade-off is speed versus consistency. A looser review process can clear queues faster in week one. It usually creates appeals, reversals and data noise later. Better to spend an extra half day tightening the rubric and save several days of mess downstream.
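A rubric can be as small as a written alias table. A sketch, with invented retailer names; the substance is that two reviewers applying it reach the same answer.

```python
# Sketch of a retailer-name rule written down rather than left to judgement.
# The alias table is hypothetical.
RETAILER_ALIASES = {
    "example mart": {"example mart", "ex mart", "examplemart plc"},
}

def retailer_accepted(printed_name: str, campaign_retailer: str) -> bool:
    """Accept a shortened or variant retailer name only if it is on the list."""
    aliases = RETAILER_ALIASES.get(campaign_retailer.lower(), set())
    return printed_name.strip().lower() in aliases
```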
Checklist you can reuse
If you want a practical starting point, run one live or upcoming campaign through a simple audit using actual retailer formats and actual sample claims.
This is where participation quality becomes more useful than raw submission volume. Higher claim numbers are not automatically healthier. If poor evidence quality drives rework, customer support load and delayed fulfilment, the campaign can look busy while performing badly. If two campaigns generate similar claim volumes but one needs half the manual touches and resolves valid claims a day faster, that second campaign is simply better designed.
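That comparison is easy to make mechanical. A sketch with invented numbers, assuming you already track manual touches and decision time per campaign:

```python
# Sketch comparing two campaigns on operational quality rather than raw
# volume. All figures and field names are illustrative.
def manual_review_rate(claims_total: int, claims_manual: int) -> float:
    return claims_manual / claims_total if claims_total else 0.0

def better_designed(a: dict, b: dict) -> str:
    """Prefer fewer manual touches and faster decisions at similar volume;
    returns 'a', 'b', or 'unclear'."""
    a_rate = manual_review_rate(a["claims"], a["manual"])
    b_rate = manual_review_rate(b["claims"], b["manual"])
    if a_rate < b_rate and a["avg_decision_days"] <= b["avg_decision_days"]:
        return "a"
    if b_rate < a_rate and b["avg_decision_days"] <= a["avg_decision_days"]:
        return "b"
    return "unclear"
```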
- Evidence definition: Can you state, in one sentence, what proves eligibility?
- Capture guidance: Does the upload step show acceptable and unacceptable receipt examples?
- Field extraction: Are purchase date, retailer, transaction reference and product lines all captured or validated?
- SKU controls: Are eligible barcodes mapped to current pack variants and promotional exclusions?
- Duplicate policy: Is duplicate detection based on more than one signal?
- Confidence routing: Do medium-confidence claims have a fast review path?
- Operational metrics: Do you track manual review rate, resubmission rate and average decision time?
- Privacy controls: Are receipt images retained only as long as necessary?
- Appeals process: Can customer care explain a rejection with evidence, not boilerplate?
Closing guidance
Purchase evidence control is not a fraud side quest. It is core campaign architecture. The strongest systems are not the harshest; they are the clearest. They tell participants what to submit, test evidence against transparent rules, and reserve manual effort for ambiguity rather than routine clean-up.
There is a broader accountability signal worth noting. On 11 March 2026, The Commercial Appeal reported an alliance between Invisible Sun Technology and Project Aidra focused on accountability in facilities management. Different sector, same pattern: when accountability is designed into the operating model, evidence becomes more useful and disputes become easier to resolve. Promotion teams do not need the jargon. They need explainable controls, measurable outcomes and less operational theatre.
If you want to get control back, do not start with a full rebuild. Start with one live claim journey. Map the evidence contract, inspect where data quality drops, and test whether your current setup can explain every approval, hold and rejection. If it cannot, that is the gap to fix. If you fancy a practical next step, ask your promotions team to test one live claim journey against POPSCAN control options and see where the friction is doing useful work, and where it is just faff.