Quill's Thoughts

AI-generated purchase evidence risk: practical rule design for barcode and receipt controls in UK promotions

Practical UK rules for proof of purchase verification as AI-generated receipts and barcodes become easier to fake, with evidence-led options for safer promotions.

POPSCAN Playbooks 12 Mar 2026 9 min read


Overview

AI tools have lowered the effort needed to create convincing-looking receipts, barcodes and purchase screenshots. That does not mean every UK promotion is suddenly flooded with false claims. It does mean the old shortcut, "if it looks tidy, it must be real", is no longer safe enough. For promotion owners, the useful question is operational: which controls protect margin and brand trust without turning a simple claim into a minor ordeal?

As it stands, the strongest response is layered proof of purchase verification. Not one grand fix. Not a panic purchase dressed up as strategy. The better option is to connect barcode logic, receipt consistency checks and review rules from the start, then tune them to reward value, campaign scale and likely abuse routes. That is worth a closer look because the commercial implication is clear: weak evidence rules distort not just fraud loss, but forecasting, support load and campaign learning.

Signal baseline

The baseline signal is straightforward. Digital claim journeys now rely heavily on uploaded receipt images, screenshots and typed product details, while AI image editing has become cheaper and easier to access. Ofcom’s published tracking of online behaviour has long shown that UK consumers are comfortable with smartphone-led tasks at scale. Convenient, yes. But convenience also widens the attack surface when submitted images are treated as if appearance alone proves a purchase.

The compliance position has not changed simply because the tools have. The CAP Code still requires promoters to run promotions fairly, administer them properly and make significant conditions clear. The Advertising Standards Authority has repeatedly focused on clarity of entry terms, verification steps and award processes. Put plainly, if a campaign promises an easy claim and only reveals meaningful checks after submission, the friction becomes a risk in its own right. A strategy that cannot survive contact with operations is not strategy, it is branding copy.

There is also a useful caveat in broader public data. The Office for National Statistics quarterly personal well-being dataset and its local authority series show that confidence, anxiety and day-to-day sentiment vary by quarter and by place across the UK. That is not fraud evidence and should not be stretched into something it is not. What it does suggest is that consumer response to value-led promotions is uneven, especially when household pressure shifts. If your validation rules are clumsy, dissatisfaction tends to arrive faster than any formal fraud diagnosis.

A more practical baseline comes from standards. GS1 guidance is clear that GTIN and barcode structures exist to identify products consistently across supply chains. They are not, on their own, proof that a specific person bought a specific item on a specific date from a specific retailer. That trade-off matters. Teams that use a barcode as the whole answer are asking a product identifier to do the job of transaction evidence.

What is shifting

The real change is not that fake receipts exist. They have existed for years. What is shifting is the speed, quality and volume at which synthetic purchase evidence can now be produced, adjusted and resubmitted. A claimant no longer needs design software, patience or much skill. They can vary dates, totals, retailer names and line layouts in minutes. The practical result is noisier review queues and a more expensive version of false confidence.

Three movements sit behind this. First, image generation and editing tools are now mainstream. Second, more campaign traffic arrives through mobile channels, where compressed images, screenshots and partial crops are common. Third, brand teams are under pressure to reduce friction and approve quickly, particularly in FMCG, grocery and health and beauty promotions where reward values are modest and volume is the point.

Broader fraud reporting from bodies such as UK Finance and Cifas is not promotion-specific, so there is a caveat. Still, the pattern is familiar enough: abuse tends to scale where checks are static, predictable and easy to test repeatedly. If one receipt format passes every time, bad actors will keep iterating around it. To be fair, promotion workflows are unlikely to be exempt from that logic.

There is a second shift which matters commercially. The issue is not only reward leakage. It is declining data quality. Inflated submission volume can make a campaign look healthy while first-pass validation falls, duplicate rates rise and retailer evidence no longer lines up with expected sell-through. Growth claims without baseline evidence should be parked until the data catches up.

In a strategy call this week, we tested two paths and dropped one after the first hard metric came in. The looser route looked elegant on paper, but duplicate image similarity and repeat-device submissions rose quickly in test traffic. The tighter route added one structured field and a barcode cross-check, and approval time barely moved. That is the decision pattern now: not “secure or simple”, but which minimum controls improve economics without bruising the customer journey.

Who is affected

High-volume consumer promotions are first in line, especially where broad retailer coverage and short redemption windows create pressure to approve quickly. FMCG, grocery, household and health and beauty offers fit that profile. Agencies and fulfilment partners are affected as well, because they usually inherit the backlog once suspicious claims pile up and support queues start creaking.

Retailers are exposed indirectly when promotions rely on different till formats, online order confirmations or store-specific product descriptions. If your rules do not account for legitimate variation between chains, good claims get blocked. If the rules are too broad, fabricated evidence slips through. The operational work sits in that uncomfortable middle, which is precisely why rule design should happen before launch rather than halfway through a complaint spike.

Consumers are affected in a simpler way. They want a clear path, a fair decision and a reasonable turnaround. The Competition and Markets Authority’s approach to consumer protection has consistently favoured clarity and fairness in promotional conditions. So if a receipt must show the retailer name, purchase date, line item and total, say so before the upload stage. If a barcode must match a promoted SKU, say that as well. Ambiguity tends to help the bad actor on entry and punish the genuine shopper during review.

Internal teams feel the strain differently. Legal wants defensible terms. CRM wants conversion. Finance wants leakage contained. Support wants fewer edge cases. Fraud and data teams want cleaner signals. Reconciling those demands usually means a staged model: automate what is cheap and reliable to test, queue what is commercially material and reserve manual effort for the genuinely uncertain cases.

Practical rule design for barcode and receipt controls

The most useful operating model is layered. Start with eligibility rules that are easy to explain and easy to machine-test. Then add risk scoring to decide which submissions pass automatically, which trigger an extra prompt and which go to manual review. This is the point at which barcode and receipt controls stop being a technical afterthought and become part of campaign design proper.
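To make that routing step concrete, here is a minimal scoring sketch in Python. The signal names, weights and thresholds are all illustrative assumptions, not recommended values; they should be tuned against labelled outcomes from your own campaigns.

```python
# Hypothetical signal weights -- tune against labelled campaign outcomes.
SIGNAL_WEIGHTS = {
    "barcode_mismatch": 0.5,
    "low_ocr_confidence": 0.2,
    "duplicate_image": 0.6,
    "high_velocity": 0.4,
}

AUTO_APPROVE_BELOW = 0.3   # below this, pass without extra friction
MANUAL_REVIEW_ABOVE = 0.7  # above this, route to a human reviewer

def route_claim(signals: set[str]) -> str:
    """Map the risk signals fired for a claim to one of three tiers."""
    score = sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals)
    if score < AUTO_APPROVE_BELOW:
        return "auto_approve"
    if score > MANUAL_REVIEW_ABOVE:
        return "manual_review"
    return "extra_prompt"  # e.g. ask for a clearer photo before deciding
```

The point of the three-tier shape is that the middle tier absorbs ambiguity: a conflicting signal earns one extra prompt, not an automatic rejection.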

At the barcode layer, test whether the submitted code matches a valid promoted SKU list, whether the pack size fits the offer and whether the same GTIN is being claimed unusually often from the same account, device, address or other cluster. GS1 standards support this type of product identity checking, but they do not confirm a transaction occurred, so barcode checks should always sit alongside receipt logic.
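The first of those tests can be cheap and deterministic. The GS1 check-digit algorithm below is the standard one (alternating weights of 3 and 1 from the right); the eligible SKU list is a placeholder for your own promoted range, and the sample code is shown only as a valid 13-digit format.

```python
def gtin_check_digit_ok(gtin: str) -> bool:
    """Validate the GS1 check digit (works for GTIN-8/12/13/14)."""
    if not gtin.isdigit() or len(gtin) not in (8, 12, 13, 14):
        return False
    digits = [int(d) for d in gtin]
    body, check = digits[:-1], digits[-1]
    # From the right, weights alternate 3, 1, 3, 1, ...
    total = sum(d * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(body)))
    return (10 - total % 10) % 10 == check

# Placeholder for the campaign's promoted SKU list.
ELIGIBLE_GTINS = {"5000112637922"}

def barcode_eligible(gtin: str) -> bool:
    """A structurally valid GTIN that is also on the promoted list."""
    return gtin_check_digit_ok(gtin) and gtin in ELIGIBLE_GTINS
```

Note what this does and does not prove: a passing code confirms a plausible product identifier on the promoted list, nothing about who bought it or when.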

At the receipt layer, test internal consistency. Does the retailer name match an approved retailer list? Does the purchase date sit inside the promotional window? Do line items align with promoted products and plausible pack descriptions? Does the formatting broadly resemble that retailer’s expected till or order-confirmation style? Optical character recognition, or OCR, can extract these fields automatically, but confidence scores matter. Low-confidence OCR is not proof of fraud. It is a signal to request a clearer image or route the claim for review.
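A sketch of that receipt-layer logic, including the "low confidence means retry, not reject" rule, might look like the following. The field names, confidence threshold, retailer list and campaign window are all assumptions for illustration.

```python
from datetime import date

APPROVED_RETAILERS = {"Tesco", "Boots", "Sainsbury's"}   # placeholder list
CAMPAIGN_WINDOW = (date(2026, 3, 1), date(2026, 3, 31))  # placeholder window
MIN_CONFIDENCE = 0.80  # assumed OCR confidence cut-off

def check_receipt(fields: dict) -> tuple[str, list[str]]:
    """Return ('pass' | 'retry' | 'review', reasons).

    fields maps a name like 'retailer' to (extracted_value, ocr_confidence).
    Low-confidence extraction triggers a re-upload request, not a verdict.
    """
    reasons = []
    for name in ("retailer", "purchase_date", "total"):
        value, conf = fields.get(name, (None, 0.0))
        if value is None or conf < MIN_CONFIDENCE:
            reasons.append(f"unreadable:{name}")
    if reasons:
        return "retry", reasons  # ask for a clearer image first

    retailer, _ = fields["retailer"]
    if retailer not in APPROVED_RETAILERS:
        reasons.append("retailer_not_approved")
    purchase_date, _ = fields["purchase_date"]
    start, end = CAMPAIGN_WINDOW
    if not (start <= purchase_date <= end):
        reasons.append("outside_window")
    return ("review" if reasons else "pass"), reasons
```

Keeping the reasons list explicit also pays off downstream: rejection wording can quote the actual failed check rather than a generic refusal.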

A sensible minimum rule set often includes:

  • one claim ID per purchase event, with deduplication using image hash and near-duplicate matching;
  • barcode validation against eligible SKUs, variants and pack sizes;
  • receipt date and retailer checks against published campaign terms;
  • device, IP and account velocity checks over short periods;
  • review triggers for altered totals, inconsistent fonts, improbable basket combinations or repeated submission patterns.

There are trade-offs. Tight thresholds block more suspicious claims, but they also catch genuine shoppers with poor lighting, crumpled receipts or accessibility constraints. Loose thresholds reduce customer friction, but leakage rises and support teams quietly become the fraud filter. Neither extreme is especially clever. The better option is adaptive control: keep the standard journey simple, then add friction only where signals conflict.

If campaign materials show a sample receipt image, make its alt text useful rather than decorative: it should explain the validation purpose, not merely label the picture.
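The deduplication check in the first bullet can be sketched with perceptual hashes. The sketch below assumes hashes have already been computed by an imaging library (for example a 64-bit dHash or pHash) and compares them by Hamming distance; the distance threshold is an assumption to be tuned on labelled duplicate pairs.

```python
def hamming(hash_a: str, hash_b: str) -> int:
    """Bit distance between two hex-encoded perceptual hashes."""
    return bin(int(hash_a, 16) ^ int(hash_b, 16)).count("1")

NEAR_DUPLICATE_THRESHOLD = 8  # assumed cut-off; tune on labelled pairs

def is_near_duplicate(new_hash: str, seen_hashes: set[str]) -> bool:
    # Exact re-uploads match at distance 0; re-crops, rotations of a
    # screenshot or brightness tweaks typically land within a small
    # distance of the original image's hash.
    return any(hamming(new_hash, h) <= NEAR_DUPLICATE_THRESHOLD
               for h in seen_hashes)
```

This is deliberately fuzzy: an exact cryptographic hash would only catch byte-identical re-uploads, which is the one variant a motivated resubmitter never sends twice.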

Actions and watchpoints

The next move for most promotion owners is to build a baseline before tightening anything. Track first-pass validation rate, duplicate submission rate, manual review share, time to approve, blocked claim rate and support contacts per 1,000 submissions. If those metrics are absent, any fraud debate is running on instinct and whoever spoke last.
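Those baseline metrics are simple aggregates over a claim log. A minimal sketch, assuming each claim record carries a status, a duplicate flag and a support-contact count (field names are illustrative):

```python
def baseline_metrics(claims: list[dict]) -> dict:
    """Compute the pre-tightening baseline from a claim log.

    Each claim dict is assumed to carry 'status' ('approved' | 'blocked'
    | 'manual'), 'duplicate' (bool) and 'support_contacts' (int).
    """
    n = len(claims)
    if n == 0:
        return {}
    return {
        "first_pass_validation_rate":
            sum(c["status"] == "approved" for c in claims) / n,
        "duplicate_rate": sum(c["duplicate"] for c in claims) / n,
        "manual_review_share":
            sum(c["status"] == "manual" for c in claims) / n,
        "blocked_rate": sum(c["status"] == "blocked" for c in claims) / n,
        "support_contacts_per_1000":
            1000 * sum(c["support_contacts"] for c in claims) / n,
    }
```

The value of computing these before tightening anything is that every later rule change can be judged against a number rather than an anecdote.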

Once the baseline is clear, decide where friction belongs. High-value rewards, multi-buy mechanics and broad retailer eligibility usually justify stronger pre-approval rules. Lower-value campaigns with narrow SKU sets may work better with lighter front-end checks and more audit sampling after submission. The option set should be explicit, because every route has a trade-off. A broad campaign can stay broad if you accept tighter review logic. A very smooth claim journey can stay smooth if the reward economics can absorb more audit effort. What rarely works is pretending you can keep unlimited openness, instant fulfilment and negligible abuse at the same time.

Watch four signals in live campaigns. First, sudden spikes in submissions from the same device families or network ranges. Second, approval rates that remain oddly high while retailer evidence or expected channel mix does not support the pattern. Third, customer service tickets about unexplained rejections, which often point to poor wording rather than criminal ingenuity. Fourth, a widening gap between total submissions and validated unique purchase events. Those are the points where assumptions should be challenged before leakage becomes habit.
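The first of those signals, same-device or same-network submission spikes, is a sliding-window count. A minimal sketch, with the window length and claim limit as assumed parameters rather than recommendations:

```python
from collections import deque

class VelocityMonitor:
    """Flag submission spikes from one device/IP cluster over a
    short sliding window (parameters are assumptions to tune)."""

    def __init__(self, window_seconds: int = 3600, max_claims: int = 5):
        self.window = window_seconds
        self.max_claims = max_claims
        self.events: dict[str, deque] = {}

    def record(self, cluster_key: str, ts: float) -> bool:
        """Record a submission at time ts (epoch seconds); return True
        when the cluster has exceeded its limit within the window."""
        q = self.events.setdefault(cluster_key, deque())
        q.append(ts)
        while q and q[0] <= ts - self.window:
            q.popleft()  # drop events that fell out of the window
        return len(q) > self.max_claims
```

A flag here should feed the review queue, not an automatic block: shared household devices and workplace networks produce legitimate clusters too.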

Cleaner evidence is not only a defence mechanism. It improves forecasting, retailer conversations and post-campaign learning. When claim data is trustworthy, offer design gets sharper and future campaigns become easier to price, support and defend internally. If you want a calm assessment of where your current receipt and barcode controls are too loose, or simply too clumsy, contact Kosmos. We will help you design a proof of purchase verification model that fits your campaign economics and stands up when real operations get involved.

If this is on your roadmap, the POPSCAN team at Kosmos can help you run a controlled pilot, measure the outcome, and scale only when the evidence is clear.
