Overview
Most promotion fraud is not especially glamorous. It is usually a valid receipt reused at speed, a barcode entered without a purchase, or a claim that technically looks fine until you check it against the rules. The practical question is not whether to add controls, but which controls catch which failure mode without turning the customer journey into a bit of a faff.
From what we have seen in live claims, effective proof of purchase verification works best as a layered system. Barcode checks confirm the product, receipt evidence confirms the transaction, and rule enforcement tests whether that entry should be accepted in context. Each layer has a job. None is magic. The trade-off, as ever, is friction versus certainty, so the sensible approach is to build only what the offer justifies and test it properly before you ship.
Signal baseline
For a long time, many promotions relied on a simple mechanic: upload a receipt, wait for approval, job done. That was tolerable when forging or redistributing evidence took more effort than most people fancied. That assumption is now dated.
Last Wednesday, in East Sussex, I was reviewing claim traffic and watched a single supermarket receipt appear across submissions linked to IP addresses on three different continents within a short window. The receipt itself looked genuine. The problem was reuse, not authenticity. That is a useful distinction, because it tells you where a basic check will fail.
There is a wider accountability signal here too. On 11 March 2026, The Commercial Appeal reported a strategic alliance between Invisible Sun Technology and Project Aidra focused on accountability in facilities management. Different sector, same operational lesson: if you cannot verify an event and explain your decision trail, you will struggle to justify spend. Promotions are no exception. You are not simply approving claims; you are allocating budget.
The baseline has shifted from trusting isolated submissions to testing whether evidence is genuine, unique and compliant. Miss one of those three and your control model has a hole in it.
What is shifting
The main change is the industrialisation of low-effort abuse. Receipt sharing, copied images and coordinated claiming are easier to organise than they were even two years ago, while image editing tools have become cheaper and simpler. Fancy that.
At the same time, genuine customers are less patient with clunky journeys. If a legitimate claimant needs four attempts to upload a clear receipt on a mobile in patchy signal, many will simply abandon the process. That creates a real trade-off: stronger controls can reduce invalid claims, but badly implemented controls also reduce valid participation.
That is why rejection rates on their own are a poor success metric. A system that blocks 10% of claims is not necessarily doing a good job if a large share of those were genuine customers caught by opaque rules or brittle OCR. Automation without measurable uplift is theatre, not strategy. The useful measures are approval accuracy, duplicate detection, completion rate, manual review volume and the time it takes a legitimate customer to finish the journey.
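Those measures reduce to simple arithmetic over audited claim records. A minimal sketch: the `Claim` fields below are illustrative rather than a real schema, and the ground-truth flag assumes you audit a sample of decisions after the fact:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    approved: bool            # what the system decided
    genuinely_valid: bool     # ground truth from a manual audit sample
    completed: bool           # did the claimant finish the journey?
    needed_manual_review: bool

def journey_metrics(claims: list[Claim]) -> dict[str, float]:
    """Compute the measures that matter more than raw rejection rate."""
    total = len(claims)
    correct = sum(c.approved == c.genuinely_valid for c in claims)
    return {
        "approval_accuracy": correct / total,
        "completion_rate": sum(c.completed for c in claims) / total,
        "manual_review_share": sum(c.needed_manual_review for c in claims) / total,
    }
```

A rejection counter tells you none of this; a claim rejected correctly and a genuine customer bounced by brittle OCR look identical in that single number.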
There is also a related security angle. The Fifth Skill flagged point-of-sale malware risks on 10 March 2026. That does not prove receipt forgery in a promotion by itself, and it would be lazy to say it does, but it does reinforce the broader point that retail transaction evidence should not be treated as beyond question. Evidence needs corroboration.
What barcode checks actually catch
Barcode validation is useful, fast and often misunderstood. Its job is to confirm that the submitted product identifier matches a participating SKU or EAN. In plain English, it tells you whether the claimant is pointing at the right product.
That matters because it catches a common class of bad claim quickly: the wrong item, the wrong pack, or a product outside the promotional range. For a multi-SKU campaign, that can remove a fair amount of manual checking and reduce avoidable approvals.
What it does not do is prove a purchase. A barcode can be scanned from the shelf, copied from packaging photography or shared in a message thread. So the trade-off is straightforward: barcode checks add speed and reduce simple product mismatch, but on their own they do very little against no-purchase claims or receipt reuse.
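The mechanics are cheap to get right. A minimal sketch, assuming a hypothetical set of participating EANs (the codes below are invented for illustration), combining the standard EAN-13 check digit with a simple membership test:

```python
# Hypothetical participating products -- illustrative codes, not a real campaign.
PARTICIPATING_EANS = {"5012345678900", "4000000000006"}

def ean13_checksum_ok(code: str) -> bool:
    """Validate the EAN-13 check digit: weights alternate 1, 3 from the left."""
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(d) for d in code]
    check = (10 - sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12])) % 10) % 10
    return check == digits[12]

def barcode_eligible(code: str) -> bool:
    """A well-formed barcode AND a participating product -- nothing more."""
    return ean13_checksum_ok(code) and code in PARTICIPATING_EANS
```

Note what the function proves: the string scans as a valid code for an eligible product. It says nothing about whether anything was ever bought.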
If a promotion is built on barcode submission alone, you are validating eligibility of product, not evidence of transaction. Sometimes that is acceptable for a low-risk campaign. Often it is not.
What receipt evidence actually catches
Receipt evidence, usually parsed with OCR, moves the system closer to the transaction itself. A decent implementation can extract retailer, purchase date, time, line items and sometimes price or basket value. That means you can test whether the product was bought from the right retailer, within the right period and, where relevant, above a spend threshold such as £10.
This is where proof of purchase verification becomes materially more useful. You are no longer asking, “Is this the right product?” You are asking, “Was this product apparently purchased at the right place and time?” That is a better question.
Still, there is a catch. Receipt OCR validates the contents of the document it can read; it does not automatically prove that the document is unique or untampered with. A real receipt can still be reused by multiple claimants, and a forged receipt can sometimes pass superficial checks if the workflow relies only on visible text extraction. This is where uniqueness hashing earns its keep: the system creates a digital fingerprint of the image to ensure the same evidence has not been submitted before, shutting down the most common form of organised reuse.
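A minimal sketch of that fingerprinting idea, using an exact SHA-256 hash of the image bytes. Worth being honest about the limits: an exact hash only catches byte-identical resubmissions, so production systems typically layer a perceptual hash on top to catch re-crops, screenshots and recompressed copies:

```python
import hashlib

class ReceiptDeduper:
    """Reject byte-identical resubmissions via a SHA-256 fingerprint.

    Exact hashing catches straight reuse only; near-duplicate detection
    (e.g. a perceptual hash) is needed for edited copies.
    """

    def __init__(self) -> None:
        self._seen: set[str] = set()

    def is_duplicate(self, image_bytes: bytes) -> bool:
        fingerprint = hashlib.sha256(image_bytes).hexdigest()
        if fingerprint in self._seen:
            return True
        self._seen.add(fingerprint)
        return False
```

A useful design property: the fingerprint, not the image, is what you need to retain for dedupe, which sits well with minimised data collection.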
Between 11:30 and 13:00 last month, I tested a mobile claim flow and managed to break OCR with nothing more exotic than a folded thermal receipt and slightly dim kitchen light. The fix was hardly glamorous: flatten the paper, improve contrast guidance and let the user confirm extracted fields before submission. That small hack improved readability without adding much friction. Useful lesson. Better capture guidance often beats more complicated modelling.
So the trade-off here is precision versus fragility. Receipt evidence gives you much richer signals than a barcode, but only if image capture, parsing and validation logic are built sensibly.
What rule enforcement catches in practice
Rule enforcement is where many promotion teams either save the budget or quietly leak it. Once a system has product and receipt signals, it can test them against the actual terms of the campaign: one entry per person per day, maximum five per household, only participating retailers, valid dates, minimum spend, excluded products and so on.
This layer catches claims that look legitimate in isolation but fail in context. A claimant may have a real receipt for a valid product from the correct retailer, yet still breach the entry cap. Another may submit a purchase made outside the promotional window. Another may use multiple email addresses from the same household to get round a limit. None of that is visible from barcode checks alone, and not all of it is visible from OCR alone.
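Entry caps of that sort reduce to counters keyed on claimant and household. A simplified sketch using the illustrative limits above; the household key (for instance a normalised address) is an assumption for the example, not a prescription:

```python
from collections import Counter

MAX_PER_PERSON_PER_DAY = 1
MAX_PER_HOUSEHOLD = 5

person_day_counts: Counter[tuple[str, str]] = Counter()  # (claimant_id, iso_date)
household_counts: Counter[str] = Counter()               # e.g. normalised address

def within_limits(claimant_id: str, household_key: str, iso_date: str) -> bool:
    """Accept the claim event only if neither cap is breached, then count it."""
    if person_day_counts[(claimant_id, iso_date)] >= MAX_PER_PERSON_PER_DAY:
        return False
    if household_counts[household_key] >= MAX_PER_HOUSEHOLD:
        return False
    person_day_counts[(claimant_id, iso_date)] += 1
    household_counts[household_key] += 1
    return True
```

Note that the function is testing the claim event, not the evidence: a perfectly genuine receipt for a perfectly eligible product still fails here if the claimant is over the cap.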
The useful distinction is this: barcode and receipt checks assess evidence; rule enforcement assesses eligibility of the claim event. That is why the logic needs to be explainable. If a platform cannot explain its decisions, it does not deserve your budget.
One caveat worth keeping in view: rule engines are only as good as the rules drafted into them. Public contest frameworks, such as the contest rules published by KVI.com on 10 March 2026, are a reminder that terms matter operationally, not just legally. Ambiguous limits create inconsistent outcomes, manual exceptions and customer service faff. Clear rules ship better.
Actions and watchpoints
If you are designing a new promotion, start by matching controls to risk. A high-value prize draw and a low-value cashback offer do not need identical treatment. Heavier controls may cut abuse, but they can also depress participation and increase support overhead. That is a trade-off to price honestly up front, not after launch when the dashboard looks awkward.
A practical stack for many campaigns is:
- barcode validation to confirm eligible products;
- receipt OCR to confirm retailer, date and line item evidence;
- duplicate and near-duplicate checks to reduce receipt sharing;
- rule enforcement for limits, dates, spend thresholds and retailer criteria;
- clear rejection messaging so legitimate users can recover quickly.
Keep the controls privacy-preserving wherever possible. You do not need to hoover up unnecessary personal data to spot obvious duplication or enforce sensible limits. In most cases, minimised data collection and strong audit trails are the better build.
Then test the live journey yourself on a mobile, with poor light, average signal and a less-than-perfect receipt. Do it before launch, not after complaints arrive. If your own team cannot complete the flow without stopping for a cup of tea and a muttered complaint, customers will not either.
Done properly, layered verification protects budget without punishing genuine participants. If your team wants a clearer view of where your current journey is strong, where it leaks, and which controls are worth adding without adding needless friction, bring one live claim path to the table and we can test it against POPSCAN’s control options with you. You will leave with a practical read on barcode checks, receipt evidence and rule enforcement in your own setup, rather than another glossy promise.