The short answer: use the public metric, name the mechanic that produced it, then ask what that result proves and what it does not. Headline numbers matter, but they are only useful when they are tied back to owners, sign-off points, live risks and the way success was defined before launch.
A quick comparison shows why. In Holograph’s published case studies, Lucozade Energy x Halo AR reports a 32% sales uplift. Ribena x Monopoly AR reports a 258% overshoot against its entry goal. Both are strong public results. They do not prove the same thing, and planning them as if they did would be a mistake.
Signal snapshot
These two case studies point to different operating priorities. The Lucozade result is presented as a sales outcome tied to an AR mechanic. The useful reading is not simply that sales moved, but that the pack, the activation and the retail moment were close enough together to support purchase. The Ribena case points somewhere else. Its published result is participation against an entry target, using an AR route into a competition journey. That tells you more about response and journey completion than it does about retail conversion.
That distinction matters early, not at the performance wrap stage. A sales-led activation usually lives or dies on retailer approvals, stock readiness, on-pack clarity and in-store execution. An engagement-led activation puts the pressure elsewhere: competition terms, privacy wording, technical stability and the number of steps between scan and completion. Same broad channel, different critical path.
If your plan has no named owners and dates, it is not a plan. Fix it.
What shifted and why
The Lucozade case is persuasive because the published outcome is commercial: 32% sales uplift. The mechanic matters here. This was not AR dropped in for novelty value. It was an on-pack and retail-linked experience, which makes the delivery question fairly plain. Was the route from seeing the product to acting on it simple enough to work in store? That is where teams should look first. Clear pack instructions, retailer alignment and mobile performance under ordinary shopping conditions are not side issues. They are the work.
Ribena carries a different burden of proof. The public figure, a 258% overshoot against its entry goal, supports the case that the activation generated stronger response than forecast through its AR-led competition mechanic. It suggests the audience got through the journey in volume. It does not, on its own, prove every part of the operation was stress-tested or that every compliance risk was neatly resolved. To make that leap, you would want the delivery evidence underneath: who owned legal approval, what acceptance criteria sat on the entry flow, and what technical checks were completed before go-live.
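To see why the wording needs pinning down, here is a minimal worked example of the two common readings of an overshoot figure. The entry target is invented for illustration; the case study does not publish one, and Python is used here purely as a calculator.

```python
# Hypothetical worked example: the 10,000-entry target is invented,
# not a published Ribena or Holograph figure.
target_entries = 10_000

# Reading 1: entries exceeded the target BY 258%.
exceeded_by = target_entries * (1 + 2.58)   # 35,800 entries

# Reading 2: entries reached 258% OF the target.
percent_of = target_entries * 2.58          # 25,800 entries

print(f"Exceeded by 258%: {exceeded_by:,.0f} entries")
print(f"258% of target:   {percent_of:,.0f} entries")
```

The gap between those two readings is the reason to agree the definition in the brief, not to reconstruct it at the reporting stage.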
Set side by side, the operating lesson is clean. Lucozade had to connect experience to purchase. Ribena had to turn attention into valid participation. Both support the same broader judgement: activation results tend to hold up when the operating model matches the objective, rather than when every brief is forced through one standard template.
What the evidence actually proves
It is easy to treat any uplift figure as proof that the idea worked. That is too loose.
For Lucozade, the reported 32% sales uplift supports a narrower claim: the activation mechanic, product context and execution aligned closely enough to affect buying behaviour. It does not tell you, by itself, whether the result came from pack design, retail placement, promotional timing, media support or some combination of the lot. The next questions should be specific. What was the agreed KPI? How was uplift measured? Who owned retail readiness before launch? Those answers tell you whether the case study is evidence or just a neat result line.
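For a concrete sense of what "how was uplift measured" involves, here is a minimal sketch of a standard baseline comparison. All figures are invented, and the case study does not publish its baseline or measurement window; the point is only that the same headline percentage can sit on very different baselines.

```python
# Minimal sketch of a baseline uplift calculation.
# All figures are hypothetical; the case study does not publish
# its baseline, window or methodology.
baseline_units = 50_000     # e.g. average weekly sales before the activation
activation_units = 66_000   # sales during the activation window

uplift = (activation_units - baseline_units) / baseline_units
print(f"Sales uplift: {uplift:.0%}")  # -> Sales uplift: 32%
```

Whether the baseline is a prior period, matched control stores or a forecast changes what that 32% actually demonstrates, which is why the measurement question belongs in the brief.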
For Ribena, the reported 258% overshoot supports a different claim: the mechanic drove more entries than expected against the stated target. Useful, but still bounded. It does not automatically prove sales effect, long-term loyalty or data quality. To make the case operationally useful, you need the checks behind it: who signed off the competition terms, what counted as a valid completed journey, and what standards governed the scan-to-entry experience across devices.
That is the split worth keeping in view. What this case proves is usually narrower than what sales copy wants it to suggest. Good case studies let you trace a result back to a chain of decisions: objective set, mechanic chosen, owners assigned, risks logged, acceptance criteria agreed, outcome measured. Weaker ones jump from concept to percentage and leave the middle out.
Where the mechanic fits best
AR is not the point on its own. Fit is the point.
In Lucozade’s case, an AR mechanic makes more sense where the job is to sharpen pack engagement and support purchase in a retail setting. If the real need is simpler, say a straight voucher, price-led promotion or basic scan-to-redeem flow, then a lighter mechanic may do the job with less friction. The case study is useful because it points to a version of AR tied to a purchase moment, not because it proves immersive tech is always the answer.
Ribena shows a stronger fit where the brief is participation, competition response and compliant data capture through a branded journey. That is a better home for a richer interactive mechanic than a brief that only needs one clean transactional step. If all you need is a quick opt-in, a simpler route may be stronger. If you need involvement, memorability and a reason to complete the journey, the richer mechanic earns its keep.
This is also where related product paths come into view. A brief built around scan behaviour and fulfilment discipline may point towards POPSCAN. One centred on loyalty and repeat audience value may fit ONECARD or DNA more naturally. Where AI-led orchestration or personalisation is carrying more of the workload, MAIA becomes more relevant. The public case-study metric should be the starting point for that decision, not the decoration around it.
Implications this week for delivery leads
If you are reviewing activation proposals, compare the proof against the objective. For a sales-led activation, ask for retailer readiness, stock confidence and on-pack clarity. For an engagement-led one, ask for compliance ownership, journey completion logic and technical assurance. Different outcomes need different evidence. That sounds obvious, but plenty of weak planning still comes from treating every activation as the same machine with different artwork.
There is a separate measurement point worth keeping tidy. Public well-being data from the Office for National Statistics tracks changes in happiness, anxiety, life satisfaction and whether people feel what they do is worthwhile, both quarterly and by local authority. It is useful context. It is not campaign proof. It may help frame audience conditions or local mood, especially in softer markets, but it cannot stand in for activation metrics such as sales uplift, valid entries, redemption, footfall or repeat action. Keep that line clear and the reporting stays credible.
The practical watchpoint is simple. If a case study cannot show the original constraint, the intervention and the measurable change, it is unfinished. If it cannot tell you who signed off the critical risks, it is weaker than it looks. And if it claims to optimise for sales, loyalty, awareness and data capture all at once without any trade-off, someone is avoiding a decision.
Next checks before you sign anything off
Use four checks. They sort the serious programmes from the wish lists.
- Success metric: What is the primary KPI and how is it measured? For a sales-led activation, that may be uplift against a defined baseline. For an engagement-led one, it may be valid entries or completed scans.
- Owners and dates: Who owns legal, platform, retail and reporting sign-off, and by when? If owners are fuzzy, expect slippage.
- Acceptance criteria: What had to be true before launch? Think on-pack clarity, journey completion rate, device compatibility, or approved competition terms.
- Risk and mitigation: What could fail first, and what is the agreed path to green? Retail misalignment, stock issues, mobile performance and compliance gaps should all be visible early.
The aim is not to make activations heavier. It is to make them testable. That is how a result becomes evidence rather than a tidy anecdote with a logo on it.
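As one illustrative way to make those checks testable, the sketch below encodes them as plain reviewable data. Every field name, owner, date and criterion is invented for the example; this is not a Holograph template, just one shape the discipline could take.

```python
# Illustrative encoding of the four checks as reviewable data.
# Owners, dates and criteria are invented for the example.
activation_plan = {
    "success_metric": {
        "kpi": "sales uplift vs 12-week baseline",
        "measurement": "EPOS data, matched control stores",
    },
    "owners_and_dates": {
        "legal_sign_off": {"owner": "J. Smith", "due": "2024-05-01"},
        "retail_readiness": {"owner": "A. Patel", "due": "2024-05-10"},
    },
    "acceptance_criteria": [
        "on-pack instructions legible at arm's length",
        "scan-to-entry completes on the agreed device list",
    ],
    "risks": [
        {"risk": "stock misalignment", "mitigation": "weekly retailer check-in"},
    ],
}

# A fuzzy plan fails this check immediately instead of slipping later.
unowned = [step for step, detail in activation_plan["owners_and_dates"].items()
           if not detail.get("owner") or not detail.get("due")]
assert not unowned, f"Sign-offs without owner or date: {unowned}"
```

The mechanism matters less than the habit: if a check cannot be written down and reviewed, it is a hope, not an acceptance criterion.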
If you are weighing up experiential campaign results that UK teams can actually rely on, book a chemistry session with the Holograph studio team. We will help you get the brief, owners, dates and acceptance criteria into shape before the awkward bits become expensive ones. Cheers.