Quill's Thoughts

Explainable sign-up scoring for UK competitions and reward campaigns

A pragmatic guide for UK growth and CRM teams on explainable email judgement for competitions and reward campaigns, with clear thresholds, owners, dates and deliverability watchpoints.

EVE Playbooks · 16 Mar 2026 · 7 min read


For UK brands running competitions or reward campaigns, the usual problem is not volume. It is what turns up a week or two later in the ESP report. A sign-up flow can look healthy on launch day, then show its workings through bounce spikes, complaint rates and a review queue nobody owns. That is where email judgement earns its keep.

The practical shift is simple enough: stop treating email capture as a binary valid or invalid gate, and start treating it as an operational risk decision. Good entries should pass. Uncertain ones should slow for a second check. High-risk ones should stop, with the reason logged. If your plan has no named owners and dates, it is not a plan; fix it.

Signal baseline

Most validation tools answer a narrow question: can this mailbox receive mail right now? Useful, but a bit thin if you are trying to protect sender reputation. A technically deliverable address can still be poor quality, short-lived or created purely to get through a promotion mechanic.

We saw that the hard way in a Q4 2025 FMCG campaign. Initial sign-ups showed a 97% pass rate on standard validation. Two weeks later, the first send to that cohort came back with a 9% hard bounce rate and a 0.4% spam complaint rate. Those are not rounding errors. They are the sort of signals that can drag future campaign performance down fast.

When we checked the sign-up set properly, the pattern was obvious enough: disposable domains, machine-like local parts, and addresses that looked fresh off a scripted workflow. I was wrong about the effort at first: the data feed was trickier than expected, and the original gate was too trusting. The updated plan added buffers and stricter acceptance criteria: new-joiner hard bounces below 2% within the first month, complaint rate below 0.1%, owner: the CRM lead, review date: 30 days after launch.

That is the baseline point. “Valid” is not the same as “valuable”, and it definitely is not the same as “safe to route straight into email”.

What is shifting

The move is from blunt validation to a layered email judgement engine. Instead of asking whether an address is valid, you score the sign-up risk using multiple signals and decide what should happen next. For competitions and reward campaigns, that usually means a three-way decision:

  • Pass: low-risk entry, accepted immediately
  • Slow: uncertain entry, routed to CAPTCHA, OTP, or manual review
  • Stop: high-risk entry, blocked with a logged reason
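
In code terms, the decision layer can be very small once a risk score exists. The sketch below is a minimal illustration, assuming a 0–1 risk score has already been computed upstream; the cut-offs (0.3 and 0.7), the `Decision` shape and the `decide` helper are placeholders for the example, not recommended values.

```python
# Minimal sketch of the pass / slow / stop decision. Cut-offs and field
# names are illustrative placeholders, not a tuned or recommended rule set.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "pass", "slow" or "stop"
    reason: str   # plain-English reason, logged for later review

def decide(risk_score: float, top_signal: str) -> Decision:
    """Map a 0-1 sign-up risk score to a campaign entry decision."""
    if risk_score < 0.3:
        return Decision("pass", "low risk across all checked signals")
    if risk_score < 0.7:
        return Decision("slow", f"uncertain: {top_signal}; route to OTP or manual review")
    return Decision("stop", f"high risk: {top_signal}")
```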

The useful signals are not mysterious. They are operational and testable:

  • Domain reputation, including known temporary or disposable providers
  • Mailbox age or signs the account was created very recently
  • Character patterns that look machine-generated rather than human
  • IP, device or geolocation mismatch against campaign targeting
  • Velocity signals, such as repeated sign-ups from one source in a short window
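
As a rough illustration of how those signals might combine, here is a small weighted-sum sketch. The field names and weights are assumptions made up for the example, not a vendor schema or a tuned model; in practice the weights would come from your own campaign history.

```python
# Illustrative weighted-sum scoring only; every weight and field name here
# is a placeholder, not a tuned value or a specific provider's schema.
def score_signup(signals: dict) -> float:
    """Combine operational sign-up signals into a 0-1 risk score."""
    score = 0.0
    if signals.get("is_disposable_domain"):
        score += 0.5
    if signals.get("mailbox_age_days", 9999) < 30:             # very recently created mailbox
        score += 0.2
    if signals.get("local_part_looks_generated"):              # machine-like character pattern
        score += 0.2
    if signals.get("geo_mismatch"):                            # IP/device/geo vs campaign targeting
        score += 0.1
    if signals.get("signups_from_source_last_hour", 0) > 20:   # velocity from one source
        score += 0.3
    return min(score, 1.0)
```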

This is where sign-up risk scoring becomes more useful than pass or fail logic. Medium-risk traffic is usually the part worth handling carefully, because that is where false positives live. Yesterday, after stand-up, EVE-113 was blocked by the new IP reputation feed. A quick call with Chris cleared it. New date set: staging live by 4 pm on 18 March 2026. Between client calls last week, I rewrote the acceptance criteria for the scoring story, and the tests passed once edge cases around newly registered domains were covered.

The non-negotiable bit is explainability. If a user is slowed or stopped, the team should be able to say why in plain English. Not “system flag”, not “failed validation”, but something a growth lead, fraud analyst and compliance owner can all inspect later. That audit trail matters when you are defending a suppression rule, reviewing prize draw disqualifications, or just trying to work out why conversion dipped on a Tuesday.

Who is affected

CRM teams feel the impact first because they inherit the list quality problem. Better deliverability controls at sign-up reduce hard bounces, complaint rates and list churn before they hit the first welcome journey. One retail client moved from a new-joiner bounce rate of 6% to below 1.5% in January 2026 after introducing graded sign-up controls. Owner: Anika Sharma, Head of CRM. Target: keep new-joiner hard bounces below 2% for the rest of 2026, reviewed monthly.

Growth teams get a cleaner measure of performance. A cheap lead is not a good lead if it never reaches the inbox or ends up in the abuse queue. Explainable scoring shifts reporting from raw CPA to something closer to cost per qualified, contactable lead. It also forces a proper risk discussion. If a campaign accepts a 3% to 5% false positive rate on the review path, that decision needs an owner, a mitigation and a review date. Otherwise it is just drift with a slide deck.

Fraud and compliance teams benefit from the log trail. In UK competitions and reward mechanics, decisioning needs to be fair, repeatable and supportable. If you stop an entry, you should be able to point to the reason class, the timestamp, the rule version and the review route. That is operationally cleaner and sits better with an evidence-led approach to fraud prevention and data handling. If you collect email addresses for future marketing, keep consent language explicit, keep opt-out options clear, and keep service messages separate from promotional follow-up. Mixing the two is how avoidable issues start.
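
To make that concrete, a stopped or slowed entry can be written as a small, append-only record. This is a sketch under assumptions: the field set mirrors the article (reason class, timestamp, rule version, review route), but the `log_decision` helper and the JSON-lines file are stand-ins for whatever logging your stack already has.

```python
# Sketch of an explainable decision record; the storage target and helper
# name are placeholders, only the field set follows the article.
import datetime
import json

def log_decision(email_hash: str, action: str, reason_class: str,
                 rule_version: str, review_route: str,
                 path: str = "signup_decisions.jsonl") -> None:
    """Append one decision record for later inspection."""
    record = {
        "email_hash": email_hash,        # store a hash, not the raw address
        "action": action,                # "slow" or "stop"
        "reason_class": reason_class,    # e.g. "disposable_domain"
        "rule_version": rule_version,    # versioned so the rule can be defended later
        "review_route": review_route,    # e.g. "manual_review_queue"
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```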

Actions and watchpoints

The first action is a threshold workshop. Owner: Head of Data Quality or equivalent. Date: set within the next two weeks, with an initial rule set signed off by the end of Q2 2026. Output: a versioned matrix covering pass, slow and stop thresholds, acceptance criteria for each path, and the named approver for any exception.
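
The output of that workshop can be as plain as a versioned rule set checked into source control. The structure below is only a sketch of the shape; the numbers, route names and approver value are placeholders for whatever the workshop actually signs off.

```python
# Placeholder pass / slow / stop matrix; every value here is illustrative.
THRESHOLDS = {
    "version": "2026-03-18.1",
    "approver": "Head of Data Quality",
    "pass": {"max_risk_score": 0.30},
    "slow": {"max_risk_score": 0.70, "route": "otp_then_manual_review"},
    "stop": {"min_risk_score": 0.70, "log_reason_code": True},
}
```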

The second action is implementation across every capture point, not just the hero form on the campaign landing page. Website forms, microsites, partner imports, CRM uploads and any offline reconciliation flow all need the same decision logic or a documented variant. Owner: lead developer or systems integrator. Date: integration plan agreed before build starts, with a realistic delivery window of two to four weeks plus UAT. Acceptance criteria should include response time, fallback behaviour if the scoring API is unavailable, and logging of every slow or stop reason code.
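
One of those acceptance criteria, fallback behaviour, is worth sketching because it is easy to leave undefined. The endpoint URL, the one-second timeout and the fail-open choice below are all assumptions for the example; whether you fail open (accept and flag for re-check) or fail closed is a decision for the threshold workshop, not for the code.

```python
# Sketch of fail-open behaviour when the scoring API is unavailable.
# The URL and timeout are placeholders, not a real service.
import requests

def score_with_fallback(email: str,
                        api_url: str = "https://scoring.example.com/v1/score") -> dict:
    """Call the scoring API; fail open with a re-check flag if it is unreachable."""
    try:
        resp = requests.post(api_url, json={"email": email}, timeout=1.0)
        resp.raise_for_status()
        return resp.json()
    except requests.RequestException:
        # Fail open: accept the entry, but mark it for a later batch re-check.
        return {"action": "pass", "reason_class": "scoring_unavailable_fail_open", "recheck": True}
```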

The third action is weekly review. Owner: CRM operations or fraud operations lead. Date: standing review every week from launch through the first six weeks. Checkpoints:

  • Hard bounce rate from new sign-ups
  • Spam complaint rate on the first send
  • Percentage of traffic routed to slow review
  • False positive rate confirmed through manual checks
  • Rule changes logged with date, owner and reason
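
Those checkpoints are simple ratios, so the weekly review can run off a few counts from the ESP export and the review queue. The thresholds in this sketch echo the figures used earlier in the piece (hard bounces below 2%, complaints below 0.1%) and should be replaced by whatever your own rule set agreed.

```python
# Weekly review ratios; the thresholds are examples, not universal targets.
def weekly_checkpoints(sent: int, hard_bounces: int, complaints: int,
                       signups: int, slowed: int, confirmed_false_positives: int) -> dict:
    """Compute the weekly checkpoint metrics from raw counts."""
    return {
        "hard_bounce_rate": hard_bounces / max(sent, 1),
        "hard_bounce_ok": hard_bounces / max(sent, 1) < 0.02,     # below 2%
        "complaint_rate": complaints / max(sent, 1),
        "complaint_ok": complaints / max(sent, 1) < 0.001,        # below 0.1%
        "slow_route_share": slowed / max(signups, 1),
        "false_positive_rate": confirmed_false_positives / max(slowed, 1),
    }
```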

The main watchpoint is over-correction. Teams under pressure sometimes tighten rules too far after one bad spike. That creates a different problem: genuine users blocked without a clear path back in. Start conservative. Review medium-risk traffic. Tighten only where the evidence holds for more than a single noisy day. A review queue with no owner is just a slower failure mode.

There is another watchpoint worth stating plainly: platform and campaign rules move. If your competition mechanic depends on social sharing, UGC or embedded data capture, re-check the relevant promotion policy before launch, define exactly what counts as a valid entry, and keep forms short enough that people can complete them without abandoning halfway through. For direct marketing capture, explain how contact data will be used and give people a clear opt-out route. None of that is glamorous. It is just how you keep the path to green open.

How to judge whether it is working

A scoring model is only useful if it changes an operational outcome you can measure. For most teams, the first 30 days after launch should answer five questions:

  • Did new-joiner hard bounces fall below the agreed threshold?
  • Did spam complaints stay within the acceptable range for the ESP?
  • Did the slow path catch enough risky traffic to justify the added friction?
  • Did manual review confirm the threshold is not clipping too many legitimate users?
  • Did every rule change have an owner, date and reason logged?

If the answers to those questions are vague, the model needs work. If the answer is “we think so”, it needs a better dashboard and a better owner. That is the job.

For teams using EVE, the sensible next step is not a grand redesign. It is a short decisioning review of your current capture flow, your suppression logic and your first-send performance. If you want to map a pass, slow, stop model with named owners, workable thresholds and a proper audit trail, contact Holograph. We can help you build a plan that is actually a plan, with dates, acceptance criteria and a realistic path to green.

Take this into a real brief

If this article mirrors the pressure in your own workflow, bring it straight into a brief. We keep the context attached so the reply starts from what you have just read.

Related thoughts

EVE operating playbook for UK teams

Build a pragmatic playbook for real-time email quality decisions with measurable, audit-ready email validation controls that balance deliverability, user friction and growth.