Quill's Thoughts

Fake software download spikes: when to slow sign-ups linked to newly created mailbox patterns

A practical delivery assurance note on fake software download spikes: when to slow sign-ups linked to newly created mailbox patterns, using evidence-led email judgement with clear owners, dates and risk controls.

EVE Playbooks · 16 Mar 2026 · 6 min read


We are seeing software download and trial sign-up spikes tied to newly created mailboxes that pass basic validation but fail any sensible quality check. The operational issue is straightforward: volume looks healthy at the top of the funnel, while engagement, deliverability and reporting take the hit a few days later.

This note sets a baseline, shows what has shifted, and lays out the actions needed to get back to green. The recommendation is not to block more people for the sake of it. It is to apply better email judgement so acquisition teams can slow or challenge risky sign-ups with evidence, not guesswork.

Signal baseline

Our current email validation process, owned by the Acquisition Tech team, is solid for basic hygiene. In the Q4 2025 review, it blocked 99.8% of syntactically invalid or non-existent email addresses. That tells us the form catches formatting errors, missing domains and absent MX records. Useful, yes. Complete, no.
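
For context, the baseline hygiene check amounts to little more than a syntax test plus an MX lookup. A minimal sketch, assuming the dnspython package; the function name and deliberately loose regex are illustrative, not our production code:

```python
import re
import dns.exception
import dns.resolver  # pip install dnspython

def passes_basic_hygiene(email: str) -> bool:
    """Syntax plus MX check: catches malformed addresses and dead domains, nothing more."""
    # Deliberately loose pattern; full RFC 5322 validation is not the point here.
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        return False
    domain = email.rsplit("@", 1)[1]
    try:
        # A domain with no MX records cannot receive mail, so reject it.
        return len(dns.resolver.resolve(domain, "MX")) > 0
    except dns.exception.DNSException:
        return False
```

This is what the 99.8% figure measures: an address with plausible syntax and a live mail server passes, whether or not a genuine user sits behind it.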

The gap is intent. An address can be technically deliverable and still be poor quality. In February 2026, during the campaign referred to internally as Project Nightingale, a 48-hour sign-up spike passed those binary checks but behaved very differently after registration. Post-acquisition engagement from that cohort was 85% below the campaign average, and first-week bounce rates reached 40%. That is the signal. The implication is clear enough: a pass/fail check confirms an inbox may exist, but it does not tell us whether there is a genuine user behind it or a mailbox created moments earlier to game trial access or inflate downloads.

What is shifting

The pattern is more coordinated than a run of typos or obvious fake addresses. Analysis completed by the Data team on 5 March 2026 found more than 4,000 sign-ups in a single day from a domain less than 24 hours old. Those accounts downloaded the trial within minutes and then showed no meaningful follow-on activity. That sequence does not prove fraud on its own, but it is a strong enough cluster to justify intervention.
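
To make that cluster test concrete, here is a minimal sketch of the check the Data team's finding implies: count sign-ups per domain over a day and flag domains that are both very new and unusually busy. The `domain_created_at` lookup and both thresholds are assumptions for illustration only.

```python
from collections import Counter
from datetime import datetime, timedelta

def flag_new_domain_clusters(emails_today, domain_created_at,
                             min_count=100, max_age_hours=24):
    """Flag domains that are both newly registered and unusually busy today.

    emails_today: sign-up addresses seen in the last day.
    domain_created_at: dict of domain -> creation datetime (hypothetical; in
    practice this would come from a WHOIS or registry data feed).
    """
    now = datetime.utcnow()
    by_domain = Counter(email.rsplit("@", 1)[1].lower() for email in emails_today)
    flagged = []
    for domain, count in by_domain.items():
        created = domain_created_at.get(domain)
        if created is None or count < min_count:
            continue
        if now - created < timedelta(hours=max_age_hours):
            flagged.append((domain, count))
    return flagged
```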

Yesterday, after stand-up, ticket OPS-451 was blocked by an alert from our primary email service provider following a rise in hard bounces linked to the February cohort. A quick call with the deliverability specialist cleared the immediate dependency and the new review date was set for 18 March 2026. This is a warning. If we keep treating suspect traffic as healthy demand, platform partners will respond to the bounce pattern before our dashboards do.

Who is affected

The first impact lands with marketing. Jane Foster’s team carried an estimated £12,500 in ad spend during the Project Nightingale spike for acquisitions that produced no return and weakened downstream performance. That pushes up cost per acquisition and muddies channel reporting. If you are optimising paid spend on polluted conversion data, your next budget call is already being made on bad evidence.

The second impact is deliverability. The team’s operating target is to keep sender reputation above 95/100. A cohort producing 40% first-week bounce rates puts that under pressure and can affect more than promotional email. Transactional messages to genuine users are then exposed to the same reputation drag. Poor-quality sign-ups do not damage trust in the abstract; they create bounce and engagement signals that mailbox providers use when deciding how to treat future mail.

The Product team also inherits the mess. If around 10% of new users are bots or one-time trial abusers, product analytics stop being reliable enough for roadmap decisions. Activation, retention and feature adoption all become harder to interpret. At that point, the business is not dealing with a form problem. It is dealing with compromised measurement.

How email judgement should work

The answer is not another blunt validation rule. It is a scoring model that assesses risk in context and explains the decision. That is what we mean by email judgement here: a structured, evidence-led view of whether a sign-up should pass, be challenged, slowed or held for review.

The proposed model should score at least four signals in real time:

  • Domain age and reputation: whether the domain is newly registered or associated with disposable email use.
  • IP intelligence: whether the sign-up comes from a data centre, proxy network or IP with a known spam history.
  • Sign-up velocity: whether multiple registrations are arriving from a single IP, subnet or pattern at abnormal speed.
  • Time to fill: whether the form is completed unrealistically quickly, which is a common bot indicator.

Those signals then map to tiered actions. A low-risk score, say 0 to 30, proceeds as normal. Medium risk, 31 to 70, triggers a challenge such as CAPTCHA. High risk, 71 to 100, is slowed, quarantined or blocked pending review. The point is not the exact numbers on day one. The point is having thresholds, owners and acceptance criteria so they can be tuned against live results. Between 10:30 and 11:15 last Thursday, I rewrote the acceptance criteria for the risk-scoring story so tests passed once the edge case of shared office IPs was covered. That sort of detail matters.
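
As a sketch of how those signals and tiers could hang together, the fragment below scores the four signals with illustrative weights and returns a tier plus reason codes. Every weight, threshold and name here is a placeholder to be tuned, not an agreed value.

```python
from dataclasses import dataclass

@dataclass
class SignupSignals:
    domain_age_days: float          # from a WHOIS/registry feed
    disposable_domain: bool         # on a known disposable-email domain list
    datacentre_or_proxy_ip: bool
    ip_spam_history: bool
    subnet_signups_last_hour: int   # sign-up velocity from the same subnet
    form_fill_seconds: float        # time to fill the form

def score_signup(s: SignupSignals) -> dict:
    """Return a 0-100 risk score, a decision tier and reason codes."""
    score, reasons = 0, []
    if s.domain_age_days < 1:
        score += 30; reasons.append("NEW_DOMAIN")
    if s.disposable_domain:
        score += 25; reasons.append("DISPOSABLE_DOMAIN")
    if s.datacentre_or_proxy_ip:
        score += 15; reasons.append("DATACENTRE_OR_PROXY_IP")
    if s.ip_spam_history:
        score += 20; reasons.append("IP_SPAM_HISTORY")
    if s.subnet_signups_last_hour > 20:   # placeholder velocity threshold
        score += 20; reasons.append("HIGH_VELOCITY")
    if s.form_fill_seconds < 3:           # unrealistically fast form fill
        score += 15; reasons.append("FAST_FILL")
    score = min(score, 100)
    tier = "proceed" if score <= 30 else "challenge" if score <= 70 else "hold"
    return {"score": score, "tier": tier, "reasons": reasons}
```

Returning reason codes next to the score is what makes the weekly threshold review workable: you can see why a sign-up was challenged, not just that it was, and the same codes feed the audit logs in step 2 below.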

Actions and watchpoints

If your plan has no named owners and dates, it is not a plan. Fix it. The path to green below keeps scope explicit and gives each step a checkpoint we can test.

  1. Finalise the scoring model and thresholds
    • Owner: Tom Allen, Data Lead
    • Date: 25 April 2026
    • Acceptance criteria: documented model covering signal weights, risk tiers and decision rules, signed off by the Head of Marketing.
    • Checkpoint: back-test against the February 2026 cohort and show whether the model would have flagged at least 80% of the suspect sign-ups without exceeding a 2% false-positive rate in known good traffic (a minimal back-test sketch follows this list).
  2. Build and integrate the scoring service
    • Owner: Priya Singh, Development Team Lead
    • Date: 30 June 2026
    • Acceptance criteria: API returns a score and reason codes in under 300ms for 95% of requests, with audit logs and monitoring in place.
    • Checkpoint: event logs visible in the acquisition dashboard and traceable to individual sign-up decisions.
  3. Launch tiered actions on a controlled test
    • Owner: Mark Costello, QA Lead
    • Date: 31 July 2026
    • Acceptance criteria: A/B test live on 10% of traffic with clear reporting for conversion rate, bounce rate, challenge completion and blocked sign-ups.
    • Checkpoint: weekly review every Friday from 7 August 2026 with a change log for threshold adjustments.
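
The step 1 checkpoint reduces to two rates over a labelled cohort: the share of known-suspect sign-ups the model flags, and the share of known-good sign-ups it wrongly flags. A minimal back-test sketch, assuming each record is a `(signals, is_suspect)` pair and reusing the `score_signup` sketch above:

```python
def back_test(labelled_cohort, challenge_threshold=30):
    """Check flag rate on suspect traffic and false-positive rate on good traffic."""
    flagged_suspect = total_suspect = flagged_good = total_good = 0
    for signals, is_suspect in labelled_cohort:
        flagged = score_signup(signals)["score"] > challenge_threshold
        if is_suspect:
            total_suspect += 1
            flagged_suspect += flagged
        else:
            total_good += 1
            flagged_good += flagged
    flag_rate = 100.0 * flagged_suspect / max(total_suspect, 1)
    false_positive_rate = 100.0 * flagged_good / max(total_good, 1)
    # Acceptance criteria from step 1: at least 80% flagged, at most 2% false positives.
    passed = flag_rate >= 80.0 and false_positive_rate <= 2.0
    return passed, flag_rate, false_positive_rate
```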

The main risk at launch is over-correcting and blocking legitimate users. The mitigation is equally straightforward: controlled rollout, reason codes, and a weekly threshold review with Marketing, Data and Deliverability in the room. A tentative full rollout date of 1 September 2026 is reasonable only if the test shows reduced bounce exposure and no material drop in good-quality conversions. If those conditions are not met, the date moves. Better a delayed rollout than a confident mistake.
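
One way to keep the 10% test cohort stable between sessions is deterministic bucketing: hash a stable identifier and compare it against the rollout percentage. A sketch, assuming the sign-up email address is an acceptable bucketing key:

```python
import hashlib

def in_test_cohort(email: str, rollout_pct: int = 10) -> bool:
    """Deterministically assign roughly rollout_pct% of sign-ups to the scored path."""
    digest = hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()
    return int(digest[:8], 16) % 100 < rollout_pct
```

A user who retries then sees consistent behaviour, and widening the cohort toward the tentative 1 September 2026 full rollout is a one-number change.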

Protecting acquisition from programmatic abuse needs more than a valid-email tick box; it needs measured email judgement with owners, dates and evidence behind each decision. If you are seeing the same pattern in your sign-up data, we can talk through the signals, set practical thresholds and map a path to green that suits your funnel. The next step is a straightforward conversation about what is happening now, what should be challenged, and what can be sorted without slowing genuine users down.

Related thoughts

EVE operating playbook for UK teams

Build a pragmatic playbook for real-time email quality decisions with measurable, audit-ready email validation controls that balance deliverability, user friction and growth.