Quill's Thoughts

From campaign automation to approval automation: where UK teams actually lose time

A UK financial services case study showing how fixing editorial workflow automation cut approval times from 12 working days to 3, reduced rework, and gave teams a clearer audit trail.

Quill Product Notes · 13 Mar 2026 · 6 min read


Overview

Most UK marketing teams have already spent the money on campaign automation. The awkward bit is that the real delay often sits elsewhere: in approvals, hand-offs and version confusion. In one mid-sized financial services team we reviewed in early 2025, the work was not slow because the platform could not send; it was slow because nobody had a reliable, shared route to sign-off.

This case study is the useful part, not the glossy part. We looked at six months of campaign data, mapped the approval path step by step, then introduced editorial workflow automation only after the process made sense on paper. The result was a drop in average approval time from 12 working days to 3 over the following quarter, with post-approval rework falling from 20% to under 2%.

The situation: a familiar state of organised chaos

When we started with this firm in early 2025, the surface story looked fine. Good team, capable campaign tooling, sensible people. Underneath, the approval process was a bit of a faff. Marketing could draft and schedule quickly enough, but nothing moved until Product, Brand and Compliance had all reviewed the asset, and each team was working from slightly different versions in slightly different places.

One Q3 2025 savings campaign made the problem hard to ignore. The assets were effectively ready in July, but the launch slipped to mid-August after a compliance comment was sent into an email thread that stalled when the recipient went on annual leave. That is the sort of failure that sounds small until you add up the lost time, duplicated checking and the quiet stress it creates across a team.

We set a baseline using the previous six months of campaign activity. Average time from “ready for review” to “approved for publication” was 12 working days. Around 1 in 5 campaigns needed significant rework after final approval because a stakeholder had missed a version or feedback had been interpreted differently by different people. The team described the process as unpredictable, which matters more than it sounds. The Office for National Statistics’ quarterly personal well-being estimates track anxiety as a live national measure, and while that dataset is not a workplace diagnosis, messy systems do tend to make ordinary work needlessly tense. Fancy that.

Our approach: mapping the friction before the fix

We did not start with software. That is usually where teams waste budget. If a platform cannot explain its decisions, it does not deserve your budget; if your process cannot explain its own hand-offs, software will not rescue it either.

So we mapped one recent campaign from first draft to publication in the client’s London office, over a cup of tea and a fair amount of scepticism. The map showed three practical faults. First, ownership was blurred: marketing initiated work, but nobody could say with confidence whether the true final decision sat with Product, Compliance or Brand. Second, reviews were too sequential, so one team waited idle while another team finished polishing. Third, there was no dependable source of truth for comments, decisions and approved copy.

We redesigned the workflow on paper before touching any tooling. Four roles were defined clearly: Creator, Editor, Specialist Reviewer and Final Approver. We set review rules, required all feedback to sit in one place, and agreed turnaround expectations for each stage. The most useful shift was allowing parallel review where risk allowed it, so Brand and Product could assess core messaging at the same time instead of taking turns out of habit. The trade-off was straightforward: more discipline up front in exchange for less churn later. For the first week, some of the team felt the new structure was slower because they could no longer fire off loose comments by email. Then the queue started shrinking.
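To make the shape of that redesign concrete, here is a purely illustrative sketch of the paper workflow expressed as data. The team names and stage groupings are hypothetical, not the client's actual configuration; the point is that a stage can hold more than one reviewer, so parallel review reduces the number of hand-offs without reducing the number of reviews.

```python
# Illustrative only: each stage is a set of teams that may review at the
# same time; the stages themselves still run in order.
WORKFLOW = [
    {"Editor"},              # editorial pass
    {"Brand", "Product"},    # parallel review of core messaging
    {"Compliance"},          # higher-risk check, kept sequential
    {"Final Approver"},      # single named sign-off
]

def sequential_depth(workflow):
    """Number of hand-offs a draft waits through (stages, not reviewers)."""
    return len(workflow)

def reviewer_count(workflow):
    """Total reviews performed, regardless of how they are scheduled."""
    return sum(len(stage) for stage in workflow)
```

Five reviews still happen, but the draft only waits through four stages because Brand and Product assess the messaging at the same time rather than taking turns.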

The implementation: a system to enforce the rules

Once the workflow was coherent, we introduced a lightweight editorial workflow automation setup to support it. Not a giant platform migration. Just enough system to enforce version control, consolidate feedback and show status without guesswork.

We configured tiered approval paths by content type and risk. A low-risk article needed a limited sign-off route; a regulated financial promotion triggered a stricter path involving Product, Brand and Compliance, followed by a mandatory legal check. Every comment, revision and approval was logged to create an auditable record. That mattered for two reasons: it reduced repeated debate, and it gave the team a usable editorial memory rather than a graveyard of email attachments.
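The tiered routing logic can be sketched in a few lines. This is an assumption-laden illustration, not the client's setup: the content types, tier names and fallback rule here are hypothetical. The useful property is that regulated content always takes the strict path, and anything unrecognised falls back to the strict path rather than slipping through a light one.

```python
# Illustrative sketch of tiered approval paths by content type and risk.
# All names are hypothetical.
APPROVAL_PATHS = {
    "low_risk_article": ["Editor", "Final Approver"],
    "financial_promotion": [
        "Product", "Brand", "Compliance", "Legal", "Final Approver"
    ],
}

def route(content_type, regulated=False):
    """Pick an approval path; regulated content always takes the strict
    path, and unknown content types fail safe to it as well."""
    if regulated:
        return APPROVAL_PATHS["financial_promotion"]
    return APPROVAL_PATHS.get(content_type, APPROVAL_PATHS["financial_promotion"])
```

Failing safe on unknown content types is the design choice worth copying: the cost of an unnecessary compliance review is small next to the cost of a regulated promotion going out on the light path.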

We also added simple reporting on turnaround times and bottlenecks. That sounds modest, but it changed behaviour quickly. When delays are visible by stage and owner, vague complaints turn into fixable operational work. Between January and March 2025, we trialled a similar reporting pattern in another workflow and hit a small failure: notifications became noise because too many people were copied too often. We fixed it with a simple rule: alert only the current owner and the next approver. Less theatre, more movement.
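That notification rule is small enough to show in full. Again, this is an illustrative sketch with hypothetical names rather than the production logic: given an approval path and the current stage, only two parties hear about it.

```python
# Illustrative sketch of the "less theatre" rule: notify only the
# current owner and the next approver, never the whole distribution list.
def who_to_notify(path, current_index):
    """Return the current owner plus the next approver, if there is one."""
    recipients = [path[current_index]]
    if current_index + 1 < len(path):
        recipients.append(path[current_index + 1])
    return recipients
```

At the final stage the list collapses to one recipient, which is exactly when a wide broadcast is most tempting and least useful.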

The outcomes: faster, safer, and more resilient

Over the following quarter, average approval time dropped from 12 working days to 3. Post-approval rework fell from 20% to under 2%. The Compliance team reported a 40% reduction in time spent on routine marketing reviews, which gave them more capacity for higher-risk advisory work instead of repetitive checking.

That is the sort of outcome worth paying attention to because the baseline and the change are both visible. Automation without measurable uplift is theatre, not strategy. Here, the uplift came from removing avoidable waiting, not from pretending software had become cleverer than the people using it.

There was also a resilience gain that did not show up neatly in one dashboard. Last Tuesday, while a blizzard hit Sunderland with winds around 37 mph and temperatures dropping to about -5°C, a time-sensitive interest rate update still moved through the full approval path and was published on schedule. The weather is not the story, obviously. The story is that the system no longer depended on one person remembering which version mattered from their inbox at home.

Lessons for other UK teams

The first lesson is dull but reliable: fix the workflow before you buy more tooling. Most teams do not have an automation problem first; they have an ownership and routing problem. Start by mapping where work waits, who actually decides, and which content types genuinely need heavy review.

The second is to build institutional memory deliberately. When a legal phrasing decision disappears into an old email chain, the same debate returns six months later wearing a different hat. A proper record of decisions, rationale and approved wording reduces repeat work and helps teams ship with more confidence.

The third is to make governance visible rather than grand. Clear stages, named approvers, exception rules and turnaround reporting are usually enough to improve flow. The trade-off is real: stronger governance can feel more formal at first. In practice, it removes ambiguity, and ambiguity is what burns the days.

There is a broader point here for UK teams trying to modernise content operations. ONS local authority and quarterly well-being data are useful reminders that people do better work in systems that reduce uncertainty, not add to it. You do not need magical AI claims or another sprawling stack to get there. You need a workflow people can follow, audit and improve.

If your team is still losing days to approval loops, conflicting comments and inbox archaeology, Quill can help you map the real blockage and build a workflow that is easier to run and easier to trust. If you fancy a practical look at where the faff is creeping in, have a word with the team and we can work through the trade-offs, the metrics and the next sensible step together.
