Quill's Thoughts

From campaign management to content workflow reliability: where UK marketing ops should measure failure first

Founder field notes on how a UK B2B tech firm moved from campaign reporting to content workflow reliability, cutting delays, reducing approval errors and measuring failure where it starts.

Quill Product notes · 11 Mar 2026 · 6 min read


Overview

Most marketing teams still measure what happens after launch: clicks, pipeline, conversion and cost. Fair enough. But if the internal system that produces the work is unreliable, those numbers hide a lot of waste. The smarter move is to measure failure at the point it starts: briefs that stall, approvals that vanish, assets that go live from the wrong folder and legal checks that happen late, if at all.

These founder field notes cover a UK B2B technology engagement where we shifted attention from campaign management alone to workflow reliability. The result was not magic, and certainly not a shiny-tool miracle. It was a practical rebuild of process, governance and hand-offs that reduced delays, tightened compliance and gave the team a bit more breathing room.

Situation

When we started with a UK B2B software firm in early 2025, the campaign numbers looked respectable. Lead targets were being met and the outbound machine appeared healthy enough on paper. Underneath, though, the content operation was held together by shared drives, long email threads and crossed fingers. Last Tuesday, in Abbey Mead, Surrey, I was reviewing their approval map over a cup of tea when one point became painfully clear: they had no single source of truth for assets or sign-off status. That is not a tooling problem first; it is a systems problem.

The baseline was messy in ways most ops leaders will recognise. A flagship annual report shipped three weeks late after a stakeholder approval disappeared into an email chain. A social campaign used an outdated logo because the designer worked from the wrong folder. One article containing regulated financial language reached publication without final legal review. We estimated the team was burning 15 to 20 hours a week on chasing approvals, checking versions and cleaning up avoidable mistakes. That matters because time lost to workflow friction is time not spent improving campaigns, testing messaging or supporting sales. The trade-off was stark: they could keep moving quickly in a loose system, or they could slow down briefly to build one that people could trust.

Approach

We did not begin with software demos. We began with a whiteboard, sticky notes and a slightly sceptical room. The first job was to map every step from brief to publication and assign ownership to each hand-off. Once laid out properly, the process looked like a plate of spaghetti, which at least gave everyone a shared view of the problem. From there, we introduced a tighter measurement set: draft-to-publication time, revision count, approval exceptions and the number of times an asset was reworked because the wrong version had been used.
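The measurement set above is easy to compute once hand-offs live in one system. A minimal sketch, assuming a simple event log per asset (the event names such as "final_draft" and "rework_wrong_version" are illustrative, not from any specific tool):

```python
from datetime import datetime

def workflow_metrics(events):
    """Compute the four reliability metrics from a list of events.

    Each event is a dict with 'asset', 'type' and 'timestamp' keys.
    Event types here are hypothetical labels for this sketch.
    """
    by_asset = {}
    for e in events:
        by_asset.setdefault(e["asset"], []).append(e)

    metrics = {}
    for asset, evs in by_asset.items():
        evs.sort(key=lambda e: e["timestamp"])
        # Time from final draft submission to going live
        submitted = next(e["timestamp"] for e in evs if e["type"] == "final_draft")
        published = next(e["timestamp"] for e in evs if e["type"] == "published")
        metrics[asset] = {
            "draft_to_publish_hours": (published - submitted).total_seconds() / 3600,
            "revision_count": sum(e["type"] == "revision" for e in evs),
            "approval_exceptions": sum(e["type"] == "exception" for e in evs),
            "wrong_version_rework": sum(e["type"] == "rework_wrong_version" for e in evs),
        }
    return metrics
```

The point is not the code; it is that none of these numbers can be trusted while approvals live in email threads, because the events are never recorded.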

Then we rebuilt the content automation workflow around three practical controls. First, one central system for tasks, files, owners and deadlines. Second, clear approval rules based on risk rather than hierarchy. Content containing financial claims triggered finance or legal review; visual assets required brand sign-off before release. Third, a documented route for exceptions, so teams did not have to invent process mid-flight. That governance work matters more now because content volume is only going one way. Clutch reported on 10 March 2026 that 90% of businesses use graphic designers as AI reshapes creative work, which tells you visual production is hardly slowing down. More output without better controls simply creates more expensive mistakes.
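Risk-based routing of this kind is simple enough to express as a rule table. A hedged sketch, assuming boolean flags on each asset (the flag and step names are hypothetical, but the logic mirrors the rules described above):

```python
def required_reviews(asset):
    """Return the review steps an asset must pass before release.

    Routing is driven by risk, not hierarchy: financial claims trigger
    finance or legal review, visual assets require brand sign-off, and
    every asset has a named owner who approves the final hand-off.
    """
    steps = []
    if asset.get("has_financial_claims"):
        steps.append("finance_or_legal_review")  # regulated language
    if asset.get("is_visual"):
        steps.append("brand_signoff")            # logos, templates, imagery
    steps.append("owner_approval")               # every asset has a named owner
    return steps
```

Writing the rules down like this is most of the governance work: anyone can read them, challenge them and see exactly why an asset was held.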

There was a useful failure in testing, too. Between May and June 2025, we trialled a fixed five-stage approval model for every blog post. It was a bit of a faff. Even a minor typo correction could sit for 48 hours. So we changed it. Non-substantive edits were moved to a fast-track route, while regulated or commercially sensitive material kept the full review path. That small adjustment made the system usable. A process that cannot distinguish between low-risk edits and high-risk claims is not rigorous; it is just clumsy.
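The fast-track adjustment amounts to one extra branch in the routing logic. A sketch under stated assumptions (the change labels and flags are illustrative; a real implementation would use whatever taxonomy the team's system records):

```python
# Changes that always force the full review path, regardless of size
SUBSTANTIVE_CHANGES = {"claim_changed", "price_changed", "new_section"}

def review_route(edit):
    """Route an edit to the fast-track or full review path by risk."""
    if edit.get("regulated") or edit.get("commercially_sensitive"):
        return "full_review"
    if SUBSTANTIVE_CHANGES & set(edit.get("changes", [])):
        return "full_review"
    return "fast_track"  # typos, formatting, broken links
```

The design choice is deliberate: the default is fast, and only named risk signals slow an edit down, which is what stops a typo fix from sitting in a queue for 48 hours.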

Outcomes

Once the new workflow was in place, we compared the baseline against the first full quarter after implementation. The average time from final draft submission to publication fell from five working days to roughly four hours for standard content. That was not because people suddenly worked harder. It was because the system removed dead time: no more waiting for someone to spot an email, no more guessing who owned the next step and far fewer version-control mishaps.

Revision cycles on major pieces dropped from around seven rounds of scattered feedback to two structured reviews. Comments were gathered in one place, which meant fewer contradictions and much less rework. Compliance performance improved as well. In the first quarter after go-live, the team recorded no repeat incidents involving outdated logos or missing disclaimer checks. I would not oversell that as perfection; zero incidents in one quarter is a useful signal, not a law of physics. Still, it is a better operational basis than hoping the right person spots the wrong file at 5.27 pm on a Friday.

The trade-off was real. The team invested about 40 hours in workshops, process documentation and training before the gains showed up. There was also predictable resistance from people who were comfortable with email, however unreliable it had become. That is the cost of replacing familiar chaos with accountable structure. Worth it, yes. Free, no.

What the wider signal says

This project did not happen in isolation. The broader signal across marketing and operations is that governance is catching up with production. Stanford's AI for Marketing programme, listed on 10 March 2026, points in the same direction: smarter strategy and campaign design only work when the surrounding systems are disciplined enough to support them. The interesting bit is not that AI can help create more content. We knew that already. The harder question is whether your organisation can explain how content was produced, reviewed and approved. If a platform cannot explain its decisions, it does not deserve your budget.

You can see a parallel in regulated and capital-sensitive markets too. On 10 and 11 March 2026, Kosmos Energy announced the launch and pricing of a public common stock offering, with market coverage following the share move the same day. Different sector, same lesson: when the stakes rise, governance, sequencing and clear communication stop being admin and become core operating discipline. Marketing ops teams should take the hint. More automation creates more need for audit trails, not less. The trade-off here is simple: faster throughput is attractive, but without traceability you are just shipping risk more quickly.

Lessons for others

If you are running UK marketing operations, measure failure where work breaks first, not only where campaigns finish. Start with a handful of operational metrics you can trust: approval delay, revision count, exception rate and time from final draft to live asset. Those numbers are usually easier to improve than vanity dashboards, and they reveal whether your system is actually helping the team do the job.

Build governance around risk, not ego. Not every asset needs a committee, and not every edit deserves executive attention. The trick is to route sensitive material properly while letting routine work move. That balance matters. Too little control creates brand and compliance drift; too much creates backlog and resentment. Automation without measurable uplift is theatre, not strategy.

Start with one painful workflow and prove the gain. In this case, fixing report approvals created enough trust to clean up the rest of the content operation. That is usually how change sticks: one visible win, then the next. If your current content automation workflow feels more like archaeology than operations, we should talk it through. We can map where failure starts, test what is worth fixing first and help you build a system your team can actually ship with, without adding more theatre to the stack.

If this is on your roadmap, the Quill team can help you run a controlled pilot, measure the outcome and scale only when the evidence is clear.
