Why CRO Programmes Are More Valuable Than You Think

At a Glance...
CRO programmes are often undermined when businesses demand projected returns and measure success purely on test wins — creating a culture where teams play it safe, ignore valuable learnings, and optimise for appearances rather than impact. In reality, unsuccessful tests are just as important as winning ones, because the insight they generate is what drives long-term improvement. The teams that build genuinely valuable CRO programmes are those who treat every result — win or loss — as data to iterate from, rather than a verdict to be judged on.
I was speaking to a friend recently who runs a CRO programme at a fairly well-known brand. Budget season had rolled around and, like clockwork, the exec team had landed in her inbox with a familiar request: project the P&L for the CRO programme. What does it cost to run? What will it deliver over the next one, two, three years?
It’s a completely reasonable ask. Exec teams are trained to look ahead, eliminate unknowns and understand where value is going to come from — and where the gaps might be. Predictability is the language of the boardroom, and that’s not inherently wrong.
But for teams running CRO programmes, this creates a real and often underappreciated challenge. Because when a business starts expecting CRO to project its own returns in advance, it quietly sets in motion a set of behaviours that can be genuinely damaging:
- The business begins to measure success on test wins alone
- Win rate becomes the most important KPI
- Learnings from unsuccessful tests tend to be glossed over or ignored entirely
- Teams start optimising to look successful, rather than driving real impact
And here’s the thing — once you’re in that cycle, it’s hard to spot from the inside.
Some Truths Worth Sitting With
Before we go any further, let’s establish a few things.
If you only count wins as value, you’re dramatically understating what your CRO programme is actually worth.
A huge part of what CRO gives a business is the knowledge of what not to do. That’s not a consolation prize — it’s genuinely valuable intelligence. Think about how many product decisions, design changes and UX updates get shipped by businesses every year based on assumption, gut feel or HiPPO (Highest Paid Person’s Opinion). Some of those decisions are damaging, and those businesses will never know it.
CRO creates a level of self-awareness within an organisation. And just like self-aware people tend to make better decisions over time — more consistently, with fewer expensive mistakes — self-aware businesses do too.
So when a test doesn’t produce the result you hoped for, calling it a “learning” isn’t just a cliché people like me trot out to soften the blow. It’s accurate. It’s necessary. You need unsuccessful tests to improve in the long run.
Which brings us to something worth being a little suspicious of: CRO programmes with abnormally high win rates. On paper, they look impressive. In reality, teams with inflated win rates are usually testing the wrong things — low-risk, low-ambition hypotheses designed to keep internal stakeholders happy rather than genuinely move the needle. Lots of activity. Not a lot of genuine impact. Groupthink has well and truly set in.
And one final truth: you cannot accurately project future value from a CRO programme. Win rates, traffic volumes and uplift percentages are inherently unpredictable. Anyone who tells you otherwise is either misinformed or telling you what you want to hear.
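If you’d rather see that than take my word for it, here’s a minimal Monte Carlo sketch in Python. Every input is illustrative: 24 tests a year, a 25% win rate, winning uplifts averaging 3% on a £5m baseline. Even in this generous world, where those numbers are fixed and known up front, the “projected” annual value swings enormously from run to run:

```python
import random

def simulate_programme_year(n_tests=24, win_rate=0.25,
                            uplift_mean=0.03, uplift_sd=0.02,
                            baseline_revenue=5_000_000):
    """One simulated year of a CRO programme, with every input fixed
    and known in advance -- which, in reality, none of them are."""
    value = 0.0
    for _ in range(n_tests):
        if random.random() < win_rate:           # did this test win?
            uplift = max(0.0, random.gauss(uplift_mean, uplift_sd))
            value += baseline_revenue * uplift   # revenue added by the win
    return value

# Run the same "projection" 10,000 times and look at the spread.
outcomes = sorted(simulate_programme_year() for _ in range(10_000))
p10, p50, p90 = (outcomes[int(len(outcomes) * q)] for q in (0.10, 0.50, 0.90))
print(f"10th percentile: £{p10:,.0f}")
print(f"Median:          £{p50:,.0f}")
print(f"90th percentile: £{p90:,.0f}")
```

And that’s the flattering version. In practice the win rate and uplift distribution are themselves unknowns you’re estimating, so the real uncertainty is wider still. Any single-number P&L projection is one point plucked from a very wide distribution.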
Is Your Team Showing the Warning Signs?
It’s worth being honest with yourself here. Run through this checklist and see how many land a little close to home:
- Do you feel nervous before launching a test — not because of the technical setup, but because of how it might be perceived if it doesn’t win?
- After a run of unsuccessful tests, do you find yourself reaching for a “safe” hypothesis — something almost certain to win, just to get a positive result on the board?
- When a test doesn’t perform as expected, does your team move on from it as fast as possible and never revisit it?
- Do you only shout about results when a test has been successful?
If any of those hit home, you’re not alone — but it’s worth addressing sooner rather than later.
What You Can Do About It
Flip the narrative on unsuccessful tests. Try to quantify what you’ve actually saved in the long run by not shipping a change that didn’t perform. Somewhere out there, another business made that exact change without testing it, and they’ll never know the damage it caused. You do. That’s worth something.
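If you want a number to put in front of stakeholders, a back-of-the-envelope sketch like the one below does the job. Every figure in it is illustrative, and it deliberately assumes the measured drop would have persisted for a full year, which is a simplification; but as a conversation-starter it makes the point:

```python
def avoided_annual_loss(monthly_sessions, conversion_rate,
                        avg_order_value, observed_uplift):
    """Rough annual cost of a losing change, had it shipped site-wide.

    observed_uplift is the relative change the test measured, e.g.
    -0.04 for a 4% drop in conversion. All inputs are placeholders;
    swap in your own traffic and revenue figures.
    """
    monthly_revenue = monthly_sessions * conversion_rate * avg_order_value
    return -(monthly_revenue * observed_uplift) * 12

# e.g. 500k sessions/month, 2% conversion, £60 AOV, test showed a 4% drop
print(f"£{avoided_annual_loss(500_000, 0.02, 60, -0.04):,.0f} saved per year")
```

The measured uplift carries its own statistical uncertainty, so treat the output as an order-of-magnitude estimate rather than a precise saving. It’s still a far more honest number than the zero that losing tests are usually credited with.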
Remember the 80/20 rule. In CRO, roughly 80% of your effort should sit in analysis and strategy — understanding what’s actually happening on your site, why users are behaving the way they are, and where the real opportunities lie. Only 20% is in execution. When tests don’t work, the temptation is to skip the analysis and bounce straight into the next test. Resist it. The clues to your next success are almost always sitting in the results of your last experiment.
And remember how you learnt to ride a bike?
You didn’t hop on for the first time and immediately channel your inner Chris Hoy. (If you did, please stop reading this, get yourself down to your local velodrome and speak to someone about the Olympics — you’re wasted here.) What actually happened is you fell off. Quite a lot, at first. Then gradually less. Then one day you were riding to the park with no hands on the handlebars, doing tricks and dreaming of spending your pocket money on racing stripes from Halfords (I realise that last part might just be my own personal experience).
CRO works exactly the same way. Without the refining feedback that unsuccessful tests provide, you simply cannot improve. The falls aren’t the failure — they’re the mechanism through which you get better.
Now imagine that after falling off the bike for the first time, you went home in a strop, picked up a skateboard, and fell off that too. Now you’ve failed twice, you’re no closer to either skill, and you’ve blown your opportunity to impress your crush at the park (yes, me again). The same principle applies in CRO. When a test isn’t successful, many teams will quietly shelve the results, start worrying about their win rate, and pivot to something in a completely different area of the journey — hoping a fresh win will get stakeholders back onside. But in doing so, they’ve walked away from every clue sitting in those results. Clues that would have meaningfully improved the next test.
So here’s the ask: when your next test doesn’t go the way you planned, take a moment. Say a quiet thank you to your experimentation platform for the data. Then use it. Iterate. Go again. (One caveat worth calling out here: this only works if your analytics stack is actually set up to answer the questions that matter. If you’re flying blind on data quality or missing key instrumentation, that’s a separate conversation, and an important one.)
That test → feedback → adapt → re-test cycle is easy to describe. Staying true to it when the pressure is on is a different matter entirely. But the teams who manage it consistently? They’re the ones building CRO programmes that deliver real, lasting, compounding value — the kind that no P&L projection could ever fully capture.
Sound Familiar?
If you worked through that checklist earlier and a few of those questions landed uncomfortably close to home, that’s actually a good sign — it means you’re already thinking about this in the right way. The next step is doing something about it.
At In Digital, our Experimentation team works with businesses to build CRO programmes that drive genuine, tangible value — not just a healthy-looking win rate on a slide deck. We also work with teams to refine their analytics stack, making sure it’s actually set up to answer the questions that matter — because without that foundation, even the best testing process will only get you so far.
Whether you’re looking to get more from an existing programme, build the right foundations from scratch, or make sure your data is working as hard as it should be, we’d love to talk.
Jonny Evans, Associate Director, Experimentation
Ready to improve your performance?
Reach out to one of our team to learn more about our services and how we can help your business thrive.