How to fail at Conversion Rate Optimization: Run tests when you've already decided what to do


I once worked with a beleaguered marketing director whose primary charge was to churn through an intimidating backlog of A/B tests.

Lots of the test ideas were questionable - changes that I doubted would have a measurable impact. But the organization had already decided on the tests; they just wanted help with execution.

As we moved from zero to “decent test velocity,” gradually clearing the backlog, we obtained plenty of inconclusive results. Not particularly exciting, but they did at least de-risk some minor changes to the site.

Eventually, we got approval to go off the roadmap and test a much bigger change. (Presence vs absence of pricing details 💰)

The test produced the biggest win we’d ever seen. I was excited, and proud of the team, and optimistic about the future of the partnership.

That’s when it got weird.

It turned out that this slam dunk win was “an interesting result” that we would “certainly take into consideration in the future,” but we wouldn’t be putting the winner into production.

Why? The answer kept changing. We didn’t trust the metrics. Or we did, but we weren’t sure which aspect of the variation was most responsible for the win. And what about the downfunnel effects? Maybe we should plan on running the test again. Sometime next year.

We were caught up on the backlog, and at this point the team had the skills to keep executing without my support. And I don’t want to take anybody’s money in exchange for data that they flush down the toilet. So we parted ways amicably.

Months later, through back channels, I managed to find out what happened. The “winner” of each experiment was determined in advance, at the executive level. Running experiments was all about validating ideas, not optimizing the site.

This worked great as long as the results were favorable, or inconclusive. But the moment we got real, incontrovertible data that failed to support the desired outcome … we had a problem.


I hope this sounds ridiculous to you - why would somebody pay money for tests they’re not going to act on? But in my experience it’s extremely common.

I’ve shared the most pathological case I ever encountered, but even in a relatively healthy organization it’s easy to get carried away with “You should test that!”

You test some stuff, you get good at it. You test more stuff. Before you know it, you’re testing all kinds of wild ideas. Then you get a winner that makes somebody uncomfortable, and you have to sweep the results under the rug.

The problem isn’t the wasted effort and mental energy. It’s the cognitive dissonance it creates, and the unpleasant realizations that come from resolving that dissonance.

Yesterday, your growth team believed they were doing challenging, rigorous research and experimentation to drive a data-driven company forward. Today, as they make their peace with your decision to ignore conclusive data, they’re reevaluating that belief.

They’ll lose motivation. They’ll look around for other opportunities. Eventually, they’ll move on, and you’ll be left with people who are resigned to cherry picking data in exchange for a paycheck.

The remedy is to talk openly and honestly about what’s out of bounds - for your brand, and for your website. This is hard! For example, would you …

… if it increased conversions? Yes, no, maybe?

Give your optimization team clear constraints, and they’ll take you as far as you can go. But if you insist on moving the goal posts mid-game, all the best players will march off the field.
