
No more interesting results, please


If your A/B test reporting looks like this, please stop:

We observed a slight lift in Average Order Value, not at significance. Conversions were slightly down, but Visits to Cart were up. Clicks on the new “Your Perfect Shoe” module were higher on mobile than on desktop. This is an interesting result and we will explore it in future tests.

I’ve written reports like this myself, but I’m not going to do it anymore. Here’s what I’ll say instead:

The test yielded no actionable results; the team will conduct a root cause analysis to identify why this happened and prevent it from happening again.

The problem with interesting results

Experimentation is expensive. Someone’s paying a lot of money to access your precious brain and its ability to make magical, measurable things happen on websites. They’re almost certainly expecting more out of that investment than “Hmm yeah that’s weird lol.”

Even if you’re working pro bono, there’s opportunity cost to consider. You can only run so many tests per quarter. How many test slots are you willing to yield to inconclusive ¯\_(⊙︿⊙)_/¯ tests?

Action over interest

A successful test leads to a new perspective. A recommendation, backed by data. An assertion about what the company should do next.

If your experiment results stop short of this, you’ll get limited attention from decision makers in the organization. (And you’ll be on shaky ground when budgets are being set.)

If you’re dedicated to running experiments that yield this level of insight, you’ll probably change some things about how you set up and prioritize tests. You’ll generate more informative data, make recommendations based on it, and stand behind them. (People with budgets will notice.)

Inconclusive tests can yield actionable results

So how do you avoid interesting results? Does that mean every test has to have a conclusive winner?

Not exactly. If you test 7 very different experiences on your Shopping Cart page, and see no measurable impact on conversions from any of them, you have an actionable result.

The action you’ll take is “stop testing on the Cart.” And maybe “schedule podcast appearance to brag about our super-optimized Cart.”

But you can’t draw this conclusion or confidently take this action if you only tested a single change. You’re stuck noting some interesting findings and trudging forward to the next test, hoping something better happens.
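If you want to put numbers behind “no measurable impact,” here’s a minimal sketch. The conversion counts, variant names, and the +1 percentage point threshold are all hypothetical, and the intervals use a plain normal approximation; the point is just that each inconclusive variant still bounds how big the lift could plausibly be.

```python
# Minimal sketch: bound the plausible lift from a set of inconclusive Cart tests.
# All numbers below are hypothetical; swap in your own variant data.
from math import sqrt

Z_95 = 1.96  # two-sided 95% normal quantile

# (variant name, conversions, visitors) -- control first
control = ("control", 1180, 24000)
variants = [
    ("free-shipping banner", 1210, 24000),
    ("one-page checkout",    1195, 24000),
    ("trust badges",         1174, 24000),
]

def lift_interval(conv_a, n_a, conv_b, n_b):
    """95% CI for the absolute difference in conversion rate (variant - control)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return diff - Z_95 * se, diff + Z_95 * se

_, c_conv, c_n = control
for name, v_conv, v_n in variants:
    lo, hi = lift_interval(c_conv, c_n, v_conv, v_n)
    print(f"{name:22s} lift CI: [{lo:+.3%}, {hi:+.3%}]")

# If every interval already rules out the smallest lift you'd care about
# (say +1 percentage point), "stop testing on the Cart" is a defensible
# recommendation -- not a shrug.
```

With several very different variants all bounded well below the smallest lift worth chasing, “stop testing on the Cart” becomes a recommendation you can stand behind. One inconclusive variant can’t support that claim.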


Have a lovely weekend. May your life be interesting and your data be actionable. Next week we’ll look into test prioritization - when it matters, how not to do it, and how it both reflects and shapes your team culture. ☮️

