This is embarrassing... but raised our revenue by 40%
I'm honestly embarrassed that I haven't thought about this A/B test before.
I'm not even kidding. I'm genuinely embarrassed that I never thought about this before. It's so simple, and yet it increased our in-sample revenue per session by 40% in a split test.
Here's the story.
A few weeks ago, I was watching HotJar recordings (a great tool, btw!). If you've ever sat through a HotJar recordings session, you know how boring it can be.
But eventually, I had an epiphany. In the recordings, I noticed how our visitors had to scroll down, up, and then down again on a product page to see the different available colors.
"Holy shit! That must be annoying!" I thought. See for yourself:
Annoying, right? That needed to change. But completely redesigning our product page felt like a bit too much without even the slightest hint that it would change anything. And what if it actually lowered conversion? Illogical, but you never know.
So, I started an "MVP" test by flipping the order of the color and size options on one of our most-visited products, like this:
Not much, but hopefully enough to give a clue whether it's worth pursuing that rabbit hole.
The result? A 47% in-sample increase in revenue per session 🤯
Alright, rabbit hole hereby engaged. I decided to prioritize a full-scale test across our whole store. But that's where I ran into the first problem.
Apparently, my Shopify theme doesn't play well with Google Optimize and wouldn't let me flip the order of elements across the whole site using their plug-and-play editor. So I had to get messy and write my own JavaScript code.
After a few hours of coding, debugging, and debugging some more, and maybe even a minor miracle, I succeeded! I now had a piece of JavaScript that would swap the order of the color and size options on any product that has both. Great success!
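For the curious, the core idea boils down to something like the sketch below. The class name and label matching are placeholders, since every Shopify theme structures its product form differently, so treat it as an illustration of the approach rather than the exact script I run in production.

```javascript
// Sketch: move the color selector above the size selector on a product page.
// The '.product-form__input' selector and the label matching are placeholders;
// your theme's markup will differ.
(function reorderProductOptions() {
  const options = document.querySelectorAll('.product-form__input');

  let sizeOption = null;
  let colorOption = null;
  options.forEach((option) => {
    const text = (option.textContent || '').toLowerCase();
    if (!sizeOption && text.includes('size')) sizeOption = option;
    if (!colorOption && text.includes('color')) colorOption = option;
  });

  // Only act when both options exist and color currently sits below size.
  if (
    sizeOption &&
    colorOption &&
    sizeOption.compareDocumentPosition(colorOption) & Node.DOCUMENT_POSITION_FOLLOWING
  ) {
    sizeOption.parentNode.insertBefore(colorOption, sizeOption);
  }
})();
```

Wired into the testing tool so it only runs for the variant, the control keeps the original order.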
Please retweet my thread if you like this
Just a small thing before we proceed to the result.
If you find value in this, please retweet my Twitter thread version of this newsletter here. It helps me spread the value and motivates me to write more ❤️
... and the results!
The results after 5K sessions?
Size on top: $2.58/session
Color on top: $3.60/session
Probability to be best (~significance, but Bayesian): 95%
That's a 40% in-sample increase and a significant result by just flipping the order of the options 🤯
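If you're wondering what "probability to be best" means in practice: it's the share of plausible scenarios, given the data, in which the variant really is the better one. I don't know the exact model Google Optimize runs under the hood, but a simple bootstrap simulation over per-session revenue gives you a number in the same spirit. The data below is entirely made up, just shaped to land near the averages above.

```javascript
// Rough sketch of a Bayesian-flavored "probability to be best": a bootstrap
// simulation over per-session revenue. NOT the exact math Google Optimize
// uses; this only builds intuition.
function probabilityToBeBest(variantA, variantB, draws = 2000) {
  // Resample one "plausible world" from the observed sessions and return its mean.
  const resampledMean = (data) => {
    let sum = 0;
    for (let i = 0; i < data.length; i++) {
      sum += data[Math.floor(Math.random() * data.length)];
    }
    return sum / data.length;
  };

  let bWins = 0;
  for (let d = 0; d < draws; d++) {
    if (resampledMean(variantB) > resampledMean(variantA)) bWins++;
  }
  return bWins / draws; // share of simulated worlds where B beats A
}

// Made-up data: mostly $0 sessions, occasional orders.
const fakeSessions = (n, buyRate, avgOrder) =>
  Array.from({ length: n }, () => (Math.random() < buyRate ? avgOrder : 0));

const sizeOnTop = fakeSessions(2500, 0.03, 86);   // ≈ $2.58/session
const colorOnTop = fakeSessions(2500, 0.03, 120); // ≈ $3.60/session

console.log(probabilityToBeBest(sizeOnTop, colorOnTop).toFixed(2));
```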
Lesson: place the option your visitors engage with most (for us, color) at the top.
Now, I could let it run longer and get more data. 5K sessions isn't a lot, and Google recommends running experiments for at least 14 days to account for seasonalities, etc.
But I'm not here to get exact numbers; I'm here to stack minor, iterative improvements. And I think the logic backs the result up.
By putting colors above sizes on the PDP, I reduce the scrolling needed to go through the colors. Essentially, I decrease friction. I also make it less likely that a visitor misses that the product comes in multiple colors, since the colors are in the viewport more often.
A sidenote about significance
It's a misconception that the "result" is "true" as long as the significance/p-value clears the 95%/5% threshold. It isn't, necessarily. For the formulas behind those numbers to hold up in the "real world," the underlying data-generating process must meet particular assumptions. If those assumptions aren't satisfied, the formula breaks, and you end up with the statistical equivalent of 2 + 2 = 5.
A non-theoretical way of thinking about it: the more data you have, the less those assumptions matter. That's also why Google recommends keeping an experiment alive for at least 14 days, to increase the likelihood that you have enough data points.
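To make that concrete: revenue per session is an ugly distribution, mostly zeros with the occasional big order, which is exactly where textbook assumptions get shaky on small samples. The toy simulation below (all numbers invented) shows how much the measured revenue per session can swing at a few hundred sessions versus a few thousand.

```javascript
// Toy simulation (all numbers invented): per-session revenue is mostly $0
// with the occasional order, i.e. a very skewed distribution. Watch how much
// the measured revenue/session swings at 200 vs 5,000 sessions.
function simulateSession() {
  return Math.random() < 0.03 ? 50 + Math.random() * 150 : 0;
}

function revenuePerSession(n) {
  let total = 0;
  for (let i = 0; i < n; i++) total += simulateSession();
  return total / n;
}

for (const n of [200, 5000]) {
  const reruns = Array.from({ length: 1000 }, () => revenuePerSession(n));
  const min = Math.min(...reruns).toFixed(2);
  const max = Math.max(...reruns).toFixed(2);
  console.log(`n=${n}: revenue/session ranged from $${min} to $${max} across 1,000 reruns`);
}
```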