Unlock the Hidden Power of Tertiary Metrics: Outperform Primary Metrics in A/B Testing for Breakthrough Results

If you have been around the A/B testing scene as long as I have, then you have probably heard the statistic that 25% of A/B tests end up with a statistically significant winning test variant. This is an average success rate across all Optimization programs, regardless of size and industry. I have personally worked with some organizations that have slayed this number, and I have worked with others that are well below it. All that being said, whether you’re new to this space or a testing (pun intended) optimization veteran, this is a good statistic to keep in mind as you progress in your learning. Let’s pause for a moment and think a bit more about what a 25% average win rate means to you and your program.

AVERAGE WIN RATE FOR A/B TESTS

A statistically significant test winner occurs when the primary metric of the test, say conversion rate, shows a lift in performance as compared to the control group. That lift in performance is often at the 95% confidence level or above, hence it is statistically significant as we have discussed in a previous article.
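To make "statistically significant at the 95% confidence level" concrete, here is a minimal sketch of the underlying math: a two-proportion z-test comparing a variant's conversion rate against control. The function name and the visitor/order counts are hypothetical, for illustration only; your testing platform runs an equivalent calculation behind the scenes.

```python
import math

def conversion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate a
    statistically significant lift over control A?
    (Illustrative sketch; testing tools do this for you.)"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # One-sided p-value via the standard normal CDF
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return p_b - p_a, p_value

# Hypothetical numbers: control converts 500 of 10,000 visitors;
# the variant converts 570 of 10,000
lift, p = conversion_z_test(500, 10_000, 570, 10_000)
print(f"lift={lift:.4f}, p={p:.4f}, significant at 95%: {p < 0.05}")
```

A p-value below 0.05 corresponds to the 95% confidence threshold mentioned above.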

If 25% of the A/B tests run on a site result in a winner, what do we do with the other 75% that did not produce one? I like to call these tests INCONCLUSIVE AT THE PRIMARY METRIC LEVEL. I don’t like the term “losers” at all. Once you call a test a “loser,” your executive team stops listening. Trust me on this one, I see it happen all the time in meetings!

Do we tell our senior executive team that those tests were inconclusive and simply move on to the next one? A tremendous amount of time and effort was invested by your Optimization and Analytics teams in order to get those tests live on the site. How can you best leverage those resources already spent and turn those opportunity costs into benefits and/or profits? Talk about reducing the stress of your executive leadership team and instead adding tremendous value! 

Building a next-level Optimization program starts with these two concepts:

1.    A well-integrated analytics and testing program

2.    A strategic thinker with business acumen who can come up with the right “Tertiary test metrics”

WELL-INTEGRATED ANALYTICS AND TESTING PROGRAM

Let’s start with looking at what a well-integrated analytics and testing program looks like. And we’ll start from the end state. The output of a well-integrated analytics and testing program is test ideation. Typically, the most impactful test ideas arise from data insights. A well-integrated analytics and testing program will produce data driven insights from the actual tests that you run on your site.

Obviously, you are not going to wire up the entire experience into your optimization platform. This is why the analytics integration is so important. We still see some big companies run tests separately from their analytics program. How are they even measuring? Or producing deep insights? Or driving test ideation? They are losing a tremendous amount of value and wasting resources.

It may help at this point to look at a specific example of a tool that we recommend to our clients. The tool is often referred to as A4T which stands for Analytics For Target. This is an Adobe product and we love the integration it provides for our clients that are running A/B testing programs on their digital properties. It allows for deep dive analysis on testing results using ANY analytics metric that is in place on the site. It truly is the key for capturing value from the other 75% of tests that are inconclusive.

ADOBE ANALYTICS FOR TARGET (A4T)

What I really dig about Adobe’s A4T integration is that it allows for analysis of A/B test results, just as if you were doing an analysis of any other campaign, such as a search marketing campaign or a social media campaign. This ensures that you are presenting data in a way that is consistent with the rest of the organization. This makes it much easier for your senior executive team to understand quickly, which is a HUGE win!

While most testing tools provide the ability to look at results within the tool itself, typically the layout is different, navigating the interface can be a bit clunky and frustrating, and the available metrics/dimensions differ from what is found in your analytics platform. This can cause a lot of wasted time matching data sets, or explaining conflicting data to your senior leadership team, which can raise more questions and even erode confidence in the data itself. Having your senior executives trust the data and insights your A/B Testing Manager provides is paramount. It’s a must have.

Once you have a well-integrated analytics and testing implementation in place, it becomes much easier to identify the proper Tertiary metric(s) for your A/B tests. And I’m not sure the word “tertiary” does this concept justice. As I mentioned earlier, these metrics are often much more important than the primary and secondary metrics.

Let’s take a quick step back though. A Primary metric for an A/B test is the metric you have chosen to determine whether or not the test is a winner. If you feel that your test idea will directly drive orders on the site, then your primary metric may be Conversion Rate. Secondary metrics would be metrics such as Average Order Value (AOV), Revenue, and perhaps Funnel Fallout rates.

Tertiary metrics can best be identified by answering this question:

“If my A/B test is inconclusive, what would I like to learn from the results that would either help me iterate on this test idea or would allow me to provide my senior executive team with actionable insights?”

Again, you don’t want to get into the habit of reviewing A/B tests with your senior executive team and simply passing over all of the tests that were inconclusive. Trust me here, the juice is definitely worth the squeeze. Keep in mind that a lot of resources are spent in the ideation, design, development, running and analysis on the 75% of A/B tests that don’t produce a clear winner. Your goal should be to find the value in those test results.

Here’s a real-world example. An organization had the overall business goal for the year of driving Subscription Signups to their publications. They decided to run an A/B test designed to add more personalized content on the article pages, with the hypothesis that the personalized content would drive signups. They put strategic thinking time into the question above and decided that if they could also determine how many articles visitors were reading, and whether the test design had an impact there, that would be a huge win.

Key Metrics for Subscription Signup A/B Test

Primary metric – Subscription Signup Rate

Secondary metric – Subscription Process Start Rate

Tertiary metric – Depth of Visit (as measured by Articles read)
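To make the three levels concrete, here is a rough sketch of how each metric rolls up from raw per-variant event counts. All names and numbers below are hypothetical, purely for illustration; in practice these would come straight out of your analytics platform.

```python
# Hypothetical per-variant event counts (illustrative numbers only)
variants = {
    "control": {
        "visits": 20_000, "signup_starts": 900,
        "signups": 300, "articles_read": 44_000,
    },
    "personalized": {
        "visits": 20_000, "signup_starts": 940,
        "signups": 310, "articles_read": 52_000,
    },
}

for name, v in variants.items():
    primary = v["signups"] / v["visits"]            # Subscription Signup Rate
    secondary = v["signup_starts"] / v["visits"]    # Subscription Process Start Rate
    tertiary = v["articles_read"] / v["visits"]     # Depth of Visit (articles/visit)
    print(f"{name}: signup={primary:.2%}, start={secondary:.2%}, "
          f"depth={tertiary:.2f} articles/visit")
```

Note that all three metrics share the same denominator (visits), which keeps the comparison across variants apples-to-apples.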

I am not going to lie, I am usually more than a bit bummed when an A/B test does not produce a significant winner. I want my clients to succeed big time. Presenting their senior executive team with a test winner and projected annual impact is a huge win. Everyone gets excited in the meeting, and the win often provides much needed momentum. However, I want my clients to succeed closer to 90% of the time, not just 25% of the time. In order to do this, there must be a strategy in place that helps identify TERTIARY METRICS for A/B tests run on the site. It’s not always easy to do, but putting the time in up front will pay huge dividends down the road.

So, this test was inconclusive at the Primary and Secondary metric levels, as it did not produce a statistically significant winner. However, as they dug deeper into the data, they came across a significant LEARNING. Depth of visit, as measured by articles read per visit, increased significantly during the test. This is a huge win for the team. They learned that giving their readers personalized content kept the reader more ENGAGED on the site. As they discussed this with their senior executive team, the whole group got excited because if their readers were staying on the site longer, then they were receiving more VALUE from the site content. This absolutely will lead to more subscriptions down the road.
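For readers who want to sanity-check a tertiary-metric learning like this one, here is one hedged way to do it: a large-sample z-test on mean articles read per visit, using a normal approximation with a Welch-style standard error. The function name and summary statistics are invented for illustration; a real analysis would pull the per-visit numbers from the analytics platform.

```python
import math

def depth_z_test(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Large-sample z-test on mean articles read per visit.
    Welch-style standard error with a normal approximation,
    which is reasonable at typical A/B-test sample sizes."""
    se = math.sqrt(sd_a ** 2 / n_a + sd_b ** 2 / n_b)
    z = (mean_b - mean_a) / se
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))  # one-sided
    return z, p_value

# Hypothetical summary stats: control reads 2.2 articles/visit,
# the personalized variant 2.6, each with ~20,000 visits
z, p = depth_z_test(2.2, 1.8, 20_000, 2.6, 1.9, 20_000)
print(f"z={z:.2f}, significant at 95%: {p < 0.05}")
```

The same A4T data that surfaced the lift can supply the means and standard deviations, so confirming the learning requires no extra instrumentation.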

Everyone in the room came to an agreement that this test was not a failure. The strategic thinking on the part of your Optimization Director’s team, which led to Depth of Visit as the tertiary metric, was the key. Your senior level executives picked up a key finding in driving reader engagement, and they are eagerly awaiting the test iteration. They are now fully engaged with your program.

Well, I hope this information was helpful to you as you continue to guide your Director of Optimization along a solid growth path as well as add value to your organization. Having a well-integrated analytics program is paramount to building a first-class testing program. Once that foundation is in place, Tertiary metrics will transform your program. You can now drop the words “test loser” from your vocabulary and move on to using “What an incredible learning we picked up on this test. I’m so excited to show you the new test iteration based upon the data!”

Good luck and happy testing!!
