Tests cannot, and should not, stand in for human analysis and intuition, just as human intuition cannot, and should not, be confused with data-driven test results.

Retailers can use the knowledge they gain by first digging into historical data to deduce what behaviors to expect from certain changes. They can then formulate educated hypotheses and measure their tests with the proper KPIs, which results in productive, insightful tests.

The combination of human analysis and intuition with the test lessons themselves is how retailers can extract the most value from their optimization tests. 

Here is a step-by-step overview of how data can be viewed and used for optimal results before, during, and after the research process.


#1 Identify Business Targets and Areas for Growth

A good starting point for merchants is to define the business goals they want to optimize for and to evaluate their site data to find out which on-site elements directly influence those goals.

For example, if a retailer decides they want to increase average order value, they can start by looking at site elements that specifically affect either the number of items purchased per order or the price tier of goods that are frequently purchased.

These elements may include product reviews displayed on product description pages or on the cart page, or the goods listed at the top of category pages.

Once the merchant narrows down which elements they believe directly affect their performance in terms of the goal they have chosen, they can then begin to hypothesize which variations of those elements would be best to test.
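As a rough sketch of the starting analysis, average order value can be decomposed into the two drivers mentioned above: items per order and the average price of those items. The order data here is entirely hypothetical, purely for illustration.

```python
# Hypothetical order data: each tuple is (items in order, order total).
orders = [
    (1, 25.00),
    (3, 90.00),
    (2, 40.00),
    (1, 120.00),
]

total_revenue = sum(value for _, value in orders)
total_items = sum(items for items, _ in orders)

aov = total_revenue / len(orders)          # average order value
avg_items = total_items / len(orders)      # driver 1: items per order
avg_item_price = total_revenue / total_items  # driver 2: price tier

print(f"AOV: {aov:.2f}")
print(f"Avg items/order: {avg_items:.2f}")
print(f"Avg item price: {avg_item_price:.2f}")
```

Looking at which driver lags helps decide whether to test elements that encourage larger baskets (e.g. cart-page recommendations) or elements that surface higher-priced goods (e.g. category sort order).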


#2 Create Element Variations That are Data-Driven

The most important mistake to avoid when forming hypotheses about which variations to test is confirmation bias.

For example, if a retailer is primarily known for selling goods to women, it makes little sense to test male-focused imagery or copy against the status quo, since most of the site's traffic is probably women.

Choosing test variants while already knowing which will win is an implicit bias, because the test only reinforces what you already believe. By contrast, if a merchant starts with their purchase data and digs into it by segment, they can frequently discover trends that help them form data-driven hypotheses and yield useful insights.

An example of this would be a merchant who discovers certain first-time customers' brand or product affinities and decides to test whether bringing those particular brands or products to the top of their homepage (or other key landing pages) drives more conversions than when other content is in that real estate.

In other words, will conversions increase if the brands or lines that first-time converters often purchase are emphasized when consumers first visit the merchant’s site?

This is one small way merchants can use data, in conjunction with their own analysis, to better inform the variations they build into their tests.
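The segment-mining step described above can be sketched in a few lines. The purchase records, customer IDs, and brand names below are all hypothetical; the point is simply to show how counting brands within the first-time-buyer segment surfaces candidates to feature in a homepage test variation.

```python
from collections import Counter

# Hypothetical purchase records: (customer_id, is_first_purchase, brand).
purchases = [
    ("c1", True,  "BrandA"),
    ("c2", True,  "BrandA"),
    ("c3", True,  "BrandB"),
    ("c1", False, "BrandC"),
    ("c4", True,  "BrandA"),
]

# Count which brands first-time buyers gravitate toward.
first_time_brands = Counter(
    brand for _, is_first, brand in purchases if is_first
)

# The most common brands in this segment are candidates to feature
# at the top of the homepage in the test variation.
top_brands = [brand for brand, _ in first_time_brands.most_common(2)]
print(top_brands)
```

The hypothesis then becomes concrete and falsifiable: does a homepage variant featuring `top_brands` convert first-time visitors better than the current content?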


#3 Derive Deeper Insights Than The Usual KPIs

Testing and optimizing site components for common KPIs, such as conversion rate or average order value, is standard practice for most merchants today. The most fascinating insights come when merchants use tools such as Nosto's Merchandising Insights feature to dig into category-, brand-, or product-specific findings that tell a more comprehensive story.

Let's assume a retailer is known for one specific product line (as demonstrated by what consumers purchase most) but also needs to sell their other products. They can then build a test around different ways to nudge site visitors toward the less popular product lines.

For instance, assume an imaginary merchant is known for streetwear but also carries athletic apparel that sells much less frequently. If this merchant were to set up an A/B test of two homepage banner variations, each showing one of the clothing lines, and set the test to optimize for conversion rate, the test would likely show that the banner displaying the line that already sells at a higher rate wins.

Nevertheless, if the retailer were to divide the customers who clicked on one of the banners into two groups (those who clicked the streetwear banner and those who clicked the athletic apparel banner), they could start to draw more valuable insights into exactly what the banner variations accomplished.

For example, are visitors who clicked on the banner for the athletic attire more likely to actually buy athletic clothing?

If it turns out that those who click the athletic apparel banner actually purchase athletic clothing at a higher rate, then the retailer now knows how to get more site visitors to purchase that clothing line.

Sure, the athletic apparel banner does not drive as many total sales as the streetwear banner, but if the retailer can find a way to show that banner to the consumers most likely to click it, they significantly improve their chances of attracting those consumers to a line of goods they need to sell more of.
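The segment analysis described above amounts to computing, for each banner, the share of its clickers who went on to buy that clothing line. The event log below is hypothetical sample data, just to illustrate the calculation.

```python
# Hypothetical click log: (banner_clicked, category_purchased_or_None).
events = [
    ("streetwear", "streetwear"),
    ("streetwear", None),
    ("streetwear", "streetwear"),
    ("athletic",   "athletic"),
    ("athletic",   "athletic"),
    ("athletic",   None),
    ("streetwear", None),
    ("athletic",   "athletic"),
]

def category_purchase_rate(events, banner):
    """Share of visitors who clicked `banner` and then bought that line."""
    clicks = [purchase for b, purchase in events if b == banner]
    return sum(1 for purchase in clicks if purchase == banner) / len(clicks)

for banner in ("streetwear", "athletic"):
    print(banner, f"{category_purchase_rate(events, banner):.0%}")
```

In this toy data the athletic banner's clickers buy athletic apparel at a higher rate than the streetwear banner's clickers buy streetwear, even though streetwear generates more total sales overall, which is exactly the kind of insight a single aggregate conversion rate would hide.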

In short, by digging a layer deeper into their test data rather than concentrating only on general KPIs, merchants can obtain deep product and brand insights that guide nuanced eCommerce merchandising approaches.

Over to you

Implementing a successful testing plan is a balancing act for online retailers. On one hand, there are retailers who recognize the value of testing and optimizing but, with no clear purpose or objective in mind, run tests indiscriminately. Such experiments frequently produce inconclusive findings or unhelpful observations because those running them don't know what they're looking for.

On the other hand, there are retailers with very strong convictions who only run experiments to validate their existing biases. Such experiments are unproductive because those who run them don't give the tests space to do what they're supposed to do.

To get the most out of testing, however, merchants should combine human insight and intuition with the objectivity of test results; that is how they can unlock the full potential of testing variations of their on-site experience.

Soon, HukApps will launch an A/B Testing service that will help you understand how specific site-wide experience variations boost KPIs such as conversion rate, average order value, average revenue per visit, and more. With our Continuous Optimization feature, winning variations are implemented immediately, mitigating the risk of revenue loss from under-performing variations. Stay tuned!