A/B testing, also known as "split testing," is a highly effective way to pick the best version of a website, app, or marketing strategy. When making important business decisions, tangible evidence helps point business owners in the right direction.
A/B testing pre-dates the creation of websites and apps, having first been implemented in farming, medicine and other industries. Business owners and marketers alike now utilize the strategy to improve sales and overall customer satisfaction. At its core, A/B testing compares two different versions of the same concept, whether it be two versions of a web design layout, two versions of app content, or something else entirely. A/B testing helps determine which version is the “superior” one, with superiority being defined by the goals of those conducting the A/B test.
Generally, apps and websites are ideal for A/B testing as they record large amounts of data in a short period of time.
The first step to conducting an A/B test in marketing is choosing which element to test. The chosen element may be something as small as font size, or as large as the entire layout of a website. Then, you must create the two versions of that element to compare.
Next, you need to define a metric of success. Success may be determined by a multitude of factors: time spent on an app or website, frequency of checking an app or website, probability of purchasing a product from an app or website and the number of “clicks” on a certain element are some common metrics of success that marketers use. Much of this data is easily accessible using app or website analytics.
Marketers randomly pick two groups, and assign one of the elements to each group. From there, marketers take a look at analytics and determine which version of the element achieves more desirable results. Frequently, the “better” version of the element is then implemented full-time.
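The random split described above can be sketched in a few lines. This is a minimal illustration, not a production assignment system; the function and variable names are hypothetical.

```python
import random

def assign_groups(user_ids, seed=42):
    """Randomly shuffle users, then split them into a control
    group (A) and a variation group (B) of equal size."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = list(user_ids)
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

users = [f"user_{i}" for i in range(10)]
group_a, group_b = assign_groups(users)
print(len(group_a), len(group_b))  # prints: 5 5
```

Each user lands in exactly one group, and because the shuffle is random, neither group is systematically different from the other before the test begins.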
Choosing potential metrics of success is one of the most important steps in the A/B testing process. It is best to start with broad business goals (such as increasing sales). Then narrow down those broad goals to tangible results (such as website traffic).
The percentage of users who engage in the desired behavior is the "conversion rate." So, if the metric of success is the number of clicks on a specific button, and 40 out of 100 users in one group click on that button, that is a 40% conversion rate. The conversion rate is the number that marketers are most concerned with in A/B testing, but it is also important to take into account the sample size and margin of error. If a control group has a 40% conversion rate and the variation group has a 50% conversion rate, the difference is 10 percentage points, which works out to a 25% "lift." Lift refers to the percent change in the variation's conversion rate relative to the control's.
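The arithmetic above, plus a rough check of whether the difference could be due to chance, can be sketched as follows. The two-proportion z-test shown here is one common way to account for sample size; the function names are illustrative.

```python
import math

def conversion_rate(conversions, visitors):
    """Fraction of visitors who performed the desired action."""
    return conversions / visitors

def lift(control_rate, variant_rate):
    """Relative lift: percent change of the variant over the control."""
    return (variant_rate - control_rate) / control_rate

control = conversion_rate(40, 100)  # 0.40
variant = conversion_rate(50, 100)  # 0.50
print(f"{lift(control, variant):.0%}")  # prints: 25%

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test statistic for the difference in rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = z_score(40, 100, 50, 100)  # about 1.42, below the 1.96 cutoff
```

With only 100 users per group, a z-score of roughly 1.42 falls short of the conventional 95% significance threshold of 1.96, so even a 25% lift could plausibly be noise; this is why sample size matters as much as the headline conversion rates.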
While marketers frequently like to conduct A/B testing for a variety of elements, sometimes more than one at a time, the most significant results will arise when only changing one or two independent variables. If the test changes the layout, font size, button colors and text color, then it is difficult to know which specific element or elements created the differences. Changing elements one at a time may mean running a larger number of tests overall, but it makes the results much easier to interpret.
When assigning groups, it is crucial to assign them randomly to avoid bias in the results. One potential problem to consider is any differentiation between mobile and desktop users when testing an app or website. It is entirely possible that people spend longer on a desktop device than a mobile one, which could affect results. Therefore, it is best practice to ensure that neither group has significantly more desktop or mobile users than the other. This practice is known as “blocking.”
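Blocking can be implemented by randomizing within each device type separately, so the desktop/mobile mix is the same in both groups. The sketch below assumes users arrive as (id, device) pairs; the names are hypothetical.

```python
import random
from collections import defaultdict

def stratified_assignment(users, seed=7):
    """Assign users to A/B groups separately within each device
    stratum, so neither group ends up with disproportionately many
    desktop or mobile users (a simple form of blocking)."""
    rng = random.Random(seed)
    by_device = defaultdict(list)
    for user_id, device in users:
        by_device[device].append(user_id)
    group_a, group_b = [], []
    for ids in by_device.values():
        rng.shuffle(ids)          # randomize within the stratum
        mid = len(ids) // 2
        group_a.extend(ids[:mid])
        group_b.extend(ids[mid:])
    return group_a, group_b

# 4 desktop users and 8 mobile users
users = [(f"u{i}", "desktop" if i % 3 == 0 else "mobile") for i in range(12)]
a, b = stratified_assignment(users)
```

Here each group receives exactly half of the desktop users and half of the mobile users, so any difference in results cannot be explained by one group skewing toward a particular device.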
Though randomization helps eliminate confounding factors, it is still important to consider whether any external factors affected the data. Something as seemingly insignificant as the time of year could skew results. Conducting large A/B tests frequently can help control for external factors.
While the general goal is increasing outreach and sales, A/B testing can improve a business in many ways. A/B testing may be used to determine the likelihood of making a purchase, but can also be extended to the likelihood of subscribing to a newsletter or clicking on an ad. It may measure the amount of traffic a website generates, or the amount of time users spend on an app. It is these small differences that add up to a larger difference in bottom-line sales.
With technology providing more analytics than ever, it is easy to leverage this insight to promote business growth. While A/B testing slight changes may seem like overkill, the results are so easy to track that there is little reason not to, especially when it could gain customers and increase sales.