
A/B testing can help marketers optimize content and increase conversion rates. However, there are a few things to consider when conducting an A/B test.

When it comes to the design, copy and structure of landing pages, ads, emails or CTAs, marketers often rely on their own gut feeling and intuition. That is not necessarily a bad approach, but it needs to be put to the test now and then. To find out what users, leads and customers really prefer, A/B testing is the perfect tool.

A/B testing helps companies increase their conversion rate and get to know the behavior of their target groups better. According to a study by Econsultancy and RedEye, 72 percent of marketers surveyed even rate A/B testing as "high-value" (PDF download).

What is A/B testing?

A/B testing involves comparing two versions of a web page, email, etc. to determine which version produces better results (conversion rate, impressions, etc.). Half of the users are shown the original version (control) and the other half see a slightly modified version (variation) of the content. A statistical analysis is then performed to determine which variation is more suitable for a particular goal. After A/B testing, the successful variation is chosen for all users.
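As a rough illustration of that statistical step, here is what such a check could look like in Python with a standard two-proportion z-test. All visitor and conversion counts below are invented, and the test shown is one common approach rather than the only one:

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical raw counts from an A/B test (invented numbers)
control_visitors, control_conversions = 10_000, 460   # original (control)
variant_visitors, variant_conversions = 10_000, 530   # modified version (variation)

p_a = control_conversions / control_visitors
p_b = variant_conversions / variant_visitors

# Pooled two-proportion z-test: is the difference in conversion rates
# larger than random fluctuation alone would explain?
p_pool = (control_conversions + variant_conversions) / (control_visitors + variant_visitors)
se = sqrt(p_pool * (1 - p_pool) * (1 / control_visitors + 1 / variant_visitors))
z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))  # two-sided p-value

print(f"control: {p_a:.2%}, variation: {p_b:.2%}, z = {z:.2f}, p = {p_value:.4f}")
# A p-value below the chosen significance level (commonly 0.05) suggests
# the variation's better result is unlikely to be pure chance.
```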

Here are some tips marketers can follow to optimize their A/B testing:

  1. Select and isolate the right elements

When planning the A/B test, you need to consider which asset should be changed. It often makes sense to select high-traffic areas that nevertheless have low conversion or click-through rates. In general, the larger the user group, the more informative the insights.

Within any area/asset, there are many different elements that can be tested. These include, for example, colors of CTA buttons or titles of editorial content. In order to be able to assess how effective a change is, the corresponding element must be considered individually.

In other words, the variation should not contain too many changes. In fact, the best results and learnings come from changing and testing a single element at a time. Otherwise, you cannot be sure which element was responsible for a change in performance.

  2. Small changes can make a big difference

You should also be aware that even small changes can have a demonstrable impact on conversion rates.

Marketing expert Perry Marshall cites an A/B test in his article that compared the conversion rates of two ads. The only difference between the ads was a single comma. Despite this seemingly irrelevant detail, the version with the comma improved the conversion rate by 0.28 percentage points over the original (source: Wordstream). At first glance, this doesn't sound like a noticeable change, but with a large enough audience it can add up, especially combined with other optimizations.
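To see how such a small lift adds up, a quick back-of-the-envelope calculation helps. The traffic figure below is invented; only the 0.28-percentage-point lift comes from the example above:

```python
# Hypothetical monthly traffic to the asset (invented figure)
monthly_visitors = 500_000
lift = 0.0028  # 0.28 percentage points expressed as a fraction

extra_conversions_per_month = monthly_visitors * lift
print(f"{extra_conversions_per_month:.0f} additional conversions per month")  # 1400
```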

  3. Focus on one target metric

It also makes sense to focus on just one goal. That means picking a specific metric that you can use to determine whether the variation is more successful than the original. The target metric itself can vary, from clicks on a button or link to product purchases and newsletter sign-ups.

For example, if you are testing two versions of a landing page and have chosen generated leads as the metric, you should concentrate only on leads in the evaluation and not on click or view rates.

  4. Formulate a clear hypothesis

Once the right elements have been selected, marketers need to formulate hypotheses that are as clear as possible, so they can be proven (or disproven) with the testing. These are usually intended to solve a specific problem. By formulating the hypotheses, you can later draw reliable conclusions from the test results.

What makes a good hypothesis?

Expectation: What should be achieved with the variation? Ideally, this expectation is analytically supported (customer studies, best practices, benchmarks, etc.)

Measurement: How will the outcome be measured? What value determines success (e.g., conversion, click-through rate, impressions, etc.)

Value added: What is the value of this change to customers and users (e.g., more intuitive view, positive reaction to a text or image, etc.)

 Example:

Based on low click-through rates on our site and best practices in our industry, we assume that changing our CTAs from a text link to a labeled button will result in a higher conversion. Success is easily measured by the click-through rate on the page. Buttons make our CTAs more obvious, and customers can more easily decide what to do next on our site.

A simple aid for formulating precise hypotheses is the Hypothesis Kit by Craig Sullivan.

  5. Allow enough time, …

… to arrive at the right results. You need a sufficiently large data set to draw your conclusions. Larger data sets not only provide more accurate results, but also make it easier to identify typical deviations from the average result.

However, meaningful results from A/B testing don’t usually happen overnight. Be patient when designing, running and evaluating A/B tests. Ending a test prematurely may feel like a time saver but could end up costing money.
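One common way to estimate how much data (and therefore how much time) a test needs is a power calculation done before the test starts. The sketch below uses Python's statsmodels library with purely illustrative planning figures; the baseline rate and the lift you want to detect would come from your own analytics:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical planning figures (invented): current conversion rate and the
# smallest lift worth detecting (the "minimum detectable effect")
baseline_rate = 0.046   # e.g. 4.6 % conversion today
target_rate = 0.053     # the rate the variation would need to reach

effect_size = proportion_effectsize(target_rate, baseline_rate)
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,             # accepted false-positive rate
    power=0.8,              # chance of detecting the lift if it is real
    alternative="two-sided",
)

print(f"~{n_per_variant:.0f} visitors needed per variant")
# Dividing this number by the asset's daily traffic gives a rough test duration.
```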

  6. …but don’t run tests unnecessarily long

If tests run too long, search engines might see them as an attempt to deceive the algorithm by offering two nearly identical pages. Once marketers have successfully completed an A/B test, it is therefore advisable to remove the poorer-performing version immediately.

Pay close attention to when the data set is large enough. How long this takes depends in particular on how many users access the selected content. Therefore, it’s important to select assets with heavy traffic, so the testing phase doesn’t have to be too long.

  7. Gather feedback from users

While A/B tests show what users prefer, they don’t explain why users prefer something. It can therefore be useful for marketers to gather feedback directly from users.

You can use online surveys, for example directly on the website, that ask users why they just clicked a certain CTA or filled out a form. This gives companies concrete information about why their users behave in a certain way, which can in turn be applied to further optimization.

Be mindful, though, that these surveys are not intrusive or too long, so users don’t see them as a negative interaction with your company.

  8. A/B tests are a learning process

If neither of the two tested versions performs better, the test is usually marked as inconclusive and the original version is retained. However, the effort was not in vain.

Even from failed A/B tests, companies can gain valuable insights for addressing target groups. Moreover, the data gained can be used to develop a new test that may look at other elements of the asset.

The key to success with A/B tests is to see them not as definitive answers but as action points for deciding whether (or not) to optimize, and as a way to keep looking for improvements to the user experience. It is therefore best not to treat A/B tests as one-off projects but to use them as an ongoing way to optimize your assets.


Use artificial intelligence with Uplift Modeling to find out which leads and customers would react positively to campaigns and emails, so you address the right contacts and get better results. Find out more in our white paper.



Milos Kuhn is a student and works for the Corporate Communications team at ec4u. He is currently studying media sciences at the Hochschule der Medien, Stuttgart.

Contact:
LinkedIn