The 5 steps to set up an A/B test


Every email marketer has experience with A/B testing. But what exactly is an A/B test? Why do you test in the first place? And what does an A/B test need in order to be successful? In this article we show how to set up an A/B test in 5 steps.

What is an A/B test?

The definition of an A/B test is:

“A form of split testing in which two variants of, for example, an (opt-in) page or ad are tested against each other to increase the conversion rate.”

This article is specifically about A/B testing emails. As the definition indicates, the goal of an A/B test is to improve your results. In the case of email, that improvement can take different forms, such as an increase in the number of opens, clicks, or sales.

Step #1: Determine why and what you are going to A/B test

It is important to determine in advance why you want to test. Do you want to increase sales? Then choose an email that contributes most to your commercial goal, or one where there is still a lot to gain.

Of course, you can test everything in every type of email, but some changes will have a bigger impact than others. That is why it is important to determine which aspect you want to test. Suppose you want to improve the number of clicks on the button in your email. You could test the call-to-action, but also the size or color of the button. That is why you need to know in advance what you want to test.

Step #2: Decide which item(s) you will test at the same time

At PI marketing we always choose to limit ourselves to testing one item: an A/B test, not a multivariate test. We do this because testing multiple items at once requires many more recipients, so it takes longer before your test is significant (the sketch at the end of this step illustrates why).

However, you can gather multiple results at once. For example, you can test the subject line and the button in the same campaign. You can also choose to split this into two separate tests: first test the subject line, then the button.
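To give a feel for why more recipients are needed as the difference you try to detect gets smaller, here is a minimal sketch in Python of a rough sample-size estimate. This is not PI marketing's tooling, and the 5% click rate and +20% lift in the example are made-up assumptions.

    from math import ceil
    from statistics import NormalDist

    def recipients_per_variant(baseline_rate, expected_lift,
                               significance=0.90, power=0.80):
        """Rough sample size per variant for comparing two click/conversion rates.

        baseline_rate: current rate of variant A (e.g. 0.05 = 5%)
        expected_lift: relative improvement you hope to detect (0.20 = +20%)
        significance:  confidence level you will require (0.90, 0.95, 0.99)
        power:         chance of detecting the lift if it really exists
        """
        p1 = baseline_rate
        p2 = baseline_rate * (1 + expected_lift)
        z_alpha = NormalDist().inv_cdf(1 - (1 - significance) / 2)  # two-sided
        z_beta = NormalDist().inv_cdf(power)
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

    # Example: a 5% click rate, hoping to detect a 20% lift at 90% significance
    print(recipients_per_variant(0.05, 0.20, significance=0.90))

The smaller the expected lift, or the more combinations a multivariate test adds, the larger this number gets.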

Step #3: Make a plan for your test

Before you begin your A/B test, it is useful to write a plan that captures your findings from steps #1 and #2: Why are you going to test? What are you going to test? And which item(s) will you test? You also write down your expected result: the hypothesis.

Your hypothesis could be: “When adjusting the color of the button from orange to red, I expect 20% more clicks.”
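If you like to keep this plan in a structured form, it could look like the sketch below. The field names and values are only illustrative, not a prescribed template.

    from dataclasses import dataclass

    @dataclass
    class ABTestPlan:
        why: str             # the goal from step #1
        what: str            # which email you test
        item: str            # the single item you vary (step #2)
        hypothesis: str      # the expected result
        significance: float  # the threshold you will hold the test to

    plan = ABTestPlan(
        why="Increase clicks on the email button",
        what="Weekly newsletter",
        item="Button color (orange vs. red)",
        hypothesis="Changing the button from orange to red gives 20% more clicks",
        significance=0.90,
    )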

Step #4: Set up the A/B test and monitor it closely

Especially in the first days of an automated campaign, it is important to check your campaign regularly. How quickly are you getting results? Is everything still running smoothly? This of course depends on the size of your test.

For example, you can keep track of the results in an Excel file with formulas that calculate significance, conversion rate, and so on. Or you can use an online tool such as Google Optimize or VWO.
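If you want to calculate significance yourself rather than rely on a spreadsheet or tool, a two-proportion z-test is one common approach. Below is a minimal sketch in Python; the click and send counts are made-up examples.

    from math import erf, sqrt

    def ab_significance(clicks_a, sent_a, clicks_b, sent_b):
        """Two-sided two-proportion z-test; returns the significance level
        (1 - p-value) of the difference between variant A and variant B."""
        p_a, p_b = clicks_a / sent_a, clicks_b / sent_b
        pooled = (clicks_a + clicks_b) / (sent_a + sent_b)
        se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return 1 - p_value

    # Made-up counts: 120 clicks out of 5,000 sends (A) vs. 150 out of 5,000 (B)
    print(f"Significance: {ab_significance(120, 5000, 150, 5000):.1%}")

If the returned value is above the threshold you chose in step #3 (for example 90%), you can call the test significant.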

Step #5: Review and document the results

In the previous step, you tracked the results of your A/B test. In this step, you review them. By documenting the results of every test, you keep an overview of what has already been tested and what has not.

It is useful to sort these results by what was tested: subject lines, layouts, or calls-to-action. Write down exactly what was tested, what the result was, and of course the winner! But also note striking results you did not expect. Whether a result counts as significant depends on the threshold you chose for your hypothesis. We usually speak of a successful test at 90% significance, but you can also go for 95% or even 99%. At 99% significance the test is very reliable and the result is highly unlikely to be a coincidence.

Examples: What can you test?

We can be brief: everything can be tested. However, some elements have more impact than others, which is why step #2 is so important. We A/B test continuously for our customers: buttons, colors, subject lines, and sending times, for instance. Below are some examples of elements we test for customers.

– Sending time after abandoning the shopping cart *

In this A/B test, we wanted to find out whether sending the email earlier had more impact. The test pitted the original sending time of 30 minutes after leaving the page (A) against 20 minutes after leaving the page (B). B came out as the winner: 93% on opens and 99% on the number of transactions (measured against the number of emails sent).

* This is the email you receive after leaving the shopping cart

– Less choice in the offer

In the email in which we display the consumer's most recent searches (the after-search mail *), we showed the last 5 by default. We ran an A/B test in which we reduced the offer to 3 searches. Reducing the number of choices from 5 to 3 resulted in a conversion increase of 66% at a significance of 99.12%. This shows that more choice is not always better.

* This is the email you receive after performing a search

– Subject line

A subject line is, of course, something that can be tested endlessly, and that is happening continuously. One example that won with 99% significance is mentioning the discount in the subject line. Another example is “These are the most special destinations” versus “See the most beautiful destinations”: the latter won with 68% more opens.


At PI marketing, A/B testing emails is a top priority, so we keep improving our email campaigns and achieving better results. How do you run your A/B tests, and what remarkable results have you encountered? Let us know below!