A/B test

The A/B test page in the reports module of Spotler MailPro provides detailed statistics for mailings that include an A/B test. This page allows you to analyze test results, compare different mailing versions, and assess the performance of your test.

This page is populated once one or more A/B test mailings have been sent within the campaign.

In this article, you will learn more about:

  • A/B test specifications
  • A/B test results overview
  • A/B test results details
  • An example of evaluating an A/B test

A/B test specifications

ABtest 1.png

At the top of the A/B test page, you will find the active filters, which indicate which sent mailing is being analyzed. The winner of the test is determined based on the chosen metric, such as unique clicks.

Key test specifications include:

  • Test duration
  • Criteria for determining the winner (e.g., unique clicks or opens)
  • Test group distribution

For example, in a test with a duration of two hours, groups A and B each contained 2,064 records.
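
To make the distribution concrete, the sketch below derives the group sizes from a hypothetical list size and test percentage. Both input values are assumptions chosen for illustration only; they are not taken from MailPro.

```python
# Hypothetical inputs, chosen so that each test group comes out at
# 2,064 records, as in the example above.
total_records = 20_640  # assumed mailing list size
test_share = 0.10       # assumed share of the list per test variant

group_size = round(total_records * test_share)
winner_group = total_records - 2 * group_size

print(f"Group A: {group_size} records")           # 2064
print(f"Group B: {group_size} records")           # 2064
print(f"Winner mailing: {winner_group} records")  # 16512
```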

A/B test results overview

ABtest2.png

The overview section displays the results for each test variant based on unique clicks and opens. Below this, a graph visualizes the performance of both variants over the entire test period.

The graph contains two lines representing Variant A and Variant B. The data is plotted based on the chosen performance metric, such as unique clicks.
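
If you export the underlying data, a comparison like this graph can be reproduced. Below is a minimal plotting sketch using the per-interval unique clicks from the worked example at the end of this article; the interval boundaries are assumed for illustration.

```python
import matplotlib.pyplot as plt
from itertools import accumulate

# Unique clicks per interval, taken from the worked example below;
# three intervals over the two-hour test window are assumed.
clicks_a = [7, 5, 4]
clicks_b = [10, 16, 4]
minutes = [40, 80, 120]

plt.plot(minutes, list(accumulate(clicks_a)), marker="o", label="Variant A")
plt.plot(minutes, list(accumulate(clicks_b)), marker="o", label="Variant B")
plt.xlabel("Minutes since send")
plt.ylabel("Cumulative unique clicks")
plt.title("A/B test performance per variant")
plt.legend()
plt.show()
```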

A/B test results details

ABtest 3.png

Below the graph, a summary of each test mailing is provided:

  • Summary for Mailing A
  • Summary for Mailing B
  • Summary for the winner mailing (either A or B)

This summary contains the same data available in the general reports module of MailPro.

Example: Evaluating an A/B test

In the results overview and the graph in the images above, Variant A recorded 16 unique clicks in two hours, while Variant B had 30. Variant A therefore finished with nearly 50% fewer unique clicks than Variant B.

The graph reveals that immediately after sending, Variant B performed better (10 vs. 7 unique clicks). In the following interval, the gap widened (16 vs. 5 unique clicks), and in the final interval, both variants recorded four unique clicks.
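
These per-interval counts are consistent with the totals above, which can be checked directly (a trivial sketch, using only the numbers already given):

```python
from itertools import accumulate

clicks_b = [10, 16, 4]  # Variant B, unique clicks per interval
clicks_a = [7, 5, 4]    # Variant A, unique clicks per interval

print(list(accumulate(clicks_b)))  # [10, 26, 30] -> final total 30
print(list(accumulate(clicks_a)))  # [7, 12, 16]  -> final total 16
print(f"Variant A finished {1 - 16/30:.0%} behind")  # 47%
```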

Some A/B tests yield very close results. In such cases, consider increasing the test variation, extending the test period, or using a larger audience to achieve clearer results.
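
One common way to judge whether a result is clear enough is a two-proportion z-test. The sketch below applies this standard statistical check to the counts from the example; it is not necessarily the method MailPro uses internally.

```python
from math import sqrt, erfc

clicks_a, clicks_b = 16, 30  # unique clicks per variant
n_a = n_b = 2064             # records per test group

p_a, p_b = clicks_a / n_a, clicks_b / n_b
pooled = (clicks_a + clicks_b) / (n_a + n_b)
se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.3f}")  # z = 2.08, p = 0.038
```

With these counts, the p-value falls below the conventional 0.05 threshold, so the observed gap is unlikely to be pure chance. With closer results, the p-value rises, which is exactly when a longer test period or larger test groups help.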

When comparing the final summary, Variant B had a unique click rate of 2.9%, as did the winner mailing, while Variant A had a lower rate of 1.9%. This confirms Variant B as the clear winner of the test.