A/B Testing

Optimizing your messages with OneSignal's multivariate testing platform

A/B testing is a tool that helps you test up to 10 different versions of a message and measure each version's effectiveness. It provides insights to improve engagement and help you achieve your objectives.

With A/B testing, you will get insight into which designs or content result in higher open and clickthrough rates. This is a particularly useful tool for marketers, especially lifecycle and growth teams. Insights from A/B testing can be used across the company to reach broader business goals, and you can improve the performance of your messages with more confidence.

For example, A/B testing can allow you to understand whether a push notification with an image performs better than a text-only push notification, or whether different text labels on a button lead to a higher clickthrough rate.


How to send A/B Tests

You can A/B test up to 10 message variants to determine which one performs best. Pro and Enterprise plans include 10 variants; all other plans include 2.

A/B tests are created in the OneSignal dashboard. If you are using Journeys, you can also set up "Split Branches" to split a segment of users so that each branch receives a different message.

A/B Test Settings

Select the percentage of your audience that should receive the A/B test messages, i.e. 25% means 25% of the selected segment(s) will randomly receive one of the variants. A message with 2 variants (A & B) targeting 25% will have 12.5% of the segment get variant A and another 12.5% get variant B. A message with 10 variants (A-J) targeting 25% will have 2.5% of the users get each variant.


Image showing percentage scaler for variants.
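The split arithmetic above can be sketched as a small helper. This is an illustrative function (`variant_shares` is a hypothetical name, not part of any OneSignal SDK), assuming OneSignal divides the selected audience percentage evenly across variants, as described above:

```python
def variant_shares(audience_pct: float, num_variants: int) -> float:
    """Return the percentage of the full segment that receives each variant,
    assuming the selected audience percentage is split evenly."""
    if not 2 <= num_variants <= 10:
        raise ValueError("A/B tests support 2-10 variants")
    return audience_pct / num_variants

# 25% audience, 2 variants (A & B): each variant reaches 12.5% of the segment
print(variant_shares(25, 2))   # 12.5
# 25% audience, 10 variants (A-J): each variant reaches 2.5% of the segment
print(variant_shares(25, 10))  # 2.5
```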


Best Practices: Selecting Target Audience Percentage

By default, the percentage of the target audience to receive the message is set to 25%; however, you may want to change this number depending on the size of your total audience.

For an A/B test to be effective, you must ensure that the percentage of the target audience selected contains enough users that the results cannot be easily skewed. Note also that the more variants you use, the larger your target audience percentage will need to be to ensure that a suitable number of users receive each variant.

While you may be tempted to set the audience percentage to 100%, be aware that this leaves no users in the segment who haven't yet received a variant and can later be sent the "winning" variant.

After some time has passed, you can send the winning variant to the remaining audience.

Experimentation Best Practices

Understand the benchmarks.

What do past results look like? Knowing your benchmarks is important for interpreting the results of your A/B tests and will also help you set a practical goal.

Have a clear goal and hypothesis

What results are you aiming to achieve? What do you think is going to happen? These guiding questions will allow you to build an effective A/B test.

Control your experiment

Only change one variable at a time to get clear, actionable insights. To do this, finalize the first variant, duplicate it to create the other variants, then go into each variant and change that one variable.

Have a control version

This is the version you would have sent before testing any variables. The results from your control are the baseline against which you measure the results from your variants.

Test the variants at the same time.

Timing matters, so you want to ensure subscribers see the messages at the same time and on the same day.

Continuously experiment to optimize your strategy.

There are many variables to test, such as:

  • Subject lines
  • Email layouts (e.g., image only vs. text only vs. image with text)
  • Different CTA types (e.g., “Start Trial” vs “Claim My Free Trial”)
  • Email copy lengths
  • Offers/promotions
  • Landing pages