Unity Mediation A/B testing

Use Unity Mediation A/B testing to compare the performance of two variations of a waterfall side by side. Analyze the results of both variations over time, gain insight into which variation performs better, make an informed decision about which variation to select as the winner, and then apply the winning group's configuration to all of your users to increase your overall earnings.

Note: Unity Mediation A/B testing with functional reporting is available in Unity Mediation 0.3.0 and later.

Test the impact of various configurations for your test groups through:

  • Traditional waterfall, bidding, and hybrid (a mix of bidding and traditional waterfalls) line item configurations
  • Adding or removing ad sources, eCPM line items, or bidding line items
  • Changing the waterfall order of eCPM line items
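Unity Mediation waterfalls are configured in the dashboard, not in code, but the kinds of variations listed above can be sketched as plain data. The following is illustrative only; the group and ad source names are hypothetical.

```python
# Illustrative sketch of two A/B test groups for one waterfall.
# Group B tests a different eCPM order and removes one ad source;
# all names here are hypothetical, not Unity Mediation identifiers.

GROUP_A = {
    "line_items": [
        {"type": "bidding", "ad_source": "network_x"},
        {"type": "ecpm", "ad_source": "network_y", "ecpm": 12.0},
        {"type": "ecpm", "ad_source": "network_z", "ecpm": 8.0},
    ],
}

GROUP_B = {
    "line_items": [
        {"type": "bidding", "ad_source": "network_x"},
        {"type": "ecpm", "ad_source": "network_z", "ecpm": 8.0},
    ],
}

def ecpm_order(group):
    """Return the waterfall order of a group's eCPM line items."""
    return [li["ad_source"] for li in group["line_items"]
            if li["type"] == "ecpm"]
```

Here `ecpm_order(GROUP_A)` yields the traditional waterfall order, while `GROUP_B` represents a hybrid variation with one eCPM line item removed.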

To get started with A/B testing, see Set up an A/B test.

Note: In March 2022, an improvement was made that affects A/B tests created before December 12, 2021 with Unity Mediation 0.3.0 and later. Previously, the number of requests was overstated. As a result of this improvement, you might notice that some metrics displayed in the A/B group table have changed significantly for existing A/B tests. Until all of your users adopt your app version with Unity Mediation 0.3.0 or later, requests and related metrics will be inaccurate for both A/B groups and line items.

Best practices for starting an A/B test

When you start an A/B test, consider the following:

  • Although an app can have multiple waterfalls, you can only run one A/B test per waterfall at a time. Only after an A/B test for a specific waterfall is complete can you start another A/B test for that same waterfall.
  • Verify the configurations of both groups before you start an A/B test. While the A/B test is running, you cannot change the configuration of Group A or Group B; this restriction ensures an accurate testing process.
  • Users are randomly assigned to either Group A or Group B. They remain in their assigned group until a winning group is determined and the test ends.
  • If the same placement IDs are reused for different line items, revenue is estimated proportionally to the number of impressions received for each line item. For the most accurate revenue reporting, we recommend that you use a different set of ad source placement IDs in a waterfall between Group A and Group B.

Best practices for ending an A/B test

When you end an A/B test, consider the following:

  • You can let the test run for as long as you want, until you have gathered enough data between the test groups to select a winning group. Selecting a winning group automatically ends the A/B test.

    Note: To ensure more accurate test results and to better guide your decision-making, wait at least 14 days after you start the A/B test, or until the test has reached 10,000 impressions. This provides more meaningful data to show which group is performing better and to highlight noteworthy metrics between the groups over time.

  • You can only select one group (either Group A or Group B) as the winning group.

  • The results of the two groups are not merged. Instead, the waterfall configurations of the selected winning group are applied to all users, and the waterfalls and waterfall configurations of the losing group are removed from your active waterfalls.

  • When an A/B test is complete and a winning group is selected, you can no longer access the group that was not selected. To make further testing changes, start another A/B test.
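The duration guidance above can be encoded as a simple readiness check. The thresholds come from the note earlier in this section; the helper function itself is hypothetical, since the decision to end a test is made in the Unity dashboard.

```python
from datetime import date

# Guidance from the note above: wait at least 14 days or until the
# test reaches 10,000 impressions before selecting a winning group.
MIN_DAYS = 14
MIN_IMPRESSIONS = 10_000

def ready_to_select_winner(start_date, today, impressions):
    """Return True once the A/B test has gathered enough data
    to meaningfully compare the two groups."""
    days_elapsed = (today - start_date).days
    return days_elapsed >= MIN_DAYS or impressions >= MIN_IMPRESSIONS
```

For example, `ready_to_select_winner(date(2022, 3, 1), date(2022, 3, 10), 12_000)` is true because the impression threshold was reached even though 14 days have not elapsed.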