A/B testing is a feature within Game Overrides that splits your Override into variants you define, so you can measure the impact each one makes on your game. For example, you might believe starting players are given too many coins and are progressing through your game too quickly, or stop playing too early and miss in-app purchase (IAP) opportunities. To test this, you'd set two starting balances: the control and the variant.
Define the Override as an A/B test
- In the Unity Cloud Dashboard, open Game Overrides.
- Select Create Override.
- Give your override a name and description.
- Select Set up as A/B test.
- Select a goal metric from the dropdown.
- Select Next.
The A/B test targets all players. You can limit the number of players who receive the alternate configuration by setting a lower weight for the variant and a higher weight for the control. For A/B tests specifically, it is currently not possible to choose a specific Audience or use JEXL for targeting.
Configuring the content
On the Content page, under Variant 1, select Add Keys, then choose your Key name and Value. Variant 1 is the control and should match the default value of your configuration. Select Add a variant group to add a second variant, then switch to the new tab at the top and repeat the same steps with the different value you want to test. Currently, two variant groups are supported. As a best practice, test a single change so you can tell whether the variable you're testing impacts your metrics.
By default, your players are allocated equally between the control and variant group. Select Split Manually if you want to control the individual weighting of each group. We recommend exposing only as large a share of players to the variant as you are comfortable with.
You can change the percentage of players that will fall into each group. The values must add up to 100%. Select Next to continue.
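The split behaves like a weighted assignment of players into groups. As an illustration only (not Unity's actual implementation), a deterministic hash-based allocation could look like the sketch below; `player_id`, `experiment_id`, and the weight values are hypothetical.

```python
import hashlib

def assign_group(player_id: str, experiment_id: str, weights: dict[str, int]) -> str:
    """Deterministically assign a player to a group via a weighted hash bucket.

    `weights` maps group name -> percentage; the values must add up to 100.
    """
    if sum(weights.values()) != 100:
        raise ValueError("Group weights must add up to 100%")
    # Hash the player and experiment together so the same player always
    # lands in the same group for this experiment.
    digest = hashlib.sha256(f"{experiment_id}:{player_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a number in 0..99
    threshold = 0
    for group, weight in weights.items():
        threshold += weight
        if bucket < threshold:
            return group
    raise AssertionError("unreachable when weights sum to 100")

# Example: an 80/20 split between control and variant.
group = assign_group("player-123", "starting-coins-test", {"control": 80, "variant": 20})
```

Hashing rather than random sampling keeps each player's group stable across sessions, which matters for a test that runs over days.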
Schedule the test
Schedule your Override with a Start Date.
Select Get required sample size and adjust the experiment parameters in the side drawer to calculate how many players must fall into each variant for the experiment to be statistically significant.
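The number the dashboard calculates can be approximated with the standard two-proportion sample size formula. This is a minimal sketch, assuming a conversion-rate goal metric, a two-sided test, and conventional 5% significance and 80% power defaults; the exact parameters Unity uses are not documented here.

```python
import math
from statistics import NormalDist

def required_sample_size(baseline: float, expected: float,
                         alpha: float = 0.05, power: float = 0.80) -> int:
    """Players needed per group to detect a change from `baseline` to
    `expected` in a conversion-rate metric (two-sided two-proportion z-test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # approx. 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # approx. 0.84 for 80% power
    variance = baseline * (1 - baseline) + expected * (1 - expected)
    n = (z_alpha + z_beta) ** 2 * variance / (baseline - expected) ** 2
    return math.ceil(n)

# Example: detect a lift in a 10% conversion rate up to 12%.
n_per_group = required_sample_size(0.10, 0.12)
```

Note how the required size grows sharply as the effect you want to detect shrinks, which is why small tweaks need far more players than large ones.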
Set the priority for the Override and select Continue to create the Override.
Viewing the results and reporting
Once the A/B test has been running for 7 days and the required number of participants has been met, you can select Load experiment results from the Reporting tab.
You can then interpret the results and see how the change introduced to a subset of players as part of the A/B test impacts the chosen metric.
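How the dashboard decides significance isn't detailed here, but you can sanity-check a conversion-rate result yourself with a standard two-proportion z-test. A hedged sketch, where the conversion counts are made-up example numbers:

```python
import math
from statistics import NormalDist

def two_proportion_p_value(conversions_a: int, n_a: int,
                           conversions_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    # Pool the rates under the null hypothesis of no difference.
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Example (made-up numbers): control converts 400/4000, variant 480/4000.
p = two_proportion_p_value(400, 4000, 480, 4000)
significant = p < 0.05  # reject the null hypothesis at the 5% level
```

If the p-value is above your chosen threshold, the observed difference is consistent with random noise and shouldn't drive a rollout decision on its own.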
You can also view the corresponding Game Overrides reporting through Analytics events. In your Game Overrides list, select the Override name, then select the Reporting tab.
You can filter by metrics and adjust the time window to view results for today, the last 7 days, the last 14 days, the last 30 days, or the last quarter.
Make a decision and end the Override
Once the results of the A/B test have been loaded to review, the associated Game Override will stay active until you make a decision.
Select End Override at the top right of the Override's page and select one of the two options:
- End your Override: the variant group will no longer be served the alternate configuration associated with this A/B test and will return to the active base configuration for the project.
- Select a variant to roll out to the players targeted by your Override: the variant from this A/B test becomes the new active configuration for all players.
To confirm and apply this choice, check the reminder checkbox at the bottom of the prompt and select End Override.
Best practices
- Run your test with a goal metric in mind. If you wait until after your test has run to think about how you're trying to improve player behavior, you might not run the test optimally.
- Create a control group and one or more treatment or "variant" groups to compare your change against the default behavior of the game.
- Ensure you have enough players in each variant group to reach the calculated sample size. This keeps the test valid, because too small a group can affect the accuracy of the results.
- Test a single change at a time to see if the change is impacting your metrics. Testing multiple changes makes it difficult to know which changes affect the metrics.
- When running multiple tests, ensure that you’re not running overlapping tests against the same variable. This makes it easier to see which variable affects which metric.