How to create and manage A/B Tests
Creating an A/B Test
Before you start creating and running experiments, make sure you have integrated our latest SDK and set up the Remote Config call correctly.
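As a quick sanity check, the snippet below sketches roughly what that setup looks like with the Android (Java) SDK. The game key, secret key, and build string are placeholders, and the exact initialization call may differ between SDK versions and platforms, so verify it against the SDK documentation for your setup.

```java
import com.gameanalytics.sdk.GameAnalytics;

public class MyGameActivity extends android.app.Activity {
    @Override
    protected void onCreate(android.os.Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Set the build string before initializing, so A/B tests can target it.
        GameAnalytics.configureBuild("1.0.0");

        // Placeholder keys: use the game key and secret key from your dashboard.
        GameAnalytics.initializeWithGameKey(this, "YOUR_GAME_KEY", "YOUR_SECRET_KEY");
    }
}
```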
- Create a test: Navigate to A/B Testing → Click the Create button.
- Define settings:
- Give it a name
- Select which build the test should target
- Choose an enrollment rate (e.g. 10, meaning each new user has a 10% chance of being added to the experiment)
- Optionally apply filters to limit the test to specific segments of users, e.g. by country, platform, acquisition channel, etc.
- Configure variants:
- You can create up to 10 variants
- Each variant is defined by a config key and value (e.g. shop_layout = A, shop_layout = B); see the sketch below this list for how the client reads these values
- Distribute users evenly or customize the allocation across variants
- Schedule your test: Set a start date and, optionally, an end date for the test. You can also choose to end the test automatically once a goal metric has reached statistical confidence.
- Launch the test: Click Start Test. The GameAnalytics SDK will automatically assign eligible users to variants and begin tracking their behavior from the first session.
If one variant is already known to perform well (e.g. higher revenue or retention), you can adjust the user distribution in future tests to send more users to that variant, helping you get the most value while still testing new ideas.
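To make the variant mechanics concrete, here is a rough sketch of how a client might read the shop_layout value from the example above, again using the Android (Java) SDK. The key name and values come from the example; showNewShopLayout and showDefaultShopLayout are hypothetical hooks into your own UI code, and the Remote Configs calls should be verified against your SDK version.

```java
import com.gameanalytics.sdk.GameAnalytics;

public class ShopScreen {

    void applyShopVariant() {
        // Remote Configs are fetched during SDK initialization; check that
        // they are ready before branching on a variant value.
        if (!GameAnalytics.isRemoteConfigsReady()) {
            showDefaultShopLayout(); // fall back until configs arrive
            return;
        }

        // Users not enrolled in the experiment receive the default ("A" here).
        String shopLayout = GameAnalytics.getRemoteConfigsValueAsString("shop_layout", "A");

        if ("B".equals(shopLayout)) {
            showNewShopLayout();
        } else {
            showDefaultShopLayout();
        }
    }

    // Hypothetical hooks into your game's UI code.
    void showNewShopLayout() { /* ... */ }
    void showDefaultShopLayout() { /* ... */ }
}
```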
How to test A/B experiments on your device
To test an experiment setup:
- Use the configureUserId method in the SDK to assign a new user ID.
- This makes your device appear as a new user.
- Make sure you're using the latest SDK version for this to work correctly.
This lets you simulate a new user and verify your A/B experiment setup before releasing it.
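A minimal sketch of that flow with the Android (Java) SDK is shown below. Note that configure calls must be made before initialization; the ID format and keys here are placeholder assumptions.

```java
import com.gameanalytics.sdk.GameAnalytics;
import java.util.UUID;

public class ExperimentTestActivity extends android.app.Activity {
    @Override
    protected void onCreate(android.os.Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // configure* calls must happen before initialize(). A fresh random ID
        // makes this device appear as a brand-new user, so it can be
        // (re-)enrolled into the experiment.
        GameAnalytics.configureUserId("test-" + UUID.randomUUID());

        // Placeholder keys: use the values from your GameAnalytics dashboard.
        GameAnalytics.initializeWithGameKey(this, "YOUR_GAME_KEY", "YOUR_SECRET_KEY");
    }
}
```

Each time you want to re-test enrollment, assign a new ID (or uninstall and reinstall the app) so the backend treats the device as a fresh user.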
Managing A/B Tests
Completed Tests
An experiment is considered complete when it has run for its predetermined duration, or long enough to acquire sufficient users for the statistical models to determine whether there is a clear winner.
However, an experiment will continue to run (i.e. acquiring new users and applying the Remote Config settings) until you decide to stop it. Once stopped, the experiment will stop acquiring new users and results will no longer be calculated. Note: the results gathered up to that point remain available for your analysis.
Stopping Tests
After launch, you can pause enrollment or stop your tests altogether.
- Stop acquiring users: Halts enrollment without ending the test. Users who have already been assigned to a variant will continue to receive their variant configs, and the metric results table will continue to update for as long as the experiment is active.
- Stop: Ends the test. No more users will be assigned, the winning model will not run, and results will stop being calculated. Note: the experiment results will include only data gathered up to the stopping point.