
How to create and manage A/B Tests

Creating an A/B Test

info

Before you get started creating and running experiments, make sure you have integrated our latest SDK and set up the Remote Config call correctly.

  1. Create a test: Navigate to A/B Testing → Click the Create button.
  2. Define settings:
    • Give it a name
    • Select which build the test should target
    • Choose an enrollment rate (e.g. 10, meaning each user has a 10% chance of being added to the experiment)
    • Optionally apply filters to limit the test to specific segments of users, e.g. by country, platform, acquisition channel, etc.
  3. Configure variants:
    • You can create up to 10 variants
    • Each variant is defined by a config key and value (e.g. shop_layout = A, shop_layout = B)
    • Distribute users evenly or customize the allocation across variants
  4. Schedule your test: Set a start and end (optional) date for the test. You can also choose to automatically end the test once a goal metric has reached statistical confidence.
  5. Launch the test: Click Start Test. The GameAnalytics SDK will automatically assign eligible users to variants and begin tracking their behavior from the first session.
info

If one variant is already known to perform well (e.g. higher revenue or retention), you can adjust the user distribution in future tests to send more users to that variant—helping you get the most value while still testing new ideas.
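
The enrollment rate and variant allocation described above can be sketched conceptually. The snippet below is a minimal illustration of how deterministic enrollment and weighted variant assignment could work; the actual GameAnalytics assignment logic is internal to the SDK and backend, and the hash-based bucketing shown here is an assumption for illustration only.

```python
import hashlib

def assign_variant(user_id, test_id, enrollment_rate, variants):
    """Deterministically enroll a user and pick a variant.

    enrollment_rate: percentage (0-100) of users to enroll.
    variants: list of (variant_name, weight) pairs whose weights sum to 100.
    Returns a variant name, or None if the user is not enrolled.
    """
    # Hash user + test together so the same user always gets the same outcome.
    digest = hashlib.sha256(f"{test_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000  # 0..9999

    # Enrollment check: e.g. a rate of 10 enrolls the first 10% of buckets.
    if bucket >= enrollment_rate * 100:
        return None

    # Spread enrolled users across variants according to their weights.
    slot = bucket % 100
    cumulative = 0
    for name, weight in variants:
        cumulative += weight
        if slot < cumulative:
            return name
    return None

# Example: 10% enrollment, two evenly weighted variants.
variants = [("shop_layout=A", 50), ("shop_layout=B", 50)]
result = assign_variant("user-123", "shop-test", 10, variants)
```

Because the bucket is derived from a hash of the user and test IDs, a given user always lands in the same variant across sessions, which is what makes per-user tracking meaningful.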

Version selection

When you create a new A/B test, the first step is to choose a version. This determines which value types and character limits are available for your control group and variants.

Version             | SDK compatibility                 | Value types  | Max characters per value
------------------- | --------------------------------- | ------------ | ------------------------
Backward-compatible | Any SDK version                   | String       | 2 000
Enhanced            | Requires a compatible SDK version | String, JSON | 100 000

Choose Backward-compatible if you need to support older SDK versions or only need string values. Choose Enhanced if you want to use JSON values or need longer value strings.
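
As a quick illustration of the limits in the table above, a pre-flight check could validate a variant value before you enter it. This is a hypothetical helper, not part of the GameAnalytics API; only the type and length limits come from the table.

```python
LIMITS = {
    # Version name -> (allowed value types, max characters per value)
    "backward-compatible": ({"string"}, 2_000),
    "enhanced": ({"string", "json"}, 100_000),
}

def validate_value(version, value_type, value):
    """Check a variant value against the limits for the chosen version."""
    allowed_types, max_chars = LIMITS[version]
    if value_type not in allowed_types:
        return False, f"{value_type} values require the Enhanced version"
    if len(value) > max_chars:
        return False, f"value exceeds {max_chars} characters"
    return True, "ok"

# JSON is rejected for Backward-compatible tests:
ok, reason = validate_value("backward-compatible", "json", "{}")
```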

warning

If you select the Enhanced version, make sure your game uses an SDK version that supports JSON values. Players on older SDK versions will not receive the test configuration correctly.

SDKs compatible with the Enhanced version

  • C++ SDK version 5.0.0 or higher
  • Android SDK version 7.0.0 or higher
  • iOS SDK version 5.0.0 or higher
  • Unity SDK version 8.0.0 or higher
  • Unreal SDK version 6.0.0 or higher

User limit

By default, new A/B tests run without a user limit. The test accepts players until you stop it or it reaches the platform maximum of 1 000 000 users.

If you need a specific cap, click Enable User Limit and enter a value between 2 000 and 1 000 000. The test stops acquiring new players once it hits that number.

info

A/B tests are limited to a maximum of 1 000 000 users regardless of whether you set a manual limit.

Value types

When using the Enhanced version, you can choose between two value types for your control group and variants:

  • String allows you to enter plain text values.
  • JSON opens a full-screen JSON editor with syntax highlighting. Use this when your game expects structured configuration data.

To set the value type, use the value type selector next to each variant's value field. The selector switches between String and JSON. This works the same way as the value type selector in Remote Configs.

When JSON is selected:

  • The variant value field displays a {JSON} marker if a value has been set, or remains empty if no value is entered yet.
  • Click the value field to open the full-screen JSON editor.
  • Hovering over a JSON value in the test summary shows a tooltip preview of the JSON content.
tip

If you already use JSON values in Remote Configs, the editor and workflow are identical. The same formatting and validation rules apply.
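
On the client side, a game typically receives the variant value as a string and parses it when JSON is expected. The sketch below shows a defensive pattern for this, assuming `raw_value` stands in for whatever the SDK returns; the config keys and defaults are hypothetical.

```python
import json

DEFAULT_SHOP_CONFIG = {"layout": "A", "featured_items": 3}

def parse_shop_config(raw_value):
    """Parse a JSON variant value, falling back to defaults on any problem."""
    if not raw_value:
        return DEFAULT_SHOP_CONFIG
    try:
        config = json.loads(raw_value)
    except json.JSONDecodeError:
        return DEFAULT_SHOP_CONFIG
    # Fill in any keys the variant omitted.
    return {**DEFAULT_SHOP_CONFIG, **config}

config = parse_shop_config('{"layout": "B"}')
# config -> {"layout": "B", "featured_items": 3}
```

Falling back to defaults means players on older SDK versions, or in the control group, still get a working configuration.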

How to test A/B experiments on your device

To test an experiment setup:

  • Use the configureUserId method in the SDK to assign a new user ID.
  • This makes your device appear as a new user.
  • Make sure you're using the latest SDK version for this to work correctly.

This lets you simulate a new user and verify your A/B experiment setup before releasing it.
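
The idea behind the steps above can be sketched as follows. `configureUserId` is the real SDK call; the helper and UUID naming scheme are assumptions for illustration, and in the GameAnalytics SDKs configuration calls are typically made before initialization.

```python
import uuid

def fresh_test_user_id(prefix="abtest"):
    """Generate a unique user ID so this device enrolls as a brand-new user."""
    return f"{prefix}-{uuid.uuid4()}"

# In the game, pass this ID to the SDK before initializing it, e.g.:
#   GameAnalytics.configureUserId(fresh_test_user_id())
#   GameAnalytics.initialize(game_key, secret_key)
new_id = fresh_test_user_id()
```

Each run produces a new ID, so every launch of the build can re-enter the experiment as a fresh user.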

Managing A/B Tests

Completed Tests

An experiment is considered complete when it has run for the predetermined amount of time, or when it has acquired enough users for the statistical models to determine whether there is a clear winner.

However, an experiment will continue to run (i.e. acquiring new users and applying the Remote Config settings) until you decide to stop it. Once stopped, the experiment stops acquiring new users and results are no longer calculated. Note: the experiment results will still be available for your analysis.

Stopping tests

You can also pause enrollment or stop your tests altogether after launch.

  • Stop acquiring users: Halt enrollment without ending the test. Users who had already been assigned to a variant will continue to receive variant configs. The metric results table will continue to be updated as long as the experiment is active.
  • Stop: Ends the test. No more users will be assigned, the winning model will not run, and the results will stop calculating. Note: the experiment results will include data only up to the stopping point.