How does GameAnalytics allocate (assign) users in an experiment?
Our A/B Testing feature only allows you to target new players when allocating them to an experiment. When a player installs your game, our server assigns that user an experiment ID, provided they meet all of the following conditions:
They’re a completely new user, not a returning player, which means we can’t find any information about them in our backend system.
The player is online, which means that there is an active connection between the game client and our GameAnalytics service. This means the game can receive A/B experiment values for the user.
The player falls within the target percentage of users, as defined when you originally set up your experiment.
The user matches all ‘include criteria’ (so your filters) which you used when setting up and defining your experiment.
The user doesn’t match any ‘exclude criteria’ (again, your filters) which you used when setting up and defining your experiment.
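The conditions above can be summed up as a single eligibility check. Here’s an illustrative Python sketch (not our actual backend logic; the field and function names are our own):

```python
import random

def is_eligible(user, experiment):
    """Return True only if every allocation condition holds (illustrative)."""
    if user["is_returning"]:                # must be a completely new user
        return False
    if not user["is_online"]:               # client must reach our service
        return False
    # The user must fall within the experiment's target percentage.
    if random.random() >= experiment["target_percent"] / 100:
        return False
    # The user must match all include criteria ...
    if not all(f(user) for f in experiment["include_criteria"]):
        return False
    # ... and must not match any exclude criteria.
    if any(f(user) for f in experiment["exclude_criteria"]):
        return False
    return True
```

For example, a brand-new online user passes a 100%-target experiment with no filters, while a returning or offline user never does.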
How does GameAnalytics assign users if we create multiple active experiments?
A single player can’t be in multiple tests; we’ll only assign a new user to one active experiment at a time. If there are multiple experiments running, our server will randomly assign your player to just one of the experiments they qualify for (based on the conditions we mentioned above).
It’s good to note that this means the number of users assigned to each experiment will be lower than the ‘target users %’ you defined when you set up your experiment.
Let’s look at an example:
Let’s assume you create two experiments that start at the same time, both set up to target 100% of your new users. Each new player installing your game will be eligible to be randomly assigned to either of these experiments, but can only be assigned to one of them (assuming they meet all of your other criteria). This means each experiment will have approximately 50% of new users assigned, rather than the 100% you defined.
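Here’s a minimal simulation of that split (illustrative Python with made-up experiment names, not how our backend is implemented):

```python
import random

# Two active experiments, both targeting 100% of new users.
# Each qualifying user is randomly assigned to exactly one of them.
rng = random.Random(42)           # fixed seed for a reproducible illustration
experiments = ["experiment_A", "experiment_B"]
counts = {e: 0 for e in experiments}

for _ in range(10_000):           # simulate 10,000 new installs
    counts[rng.choice(experiments)] += 1

# Each experiment receives roughly 50% of users, not the 100% configured.
```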
Can we run an experiment on multiple variables (config keys)?
Currently, our A/B Testing’s probabilistic models do not support multivariate analysis, which means you can’t run an experiment on multiple variables (at the moment). We’re still evaluating our options to make this possible in the future. If this is something you’re keen for us to work on next, make sure to vote for your favourite features, which’ll help us prioritise our roadmap.
But for now, you have two routes you can take to tackle this request:
You could run multiple experiments simultaneously. But only if you don’t need to evaluate the interactions between different variables. The allocated users in the experiments will be mutually exclusive (as they can’t be in more than one active experiment).
You could encode the modification of multiple parameters in your game within a single config key, and then create a single experiment with this config key (up to 4 variants in total). This way, you can test various combinations across the variants in your experiment, and try to find the optimal combination. Please note, you won’t see how the two parameters interact with each other, as the statistical analysis won’t have any information about this.
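One common way to encode multiple parameters in a single config key is a JSON string. This is a hedged sketch with hypothetical parameter names and values, not an official format:

```python
import json

# Four variants of one config key, each packing two parameters together.
variants = {
    "control":   json.dumps({"enemy_speed": 1.0, "coin_reward": 10}),
    "variant_1": json.dumps({"enemy_speed": 1.2, "coin_reward": 10}),
    "variant_2": json.dumps({"enemy_speed": 1.0, "coin_reward": 15}),
    "variant_3": json.dumps({"enemy_speed": 1.2, "coin_reward": 15}),
}

def apply_config(value):
    """Decode the combined value the SDK hands back and apply each parameter."""
    params = json.loads(value)
    return params["enemy_speed"], params["coin_reward"]
```

Your game code then decodes whichever variant value it receives and applies both parameters at once.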
How are offline users treated?
Before we talk about offline users, here’s how our SDK works. Our SDK boots up after a new user opens your game (and provides the necessary permissions). It gets working before their first session starts, so that it can fetch details of the specific test to which we assign that user. This requires a connection to our backend server.
If the user is offline in their first session, we can’t assign that player into an experiment until they're back online (that’s when our backend servers place them into an experiment and send the values to their device).
If you prefer to have such users (who were offline in their first session) excluded from the experiment, make sure to select the ‘exclude offline users’ checkbox during the experiment setup. This will make sure that we do not allocate (assign) an offline user to the experiment (even if they meet all qualifying criteria). Read more about setting up an experiment here.
Otherwise (if the checkbox is not selected), our backend will apply the default behavior, which is to include the user in the experiment once they get online. Please note that this means the user could see different game behavior from their next session onwards (when they start receiving their experiment variant config), which can influence the experiment by introducing bias into the statistical results.
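The two outcomes can be summed up in a small sketch (hypothetical names, a simplified model rather than SDK or backend code):

```python
def allocation_outcome(online_first_session, exclude_offline_users):
    """Return when (if ever) an otherwise-qualifying user joins the experiment."""
    if online_first_session:
        return "assigned in first session"
    if exclude_offline_users:
        return "never assigned"               # checkbox excludes them entirely
    # Default behaviour: assigned once they reconnect, so their variant
    # config only takes effect from a later session (a potential bias).
    return "assigned when back online"
```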
Once an experiment stops, do the variants’ users keep the experiment values?
Once an experiment stops, the users will be unassigned from that experiment’s variant. When the game session starts again, the player’s variant values will change back to the default as defined in the code. You can create a global Remote Config to enforce altered config key values for users in the game.
What happens if I create a global Remote Config alongside an active experiment?
When you run an experiment, each of its users gets a config key value, and this stays the same while the experiment is running. If you create a Remote Config with the same key, it doesn’t override those values, including for the control group. And if you set up a new experiment, you’ll need to define which value its users get. The values won’t change for the users in the control group, no matter what Remote Config you’ve got running.
Earlier this year we introduced a ‘Static control group value’, which was a huge improvement. It makes it much easier for you to create separate Remote Configs in your game, and it doesn’t impact any users you have in an experiment that uses the same config key. You can read more about this update here.
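Putting that together, the value a user ends up seeing can be modelled as a simple precedence check. This is an illustrative Python sketch of the behaviour described above, not our actual backend:

```python
def resolve_value(user, key, experiments, remote_configs, default):
    """Which config value does this user see? (simplified model)"""
    # Users in an active experiment (control group included) keep their
    # variant's value; a Remote Config on the same key doesn't override it.
    for exp in experiments:
        if user in exp["users"]:
            return exp["variant_values"][user]
    # Everyone else gets the global Remote Config value, if one is set.
    if key in remote_configs:
        return remote_configs[key]
    # Otherwise, the default defined in your code.
    return default
```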
What happens to users in an active experiment when we release a new build of our game?
Those players will remain in the experiment for as long as the experiment is active.
Let’s look at an example:
Say you set up an experiment to target users from build X, and while the experiment is active you release another build, called build Y. If any users in the experiment update their application from X to Y, they will continue to be part of the experiment. The variant’s config_key values that were defined when you set up the experiment will also not change for those users.
You can head to the Explore Tool to find out the exact split of your experiment users by the game build.
How can we verify A/B Experiments on our test device before releasing a new build?
Once our system recognizes a device (user_id) as ‘returning’, we will not assign it to any more experiments (as the A/B testing feature only considers ‘new users’).
You can reset recognized devices (user_id) by using the configureUserId SDK method. This sets the user ID to a new value, which won’t have been seen by our system yet. Remember to update to the latest version of our SDKs for this to work.
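A minimal sketch of the idea: generate a fresh random ID on the test device and pass it to the SDK’s configureUserId method before initialising the SDK. The helper below is our own illustration (in Python), not part of any GameAnalytics SDK:

```python
import uuid

def fresh_user_id(seen_ids):
    """Return a user ID our system can't have seen before (illustrative)."""
    new_id = str(uuid.uuid4())      # e.g. 'f47ac10b-58cc-4372-a567-...'
    assert new_id not in seen_ids   # a random UUID won't collide in practice
    return new_id
```

You would then pass the returned ID to configureUserId, making the device count as a brand-new user eligible for experiment assignment again.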
How does the SDK work in trying to get the values for Remote Configs and A/B testing?
We’ve created a flow diagram below. It describes the internal flow process of how our SDK works. It goes over how our SDK gets the config key values for Remote Configs and A/B Testing, in a bunch of different situations.
What happens if a config value related to a specified config key is not available?
If you try to request a config value for a specified config key from our SDK, but that key isn’t present, the SDK method will return the ‘default’ value specified for that key.
This can happen when a Remote Config for the key is not set up in our GameAnalytics tool, or when the device is offline and has not yet been able to fetch the remote configs. If no ‘default’ value has been specified, the SDK method will return ‘null’ as its default value.
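That fallback behaviour can be modelled with a tiny sketch. This is a simplified Python stand-in for the real SDK getter, not its actual implementation:

```python
def get_config_value(configs, key, default=None):
    """Simplified model of an SDK config getter's fallback behaviour."""
    # Returns the remote value if present; otherwise the caller's default;
    # otherwise None ('null'), when no default was specified either.
    return configs.get(key, default)
```

For example, if no Remote Config has been fetched yet, a call with a default returns that default, and a call without one returns None.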