Overview and Use Cases

A/B testing allows you to test multiple versions of your game features, mechanics, content, or offers to determine which version performs best. It helps you make data-driven product decisions by isolating the effect of a change on key metrics such as retention, revenue, and engagement.

Each experiment assigns players to different groups (variants), exposes them to different configurations, and tracks their behavior. Whether you’re experimenting with monetization strategies, feature rollouts, or onboarding flows, A/B testing gives you the confidence to scale only what works.
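As a rough illustration of how variant assignment typically works, the sketch below uses deterministic hash-based bucketing: hashing the player ID together with the experiment ID keeps each player in the same group for the lifetime of an experiment, while different experiments bucket independently. The function name, IDs, and variant labels are hypothetical, not part of any specific SDK.

```python
import hashlib

def assign_variant(player_id: str, experiment_id: str, variants: list[str]) -> str:
    """Deterministically assign a player to one of the experiment's variants.

    Combining the experiment ID with the player ID means the same player
    always lands in the same group for a given experiment, but can fall
    into different groups across unrelated experiments.
    """
    digest = hashlib.sha256(f"{experiment_id}:{player_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Repeated calls with the same inputs always return the same variant.
group = assign_variant("player-123", "ftue-test-01", ["control", "variant_a"])
```

With equal-sized buckets like this, roughly half of players see each configuration; real platforms usually add weighted splits and holdout groups on top of the same idea.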

Use Case Examples

  • FTUE optimization: Test different onboarding flows to see which version of the first-time user experience drives more engagement.
  • Monetization strategy: Experiment with bundle pricing, in-game currency packages or ad frequency to find the version that maximizes revenue per user.
  • Feature rollouts: Gradually release a new feature (e.g. crafting, battle pass, tutorial reworks) to a small group before rolling it out more broadly, using performance indicators to validate the launch.
  • Live Ops strategy: Test different in-game event configurations or reward schedules to determine what keeps players engaged during limited-time campaigns.
  • Content tuning: Evaluate changes to difficulty, level pacing or game balancing by comparing player progression, completion rates or churn across variants.
Info: A/B testing works best when you change one variable at a time per test. This keeps your results clean and your conclusions reliable.