A/B Testing

Celestin Ntemngwa
2 min read · Dec 18, 2020

A/B testing is a method that compares the performance of two or more versions of a product or service (A, B, and so on). The most effective A/B tests are designed around one main variable (an item count, a color, and so on) and monitor one or two outcomes, such as level completion or profitability. Usually, only two to four versions are tested at any given time, and those versions are distributed amongst the players. For example, suppose we create three test cases for our game in which we change the number of enemies and measure how many players complete the level (in this case, level 4 of the game). You always want to test the original version against the new ones, so we have the initial version A, a version B with 10% fewer enemies, and a version C with 20% fewer enemies. The next step is to distribute these versions to our players.
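As an illustrative sketch (not part of the original example), the assignment step might look like the Python snippet below. The player IDs, the assign_variant helper, and the fixed seed are all hypothetical; the only values taken from the example are the three variants and their rough one-third split.

```python
import random

# Hypothetical variant weights: roughly a third of players see each version.
VARIANTS = ["A", "B", "C"]
WEIGHTS = [0.33, 0.33, 0.34]

def assign_variant(player_id: str, seed: int = 42) -> str:
    """Deterministically assign a player to a variant so they always see the same version."""
    rng = random.Random(f"{seed}:{player_id}")  # seed per player for a stable assignment
    return rng.choices(VARIANTS, weights=WEIGHTS, k=1)[0]

# Example usage with made-up player IDs.
for pid in ["player_001", "player_002", "player_003"]:
    print(pid, "->", assign_variant(pid))
```

Seeding on the player ID keeps the assignment stable across sessions, so a player who starts on version B keeps seeing version B for the duration of the test.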

We distribute the versions so that about a third of our players play each one. This is usually done using percentages: 33%, 33%, and 34%, which add up to 100%, go to versions A, B, and C respectively. Next, we collect the results and see how many players in each group completed the level. For instance, we might find that only 20% of Group A players completed level 4, while 80% of Group B and 70% of Group C did. These results imply that both B and C are better options than A. We could investigate further why C is not as high as B (maybe the level becomes too easy and players get bored). Nonetheless, the A/B test shows that version B is our best option, with the highest completion rate of 80%.
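To make the comparison concrete, here is a minimal Python sketch of how the completion rates could be computed and compared. The per-variant player counts are hypothetical, chosen only to reproduce the 20%, 80%, and 70% completion rates described above.

```python
# Hypothetical per-variant results: players assigned and players who completed level 4.
results = {
    "A": {"players": 330, "completed": 66},   # 20% completion
    "B": {"players": 330, "completed": 264},  # 80% completion
    "C": {"players": 340, "completed": 238},  # 70% completion
}

def completion_rates(data):
    """Return the level-4 completion rate for each variant."""
    return {variant: r["completed"] / r["players"] for variant, r in data.items()}

rates = completion_rates(results)
best = max(rates, key=rates.get)

for variant, rate in sorted(rates.items()):
    print(f"Version {variant}: {rate:.0%} completed level 4")
print(f"Best-performing version: {best}")
```

Running this prints the completion rate for each version and identifies B as the best performer, matching the conclusion of the example.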
