Before diving deep into how to create a proper A/B test for your mobile app, we first need to define what an A/B test is and how it works.
A/B testing, in very basic terms, is a test that compares two or more versions of something to figure out which performs better. To do that, you segment your audience into different groups and see how each group reacts to the version it is shown.
Let's use an example to illustrate: say you want to increase the number of downloads of your app. You believe the creatives your company is currently running could be improved if the Call to Action (CTA) were bigger and more eye-catching. So you create a new ad set of creatives with a large, centered button carrying your usual CTA. You split your target audience in two and expose one half to the current creatives and the other half to the new ones. After several days, you have enough data to compare both ad sets, and you find that the new creatives have not only a higher click-through rate (CTR) but also a higher conversion rate (CR), since the CTA now makes clear what users should do when they see the ad and where it will take them. Done: you just performed an A/B test.
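To make the two metrics concrete, here is a minimal sketch of how CTR and CR are computed. The impression, click, and install numbers are hypothetical placeholders, not data from the example above:

```python
# Hypothetical numbers for two ad sets; replace with your own campaign data.
ad_sets = {
    "A (current creatives)": {"impressions": 50_000, "clicks": 600, "installs": 45},
    "B (new CTA button)": {"impressions": 50_000, "clicks": 900, "installs": 90},
}

for name, d in ad_sets.items():
    ctr = d["clicks"] / d["impressions"]  # click-through rate: clicks per impression
    cr = d["installs"] / d["clicks"]      # conversion rate: installs per click
    print(f"{name}: CTR = {ctr:.2%}, CR = {cr:.2%}")
```

With these placeholder numbers, ad set B comes out ahead on both metrics, mirroring the outcome described in the example.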
A/B tests in the mobile marketing environment are mostly done in two basic fields:
In-app: These tests are done inside the app itself, for example changing the UX or UI to improve usage metrics such as engagement, session time, retention, or LTV.
App Campaigns: These A/B tests are done on running campaigns, seeking to improve CTR, CR, installs, and so on. The example above is a typical A/B test on an app campaign, in this case a user acquisition one.
But what are the steps I need to follow to do a proper A/B test?
Hypothesis: First of all, you need to develop a hypothesis; this is perhaps the most important part of the whole A/B test. A hypothesis is an assumption, a proposed idea that can be tested to see whether it is true. To come up with one, you normally need to do some research and analysis of your current marketing efforts. A solid, smart hypothesis can be a game-changer and boost your company's growth rate.
Segmentation: With your hypothesis defined and your test variables set up, you can start the test on different audience samples. These samples need to be similar in composition and large enough that the results are reliable when you analyze them; if the audiences are small, there is a high risk that the results will be affected by chance. There are several online tools that help you build your audience correctly and divide it into two (or more) groups, A and B. MMPs (mobile measurement partners) normally offer audience builder tools as a feature; if your app works with one, you can contact them to check.
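If you want to split an audience yourself rather than through a tool, a common approach is deterministic hash-based assignment: each user ID is hashed together with an experiment name, so the same user always lands in the same group without storing any assignment list. A minimal sketch, with a hypothetical experiment name:

```python
import hashlib

def assign_group(user_id: str, experiment: str = "cta_button_test", n_groups: int = 2) -> str:
    """Deterministically assign a user to group 'A' or 'B' by hashing
    their ID together with the experiment name. The same user always
    gets the same group; different experiments get independent splits."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "AB"[int(digest, 16) % n_groups]

# Example: assign 10,000 placeholder users and check the split is roughly 50/50.
users = [f"user_{i}" for i in range(10_000)]
counts = {"A": 0, "B": 0}
for uid in users:
    counts[assign_group(uid)] += 1
print(counts)
```

Hashing with the experiment name means a user who fell into group A for one test is not systematically placed in group A for every later test, which would bias your samples over time.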
Analysis: Once your test has run long enough and you have the necessary data, you can start the analysis. Normally this is the quickest part: you compare the results delivered by each variant. It's worth looking at all the metrics of the test; they might reveal trends or interesting signals for your next tests as well.
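When comparing two conversion rates, it helps to check whether the difference is larger than what chance alone would produce. One standard approach is the two-proportion z-test; the sketch below implements it with only the standard library, using hypothetical click/install counts:

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates,
    using the usual pooled normal approximation. Returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via the error function; p-value is the two-tailed area.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: 45 installs from 600 clicks (A) vs 90 from 900 (B).
z, p = two_proportion_z_test(45, 600, 90, 900)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below your chosen threshold (commonly 0.05) suggests the difference is unlikely to be chance; with these placeholder numbers the p-value is around 0.10, a reminder that a visibly higher rate can still be inconclusive at small sample sizes.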
Implement: If the results were positive, you can proceed to implement the winning variant in your campaign or app. If the test was inconclusive or the hypothesis was wrong, that doesn't mean you lost time: you still learned what not to do and what does not work.
Repeat: A/B testing is a continuous effort; you need to keep trying new hypotheses to improve your app's growth. So, once a test is finished, go ahead and start a new one. Over time, your ability to create relevant hypotheses will improve, and so will the impact of your tests.
A/B tests are key to learning more about your audience and what is relevant to them. This not only boosts your app's growth; running frequent A/B tests also gives you an advantage over your competitors.