A/B Testing FAQ
On this page
- 1. Can I run an A/B/C test? In other words, can I run two A/B tests on the same index at the same time?
- 2. Can I A/B test different user interface elements such as fonts, styles, buttons, and language?
- 3. How long do I need to run a test for?
- 4. How should I determine my traffic split?
- 5. How can I see the in-depth analytics for each variant?
- 6. How can I tell which variant was used for a particular search?
- 7. Can I *force* a variant for certain searches?
- 8. Can I use metrics other than clicks and conversion?
- 9. I selected an x/y split, but that isn’t reflected in the searches/users for each variant. Why?
- 10. Can I extend an A/B test?
Can I run an A/B/C test? In other words, can I run two A/B tests on the same index at the same time?
You can’t have multiple tests running on the same index at the same time. However, you can run more than one A/B test simultaneously if each test uses different indices.
Can I A/B test different user interface elements such as fonts, styles, buttons, and language?
Use A/B testing to analyze relevance configuration changes and to measure record-specific data. It’s not intended to test user interface elements.
How long do I need to run a test for?
The most important factor for accurate A/B test results is running your test for at least two full business cycles, to account for seasonality. For example, if you see a predictable conversion trend over a week, set the test to last at least two weeks. A shorter period might not capture seasonality effects and could produce biased results.
Though not recommended, you can always stop a test early.
Low-traffic sites should run their tests longer than high-traffic sites.
Ensure your A/B test gathers enough data to reach a 95% confidence threshold. Reaching this threshold indicates that your results are statistically significant.
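Algolia computes significance for you, but as an illustration of what the 95% threshold means, here is a minimal sketch of a two-proportion z-test comparing conversion rates between variants (the figures are made up):

```ts
// Minimal sketch: two-proportion z-test for conversion rates.
// Algolia computes significance for you; this only illustrates the idea.
function zTestTwoProportions(
  conversionsA: number, usersA: number,
  conversionsB: number, usersB: number,
): number {
  const pA = conversionsA / usersA;
  const pB = conversionsB / usersB;
  const pooled = (conversionsA + conversionsB) / (usersA + usersB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / usersA + 1 / usersB));
  return (pA - pB) / se; // |z| > 1.96 ≈ 95% confidence (two-tailed)
}

// Example: variant B converts at 5.5% vs 5.0% for A, 10,000 users each.
const z = zTestTwoProportions(500, 10_000, 550, 10_000);
console.log(`z = ${z.toFixed(2)}, significant: ${Math.abs(z) > 1.96}`);
// Prints z = -1.59, significant: false — more data is needed.
```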
How should I determine my traffic split?
There is no set rule, but a 50/50 split between the two variants is a good default. If you’re testing a non-standard variant that differs substantially from your standard one, you can start with less traffic on the non-standard variant, but the test will take longer to converge.
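For example, here is a sketch of creating a test with a 50/50 split through the Analytics client (the app ID, index names, test name, and end date are placeholders):

```ts
import algoliasearch from 'algoliasearch';

const client = algoliasearch('YOUR_APP_ID', 'YOUR_ADMIN_API_KEY');
const analytics = client.initAnalytics();

// A 50/50 split between the control index and the variant index.
// 'products' and 'products_new_ranking' are placeholder index names.
await analytics.addABTest({
  name: 'New ranking formula', // placeholder test name
  variants: [
    { index: 'products', trafficPercentage: 50 },
    { index: 'products_new_ranking', trafficPercentage: 50 },
  ],
  endAt: '2024-07-01T00:00:00Z', // placeholder end date
});
```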
How can I see the in-depth analytics for each variant?
Algolia automatically adds tags to A/B test indices. In the A/B test tab in your dashboard, click the Analytics icon in the row of your listed index. This takes you to the Analytics tab with the appropriate `analyticsTags` applied, showing detailed analytics for that variant.
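If you’d rather query these analytics programmatically, the Analytics REST API accepts a `tags` parameter. A hedged sketch (the index name and tag value are placeholders — copy the real tag from the dashboard — and the analytics host can be region-specific):

```ts
// Sketch: fetch top searches for one variant via the Analytics REST API,
// filtered by the analyticsTags Algolia applies to A/B test traffic.
const appId = 'YOUR_APP_ID';
const apiKey = 'YOUR_ANALYTICS_API_KEY';
const params = new URLSearchParams({
  index: 'products',        // placeholder index name
  tags: 'abtest_variant_1', // placeholder tag; copy the real one from the dashboard
});

const res = await fetch(`https://analytics.algolia.com/2/searches?${params}`, {
  headers: {
    'X-Algolia-Application-Id': appId,
    'X-Algolia-API-Key': apiKey,
  },
});
console.log(await res.json()); // top searches for that variant
```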
How can I tell which variant was used for a particular search?
The `abTestVariantID` field in the JSON response of any search run with an A/B test enabled indicates which variant was used for that search. To include it, set the `getRankingInfo` search parameter to `true`.
Enabling this setting introduces a performance penalty.
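For example, with the JavaScript API client (the index name and query are placeholders):

```ts
import algoliasearch from 'algoliasearch';

const client = algoliasearch('YOUR_APP_ID', 'YOUR_SEARCH_API_KEY');
const index = client.initIndex('products'); // placeholder index name

// getRankingInfo adds ranking details to the response, including
// abTestID and abTestVariantID when an A/B test is running.
const res = await index.search('shoes', { getRankingInfo: true });
console.log(res.abTestID, res.abTestVariantID);
```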
Can I *force* a variant for certain searches?
No, you can’t force a variant for a search.
Can I use metrics other than clicks and conversion?
While Algolia provides click-through rate and conversion rate as default metrics, every search affected by an A/B test returns analytics for all exposed metrics at the variant level. You can also conduct further analysis based on your own metrics and business intelligence.
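For instance, you can pull each variant’s metrics over the API and feed them into your own analysis. A sketch using the Analytics client (the A/B test ID is a placeholder):

```ts
import algoliasearch from 'algoliasearch';

const client = algoliasearch('YOUR_APP_ID', 'YOUR_ANALYTICS_API_KEY');
const analytics = client.initAnalytics();

// Fetch the current results of an A/B test; each variant reports
// metrics such as clickThroughRate, conversionRate, and trackedSearchCount.
const abTest = await analytics.getABTest(42); // placeholder A/B test ID
for (const variant of abTest.variants) {
  console.log(variant.index, variant.clickThroughRate, variant.conversionRate);
}
```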
I selected an x/y split, but that isn’t reflected in the searches/users for each variant. Why?
If you selected an x/y split and it isn’t reflected in the searches or users for each variant, investigate further. For example:
- Your variant search may be performing much worse than expected, and your users are struggling to find things.
- Unexpected traffic, like a bot, may be affecting the A/B test. Turn off A/B testing before looking for unexpected traffic: set `enableABTest` to `false` (see the sketch after this list).
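For example (the index name and query are placeholders):

```ts
import algoliasearch from 'algoliasearch';

const client = algoliasearch('YOUR_APP_ID', 'YOUR_SEARCH_API_KEY');
const index = client.initIndex('products'); // placeholder index name

// With enableABTest: false, the search bypasses the A/B test entirely,
// so you can inspect traffic without the variant split in the way.
const res = await index.search('shoes', { enableABTest: false });
```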
Can I extend an A/B test?
You can’t edit or extend an A/B test once it has launched, nor change its name or other settings.