A/B Test Implementation Checklist
This guide is for anyone who is running A/B tests and wants to ensure a correct implementation. A correct implementation helps you avoid issues like:
- Reaching significance on tests where nothing changes between variants (A/A tests)
- None of the tests reaching statistical significance
- An A/B test behaving the opposite of how you anticipated
Send valid events
It’s important to send valid events to fully understand the effect your changes have on the performance of the index.
Identify your users
To run a successful A/B test, the engine needs to identify your individual users. You do this by setting a `userToken`. It’s recommended to generate it in a way that identifies the same user across multiple devices, for example by basing it on your internal user ID.
If you’re using InstantSearch or the API clients, you can set the `userToken` in the headers or search parameters.
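For example, here’s a minimal sketch assuming the algoliasearch v4 JavaScript client (the index name and token value are placeholders):

```ts
import algoliasearch from 'algoliasearch';

const client = algoliasearch('YOUR_APP_ID', 'YOUR_SEARCH_API_KEY');
const index = client.initIndex('products'); // placeholder index name

// Option 1: send the token as a search parameter.
index.search('running shoes', { userToken: 'user-42' });

// Option 2: send the token as an HTTP header instead.
index.search('running shoes', {
  headers: { 'X-Algolia-UserToken': 'user-42' },
});
```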
By default, the insights client generates a `userToken` for you and stores it in a cookie on your site. If you don’t override this behavior, you need to send this token with every search to link that search to the user.
Either of these options enables the engine to identify a user and keep them on a single variant during the test.
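As a sketch of the cookie-based option, assuming the search-insights library and the algoliasearch v4 client, you can read the generated token and forward it with each search:

```ts
import aa from 'search-insights';
import algoliasearch from 'algoliasearch';

// useCookie makes the insights client generate and persist an anonymous
// token (in search-insights v2 this is off by default).
aa('init', { appId: 'YOUR_APP_ID', apiKey: 'YOUR_SEARCH_API_KEY', useCookie: true });

const client = algoliasearch('YOUR_APP_ID', 'YOUR_SEARCH_API_KEY');
const index = client.initIndex('products'); // placeholder index name

// Read the stored token and send it with the search so the engine can
// pin this user to a single variant.
aa('getUserToken', null, (err, userToken) => {
  if (err) throw err;
  index.search('running shoes', { userToken: String(userToken) });
});
```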
Forwarding from a server
If you search from your back end, don’t use a server to forward queries from multiple users without differentiating them: otherwise, all users are associated with one variant, which can skew results. To resolve this, forward each user’s `userToken` or IP address.
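As a sketch, a Node.js proxy using Express and the algoliasearch v4 client might forward the identity like this (the route, index name, and IP fallback are assumptions):

```ts
import express from 'express';
import algoliasearch from 'algoliasearch';

const app = express();
const client = algoliasearch('YOUR_APP_ID', 'YOUR_SEARCH_API_KEY');
const index = client.initIndex('products'); // placeholder index name

app.get('/search', async (req, res) => {
  const results = await index.search(String(req.query.q ?? ''), {
    // Forward each end user's token (falling back to their IP) so queries
    // aren't all attributed to the server and lumped into one variant.
    userToken: String(req.query.userToken ?? req.ip),
    headers: { 'X-Forwarded-For': req.ip },
  });
  res.json(results);
});

app.listen(3000);
```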
If your plan gives you access to it, you can use the Personalization implementation help page on the dashboard to confirm your queries have associated `userToken`s.
Query parameters
Any parameters you send at query time override those set by the A/B test. For example, if you’re A/B testing Personalization, and variant B sets `enablePersonalization` to `true` while you also set it with every query, personalization is enabled for both variants and nothing can be interpreted from the results.
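As a sketch with the algoliasearch v4 client, the fix is simply to stop sending the tested parameter yourself while the test runs:

```ts
import algoliasearch from 'algoliasearch';

const client = algoliasearch('YOUR_APP_ID', 'YOUR_SEARCH_API_KEY');
const index = client.initIndex('products'); // placeholder index name

// Skews the test: forcing the parameter enables personalization on both variants.
// index.search('running shoes', { userToken: 'user-42', enablePersonalization: true });

// Lets the A/B test decide: variant B's enablePersonalization setting
// applies only to the users assigned to it.
index.search('running shoes', { userToken: 'user-42' });
```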
Relevant searches
Part of your search implementation may send searches from a dashboard or internal page. You should exclude any searches not performed by your users from analytics by setting the `analytics` parameter to `false`. To exclude the search from A/B testing as well, set `enableABTest` to `false` too.
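For instance, a sketch of an internal search with the algoliasearch v4 client:

```ts
import algoliasearch from 'algoliasearch';

const client = algoliasearch('YOUR_APP_ID', 'YOUR_SEARCH_API_KEY');
const index = client.initIndex('products'); // placeholder index name

index.search('internal dashboard query', {
  analytics: false,    // keep this search out of search analytics
  enableABTest: false, // and out of the A/B test
});
```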
Export A/B test data to an external analytics platform
You can use the `getRankingInfo` parameter to retrieve the A/B test ID and variant ID. This can help you track user variants and behavior in third-party tools you already use, like Google Analytics.
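For example, a sketch assuming the algoliasearch v4 client, where the response from an index under an active A/B test includes the test and variant IDs:

```ts
import algoliasearch from 'algoliasearch';

const client = algoliasearch('YOUR_APP_ID', 'YOUR_SEARCH_API_KEY');
const index = client.initIndex('products'); // placeholder index name

index
  .search('running shoes', { userToken: 'user-42', getRankingInfo: true })
  .then((res) => {
    // Forward these IDs to your analytics tool, for example as event
    // parameters in Google Analytics, to segment behavior by variant.
    console.log(res.abTestID, res.abTestVariantID);
  });
```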
By default, the engine creates an analytics tag for each A/B test variant. You can filter on these tags when exporting analytics data to view statistics in other systems.
Avoiding outliers
There are a few steps you can take to prevent outliers, such as bots crawling your web pages and performing thousands of searches without any interactions. Because bots are treated as users, they can skew the data within the A/B test and drastically affect the click-through rate.
Using rate limits
You can set up API keys with rate limits to cap the number of API calls per hour and per IP address. The default value is `0` (no rate limit); set this value to prevent automated searches from skewing A/B test results.
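For example, a sketch assuming the algoliasearch v4 client and an admin API key; the limit of 100 queries per IP per hour is an illustrative value, not a recommendation:

```ts
import algoliasearch from 'algoliasearch';

const client = algoliasearch('YOUR_APP_ID', 'YOUR_ADMIN_API_KEY');

client.addApiKey(['search'], {
  description: 'Rate-limited search key for the public site',
  maxQueriesPerIPPerHour: 100, // the default of 0 means no rate limit
});
```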
Using HTTP referrers
Most browsers send referrer URLs, either through the `Referer` or the `Origin` HTTP header. Like all HTTP headers, these can be spoofed, so you shouldn’t rely on them to secure your data. For example, while `Referer` should always be valid when requesting from a browser, tools like cURL can override it. Some browsers, like Brave, don’t send these headers at all.
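That said, a referrer restriction on your search API key can still filter out casual scraping. A sketch assuming the algoliasearch v4 client and a placeholder domain:

```ts
import algoliasearch from 'algoliasearch';

const client = algoliasearch('YOUR_APP_ID', 'YOUR_ADMIN_API_KEY');

client.addApiKey(['search'], {
  description: 'Search key limited to requests from example.com',
  // Requests whose Referer doesn't match are rejected. Since the header
  // can be spoofed, treat this as a filter, not a security boundary.
  referers: ['https://www.example.com/*'],
});
```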
Using robots.txt
Tell Googlebot not to crawl your search pages at all with a well-configured robots.txt file; you can refer to this guide by Google. Some websites let Google crawl their home search page but prevent it from going any further (such as crawling every results page).
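As a sketch, assuming your search lives under /search, a robots.txt like this blocks Googlebot from result pages while still allowing the bare search page (Google supports the $ end-of-URL wildcard):

```
User-agent: Googlebot
# Block search result pages...
Disallow: /search
# ...but still allow the bare /search page itself.
Allow: /search$
```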