Guides / A/B testing

A/B testing builds on Relevance Tuning and Analytics:

  • Relevance tuning lets you give your users the best search results.
  • Analytics makes relevance tuning data-driven, ensuring that your configuration choices are sound and effective.

Relevance tuning, however, can be tricky. The choices are not always obvious. It’s sometimes hard to know which settings to focus on and what values to set them to. It’s also hard to know if what you’ve done is useful or not. What you need is input from your users, to test your changes live.

This is what A/B Testing does. It lets you create two alternative search experiences with unique settings, put them both live, and see which one performs best.


Advantages of A/B testing

With A/B testing, you run alternative indices or searches in parallel, capturing click and conversion events to compare effectiveness.

You make small, incremental changes to your main index or search and have your users test those changes live and transparently before making them official.

A/B testing goes directly to an essential source of information, your users, by including them in the decision-making process in the most reliable and least burdensome way.

These tests are widely used in the industry to measure the usability and effectiveness of a website. Algolia’s focus is on measuring search and relevance: are your users getting the best search results? Is your search effective in engaging and retaining your users? Is it leading to more clicks, more sales, more activity for your business?

Implementing A/B testing

Algolia A/B Testing was designed with simplicity in mind. This user-friendliness enables you to perform tests regularly. Assuming you send click and conversion events, A/B Testing doesn’t require any coding intervention. It can be managed from start to finish by people with no technical background.

Collect clicks and conversions

To perform A/B testing, you need to send click and conversion events: this is the only way of testing how each of your variants is performing. While A/B testing itself doesn’t require coding, sending clicks and conversions does.
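As a sketch, here is the shape of the click and conversion events that end up in Algolia's Insights API (for example when sent through the search-insights library). The index name, event names, user token, and object IDs below are placeholders, not values from this guide.

```javascript
// Hypothetical click and conversion events as sent to Algolia's
// Insights API. All names and IDs are placeholders.
const clickEvent = {
  eventType: 'click',
  eventName: 'Product Clicked',
  index: 'products',
  userToken: 'anonymous-user-123',
  queryID: 'query-id-from-the-search-response', // requires clickAnalytics: true
  objectIDs: ['product-42'],
  positions: [3], // 1-based position of the clicked hit in the results
};

const conversionEvent = {
  eventType: 'conversion',
  eventName: 'Product Purchased',
  index: 'products',
  userToken: 'anonymous-user-123',
  queryID: 'query-id-from-the-search-response',
  objectIDs: ['product-42'],
};

// With the search-insights library, events like these are typically
// sent via aa('clickedObjectIDsAfterSearch', …) and
// aa('convertedObjectIDsAfterSearch', …).
```

The `queryID` ties each event back to the search that produced it, which is what lets Algolia attribute clicks and conversions to the right A/B test variant.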

Set up the index or query

Algolia offers two kinds of A/B tests:

  • Comparing different index settings
  • Comparing different search settings

Run the A/B test

After creating or selecting your indices, you can start your A/B tests in two steps.

  1. Use the A/B test tab of the Algolia dashboard to create your test.
  2. Run the A/B test.

After letting your test run and collect analytics, you can review, interpret, and act on the results. You can then create new A/B tests, iteratively optimizing your search.
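The same test can also be defined programmatically through Algolia's Analytics API (`POST /2/abtests` on `analytics.algolia.com`). A minimal sketch of the request body, with placeholder index names, split, and duration:

```javascript
// Sketch of an A/B test definition for Algolia's Analytics API
// (POST https://analytics.algolia.com/2/abtests). Index names,
// traffic split, and duration are placeholders.
const abTest = {
  name: 'My A/B test',
  variants: [
    { index: 'products', trafficPercentage: 50, description: 'control' },
    { index: 'products_variant_b', trafficPercentage: 50, description: 'candidate' },
  ],
  // End date as an ISO 8601 timestamp, here 14 days from now
  endAt: new Date(Date.now() + 14 * 24 * 60 * 60 * 1000).toISOString(),
};

// The request would carry the usual Algolia auth headers, e.g.:
// fetch('https://analytics.algolia.com/2/abtests', {
//   method: 'POST',
//   headers: {
//     'X-Algolia-Application-Id': 'YOUR_APP_ID',
//     'X-Algolia-API-Key': 'YOUR_API_KEY',
//   },
//   body: JSON.stringify(abTest),
// });
```

The two traffic percentages must add up to 100; everything else about the variants can differ.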

A/B testing in the Algolia dashboard

Access basic A/B testing analytics and create A/B tests from the Algolia dashboard’s A/B testing tab. Although the dashboard’s Search Analytics tab provides more detail, be aware that it includes searches from both the A and B variants.

Algolia automatically adds tags to A/B test indices so that you can view detailed analytics for each variant. To access a variant’s analytics, click the analytics button to the right of each index description in the A/B test tab: this redirects you to the Analytics tab with the appropriate settings (time range and analyticsTags) applied.

The automatically added tags follow the structure alg#abtest:${abTestID}+variant:${variant}. For example, creating A/B test “42” with two variants results in the two tags alg#abtest:42+variant:1 and alg#abtest:42+variant:2.
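A small helper reproducing that tag structure, useful for example when filtering analytics calls by `analyticsTags`:

```javascript
// Build the analytics tag Algolia assigns to an A/B test variant,
// following the structure described above.
const abTestTag = (abTestID, variant) => `alg#abtest:${abTestID}+variant:${variant}`;

console.log(abTestTag(42, 1)); // → alg#abtest:42+variant:1
console.log(abTestTag(42, 2)); // → alg#abtest:42+variant:2
```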

Example A/B tests

Algolia offers two kinds of A/B tests:

  • Comparing different index settings
  • Comparing different search settings

For index-based testing, you can test:

  • Your index settings
  • Your data format

For search-based settings, you can test any search-time setting, including:

  • Typo tolerance
  • Rules
  • Optional filters

Example: changing your index settings

Add a new custom ranking with the number_of_likes attribute

You’ve recently offered your users the ability to like your items, which include music, films, and blog posts. You’ve gathered “likes” data, and you’d like to use this information to sort your search results.

Before you implement such a big change, if you want to make sure it improves your search, you can do this with A/B testing.

  1. Create your A/B test indices.
  2. Add a number_of_likes attribute to your main catalog index (this is variant A in your A/B test), then create variant B as a replica of variant A.
  3. Adjust variant B’s settings to sort its records by number_of_likes.
  4. Name your test “Test new ranking with number of likes” in the A/B testing tab of the dashboard.
  5. Set the test to run for 30 days. This ensures you get enough data and a good variety of searches. Set the date parameters accordingly.
  6. Set variant B to only 10% of traffic because of the uncertainty of introducing a new sort order: you don’t want to change the user experience for too many users until you’re absolutely sure the change is desirable.
  7. When your test reaches 95% confidence or greater, see whether your change improves your search, and whether the improvement is large enough to justify implementation costs.
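Put together, this test might be defined as follows. This is a sketch with placeholder index names; creating the test through the dashboard, as described above, works just as well.

```javascript
// Hypothetical definition of the "number of likes" A/B test:
// 90% of traffic keeps the current index, 10% goes to the replica
// sorted by number_of_likes, for 30 days.
const likesTest = {
  name: 'Test new ranking with number of likes',
  variants: [
    { index: 'catalog', trafficPercentage: 90, description: 'current ranking (variant A)' },
    { index: 'catalog_likes', trafficPercentage: 10, description: 'sorted by number_of_likes (variant B)' },
  ],
  endAt: new Date(Date.now() + 30 * 24 * 60 * 60 * 1000).toISOString(), // 30 days from now
};
```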

Example: reformatting your data

Add a new search attribute: short_description

Your company has added a new short description to each of your records.

To see if adding the short description as a searchable attribute improves your relevance, here’s what you need to do:

  1. Configure your test index (you only need one index for this test).
  2. Add a new searchable attribute short_description to your main catalog index. Because you want to test a search-time setting for your index, you can use your main index as both variants A and B in your test.
  3. Create an A/B test with the name “Testing the new short description” in the A/B testing tab of the dashboard.
  4. Apply the restrictSearchableAttributes query parameter in variant B. Include all searchable attributes that you want to test, except short_description.
  5. Set the test length to 7 days. This assumes you have enough traffic so that 7 days of testing allows you to form a reasonable conclusion.
  6. Direct 30% of traffic to variant B, for the same reason as in the first example: because of the uncertainty, you’d rather not risk degrading an already good search with an untested attribute.
  7. Once you reach 95% confidence, you can judge the improvement and the cost of implementation to see whether this change is beneficial.
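As a sketch, the test above expressed as a single-index A/B test, where variant B overrides the query with `customSearchParameters`. The attribute names are placeholders: the list should contain every searchable attribute of your index except short_description.

```javascript
// Hypothetical single-index A/B test: both variants search the same
// index, but variant B restricts searchable attributes at query time
// so that the new short_description attribute is excluded.
const shortDescriptionTest = {
  name: 'Testing the new short description',
  variants: [
    { index: 'catalog', trafficPercentage: 70, description: 'short_description searchable (variant A)' },
    {
      index: 'catalog',
      trafficPercentage: 30,
      description: 'short_description excluded (variant B)',
      customSearchParameters: {
        // Placeholder list: every searchable attribute except short_description
        restrictSearchableAttributes: ['title', 'description', 'brand'],
      },
    },
  ],
  endAt: new Date(Date.now() + 7 * 24 * 60 * 60 * 1000).toISOString(), // 7 days from now
};
```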

Example: turning rules on and off to compare a query with and without merchandising

You can use A/B testing to check the effectiveness of your rules. This example compares searching with rules enabled and searching with rules turned off.

Your company has just received the new iPhone. You want this item to appear at the top of the list for all searches that contain “apple”, “iphone”, or “mobile”.

To use an A/B test to see whether putting the new iPhone at the top of your results encourages traffic and sales, here’s what you need to do:

  1. Create rules for your main catalog index (this is variant A in the test) that promote your new iPhone record.
  2. Configure your test index (you only need one index for this test).
  3. Ensure that rules are enabled for variant A.
  4. Create a test with the name “Testing newly released iPhone merchandising” in the A/B testing tab of the dashboard.
  5. Create your A/B test using the same index (variant A) for both variants.
  6. Add enableRules as the varying query parameter in one variant.
  7. Set the test length to 7 days. This assumes you have enough traffic so that 7 days of testing allows you to form a reasonable conclusion.
  8. Give variant B only 30% of traffic (a 70/30 split) because of the uncertainty: you’d rather not risk degrading an already good search with untested merchandising.
  9. Once you reach 95% confidence, you can judge the improvement and the cost of implementation to see whether this change is beneficial.
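The steps above might translate into the following definition, a sketch where both variants use the same index and only enableRules varies. The index name is a placeholder.

```javascript
// Hypothetical A/B test toggling Rules: variant A searches with rules
// enabled (the default), variant B disables them at query time.
const rulesTest = {
  name: 'Testing newly released iPhone merchandising',
  variants: [
    { index: 'catalog', trafficPercentage: 70, description: 'rules enabled (variant A)' },
    {
      index: 'catalog',
      trafficPercentage: 30,
      description: 'rules disabled (variant B)',
      customSearchParameters: { enableRules: false },
    },
  ],
  endAt: new Date(Date.now() + 7 * 24 * 60 * 60 * 1000).toISOString(), // 7 days from now
};
```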