Why Do A/B Testing?

Relevance is the difference between having a search, and having a great search. The vast majority of our API methods and settings, as well as many of our dashboard features, are devoted to giving you direct control over the relevance of your results.

Though your first concern is likely getting your search up and running, your next step should be improving relevance: take a step back and look closely at the results. Do they match what you want to see? Are some items appearing too often, not often enough, too far down, or not at all?

These are questions of relevance. To get better results, you can:

  • Tweak the engine’s default settings (typo tolerance, language handling, etc.),
  • Set up our data and attributes differently,
  • Add synonyms, Rules, filters, etc.,
  • Re-order your results with custom ranking and alternative sorting strategies,
  • And more …
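
As a rough illustration, several of the levers above are typically expressed as index settings. The sketch below is a hypothetical settings object, not a prescribed configuration: the attribute names (`title`, `description`, `popularity`, `brand`) are placeholders standing in for your own data.

```json
{
  "typoTolerance": true,
  "ignorePlurals": true,
  "searchableAttributes": ["title", "description"],
  "customRanking": ["desc(popularity)"],
  "attributesForFaceting": ["brand"]
}
```

Each key maps to one of the tuning areas listed above: typo tolerance and plural handling cover the engine defaults, `searchableAttributes` controls how your data is searched, `customRanking` re-orders results, and `attributesForFaceting` enables filters.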

Relevance tuning can quickly get quite advanced, but it’s necessary if you want a great search experience.

But how do you know what to change? Though there are many ways to improve relevance, A/B testing is one of the best. These structured, controlled tests ensure that your experiments are carefully set up, that they make sense, and that the conclusions you draw from them are sound, reliable, and data-driven.
