
As a SaaS company, we must keep our software and infrastructure ahead of the competition. And they must be reliable: server and API availability is as important as innovation. In other words, while we need to deliver product updates and new features as quickly and as often as possible, we also need to build resilient software by testing everything, which takes time away from innovating.

Experience has taught us how to balance these competing needs: test in production.

Why do resilient testing in production?

Resilient testing in production is the practice of deploying new code to production and “testing” it with live production traffic instead of running an exhaustive test suite beforehand. This is risky, and you usually want to avoid it, but as your application grows, it becomes impossible to test everything.

In the Algolia engine, we currently have more than twenty query parameters. Even if they were all simple boolean flags, testing every case would require about a million tests: twenty parameters with two possible values each gives 2^20 combinations.

When it comes to the time that testing takes, there are three things to take into account:

  • Time to write the tests
  • Time to maintain the tests 
  • Time to run the tests

Writing a million tests is already a long process, but once they’re there, these tests become part of the project. As such, they require maintenance like the rest of the source code, so each software iteration now requires more work.

Now let’s imagine that your team is large enough to write and maintain those tests: you still need to run them. At only ten milliseconds per test, the full suite adds up to about 2h45m of runtime, so every update to the code now takes 2h45m to validate.
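
A quick sanity check of that math, as an illustrative Python sketch (assuming a flat 10 ms per test):

    # Back-of-envelope cost of running the exhaustive suite.
    tests = 1_000_000          # ~2^20 combinations of twenty boolean flags
    seconds = tests * 0.010    # assumed cost: 10 ms per test
    print(f"{seconds / 3600:.2f} hours")   # 2.78 hours, i.e. roughly 2h45m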

Our customers buy our product for its existing features, but also for the upcoming ones. They expect us to release new features regularly to help them grow, while staying innovative and resilient. Therefore, we need to be efficient.

When we implement a new feature, we only write tests that validate the implementation and catch obvious corner cases. To verify the functionality thoroughly, we put a deployment setup in place that lets us test in production with minimal impact on our customers in case of a bug. This way, we can deliver new features on time.

Real-life example

Let’s take a concrete example: we recently rewrote the highlighting feature of Algolia. Testing it exhaustively would mean combining the million tests needed for all parameter combinations with the more than one million possible Unicode characters, which adds up to well over a billion tests. Even if each test took only a millisecond to run, a billion tests alone would take more than 11 days to complete.

We had to find a better solution. Therefore, instead of maintaining billions of tests and significantly slowing down our release process, we decided to test in production.

Our main concern was to avoid impacting customers, so we defined what we needed to put such a setup in place:

  • A progressive deployment process
  • A way to retry on a healthy infrastructure
  • Proactive issue detection

Anyone who subscribes to Algolia from our website gets access to a three-node cluster. Each node owns 100% of the data and can serve queries independently, which provides a resilient setup. Assuming an average availability of 95% for a bare-metal server, this setup allows us to provide 99.9875% availability: for your Algolia service to be completely down, all three servers would need to be down at the same time. The probability is therefore 5% of 5% of 5%, i.e. 0.0125% of potential downtime.
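
The same computation, spelled out as a short Python sketch:

    # Availability of a three-node cluster, assuming each bare-metal server
    # is independently available 95% of the time.
    node_availability = 0.95
    downtime = (1 - node_availability) ** 3    # all three nodes down at once
    print(f"potential downtime: {downtime:.4%}")   # 0.0125%
    print(f"availability: {1 - downtime:.4%}")     # 99.9875%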

Now, even with this setup, a software bug could cause a service outage. For that reason, we have put in place a progressive deployment process for the search engine that spans three days. This staggered rollout gives us enough time to detect issues.

If there’s a bug in the new version, our API clients take over with their retry strategy. If they target a broken node, they transparently retry with another one until they get a successful response. From the end user’s perspective, the issue is invisible.
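
Conceptually, the retry strategy boils down to the following sketch (hypothetical hosts and send_request; the real API clients also deal with timeouts and host ordering):

    import random

    # Minimal client-side retry: try each replica until one answers.
    def search_with_retry(hosts, send_request):
        last_error = None
        for host in random.sample(hosts, k=len(hosts)):  # spread the load
            try:
                return send_request(host)    # healthy node: done
            except ConnectionError as err:   # broken node: try the next one
                last_error = err
        raise last_error                     # only if every replica failed

    # Usage: search_with_retry(["node-1", "node-2", "node-3"], my_http_call)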

When we deployed the rewrite of the highlighting feature, we detected a normalization issue. Normalization converts text into a canonical form to simplify comparisons between different inputs. A normalized form is usually no longer than the original, and that is what the highlighting code expected. However, for some characters the normalized form is longer: ß (the German sharp S) is normalized into ss. During the rewrite, we added runtime pre-condition checks to ensure the normalized form was never longer than the original, and when that assumption broke, the checks fired exactly as designed. This is what revealed the bug.
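
The offending case is easy to reproduce in Python, using NFKC plus case folding as a stand-in for the engine’s own normalization:

    import unicodedata

    def normalize(text: str) -> str:
        out = unicodedata.normalize("NFKC", text).casefold()
        # The pre-condition the highlighting code relied on:
        assert len(out) <= len(text), f"normalized form of {text!r} got longer"
        return out

    normalize("Hello")  # fine: returns "hello"
    normalize("ß")      # AssertionError: "ß" casefolds to "ss" (1 -> 2 chars)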

When we deployed the new version on the first node, it immediately stopped responding to queries that generated a longer normalized form. However, thanks to our API clients’ retry strategy, none of our customers noticed. On our end, our monitoring system triggered an alert, and we quickly reverted the change to stabilize the engine. This gave us all the time needed to properly understand the issue, write a proper fix, and add the corresponding tests.

A suitable setup

There are three crucial elements that you need when you want to test in production:

  • A replicated infrastructure
  • Resilient software
  • A safe deployment strategy

Replicated infrastructure

Nowadays, configuring a replicated infrastructure is straightforward: all cloud providers offer load balancers in front of multiple VMs. In our case, we handle the replication of the search engine data at the cluster level, with each node owning 100% of the data. This way, nodes can reply to queries independently.

Resilient software

This part depends heavily on the software you’re building. In the Algolia engine, we have plenty of health checks in the code to verify a function’s pre-conditions and expected state. When the engine reaches an unexpected state, it stops processing to avoid returning corrupted data, which forces the API clients to transparently retry on a different node.
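
In spirit (hypothetical index API, shown in Python even though the engine itself is C++), such a health check looks like this:

    class UnexpectedStateError(Exception):
        """Fail fast rather than serve potentially corrupted data."""

    def serve_query(index, query):
        # Pre-condition: only serve from an index in the expected state.
        if index.checksum() != index.expected_checksum:
            raise UnexpectedStateError("index state mismatch")
        return index.search(query)  # on error, the client retries elsewhere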

Safe deployment strategy

Last but not least, the deployment process’s main goal here is to release new versions progressively while keeping risks under control.

At Algolia, we’ve split our infrastructure into four environments: testing, staging, production, and safe. Each environment has a different SLA:

  • Testing contains clusters that we use internally. Breaking it only impacts the company.
  • Staging contains search clusters of public-facing projects that are maintained by Algolians.
  • Production contains the clusters of our customers.
  • Safe contains the clusters of our premium SLA customers.

In our deployment strategy, we leverage those different environments to mitigate risk: we deploy first on the environments with the least customer impact.

In addition to the different environments, we also leverage the replication factor of three: in total, the process consists of twelve deployments, one per node per environment, rolled out in the following steps:

  1. We deploy on all three nodes of the testing clusters.
  2. We deploy on the first node of the staging clusters, then on the first node of the production clusters, then on the first node of the safe clusters.
  3. We wait one day.
  4. We deploy on the second node of the staging clusters, then on the second node of the production clusters, then on the second node of the safe clusters.
  5. We wait one day.
  6. We deploy on the third node of the staging clusters, then on the third node of the production clusters, then on the third node of the safe clusters.

The first step allows us to detect issues in the code that handles the distributed system and the interactions between nodes within a cluster.

Then, deploying on one node at a time lets us assess whether a cluster can support two different versions at the same time and whether the code is stable.

The rationale behind waiting a whole day before deploying on the next set of nodes is that it leaves enough time to detect performance issues, data corruption, or time-based issues. Once we reach this stage, we’ve usually detected most of the problems; the rest of the progressive deployment is there to help us identify uncaught breaking changes.
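
The whole schedule can be expressed as data; here is an illustrative Python sketch (not our actual deployment tooling):

    # Twelve deploys: one per node per environment, in four waves.
    ENVS = ["staging", "production", "safe"]
    waves = [[("testing", n) for n in (1, 2, 3)]]             # step 1
    waves += [[(env, n) for env in ENVS] for n in (1, 2, 3)]  # steps 2, 4, 6

    for i, wave in enumerate(waves):
        for env, node in wave:
            print(f"deploy on node {node} of the {env} clusters")
        if 0 < i < len(waves) - 1:    # steps 3 and 5: one-day pauses
            print("wait one day, watch the monitoring")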

Whenever we detect a bug, we immediately revert the new version. This sets the service back to a stable state and gives us enough time to fix the issue properly and add the corresponding tests.

With this approach, our test suite is driven by customer usage. This makes it particularly efficient and allows us to release a new version of the engine every week to sustain the needs of our customers, even with such a large codebase.

Good is better than perfect

The startup ecosystem is challenging. Small teams need to find efficient strategies to create better products than big companies with large teams.

Regularly releasing new features is as important as having a stable product with a great user experience that meets customer demand. This need for efficiency is why we must find a middle ground between writing enough tests to get good feature coverage and being able to release often.

You should always be careful about adding more tests, because they are time-consuming: it takes time to write them, to maintain them, and to run them. So how do you know when to write tests? The ninety-ninety rule applies: testing the first 90% of a feature is usually straightforward, while the last 10% takes as much time as the first 90%. That’s why it’s crucial to handle that remaining 10% based on customer usage instead of pushing for exhaustive coverage.

To mitigate the risk, take the time to design software and an infrastructure that support testing in production with the least possible impact on customers.

About the author
Xavier Grand

Engineering Manager

