
From the beginning at Algolia, we decided not to place any load balancing infrastructure between our users and our search API servers. We made this choice to keep things simple, to remove any potential single point of failure and to avoid the costs of monitoring and maintaining such a system.

An Algolia application runs on top of the following infrastructure components:

  • a cluster of 3 servers that handle both indexing and search queries,
  • a number of DSN servers (Distributed Search Network, not to be confused with DNS). These are read-only replicas that serve only search queries; their primary purpose is to provide faster search to users located geographically far from the main cluster.

Instead of putting hardware or software between our search servers and our users, we chose to rely on round-robin DNS to spread the load across the servers. Each Algolia application is associated with a unique DNS record, which responds in round-robin fashion with one of the bare-metal servers handling that app.
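As a quick illustration, here's a minimal Go sketch that resolves a hostname a few times and prints the records it gets back; the hostname is a placeholder, not a real Algolia endpoint. With round-robin DNS, the record order rotates between resolutions, so clients that pick the first address spread themselves across the servers.

```go
// Minimal sketch: observe round-robin DNS rotation for an application
// endpoint. The hostname below is a placeholder, not a real Algolia record.
package main

import (
	"fmt"
	"net"
)

func main() {
	for i := 0; i < 3; i++ {
		// Each resolution may return the A records in a different order;
		// clients picking the first address get spread across servers.
		addrs, err := net.LookupHost("myapp.search.example.com")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println(addrs)
	}
}
```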

[Figure: Load balancing]

We consider the most common and optimal usage of Algolia to be a front-end implementation, where mobile devices or laptops communicate directly with our bare-metal servers. In this context, we can expect a large number of DNS resolutions, each followed by a few search requests. This is the ideal situation for round-robin DNS load balancing: many users resolve the DNS record, each performs a few searches, and the resulting server load matches the round-robin distribution. Additionally, to encourage even more frequent DNS resolution, we decreased the DNS TTL to one minute.

[Figure: Load balancing]

In the end, this system was simple: there was no dedicated hardware or software for us to manage, and things went pretty well.

That is, until Black Friday.

DNS-based load balancing limitations

Back-end implementation and uneven load balancing

As mentioned earlier, we strongly recommend that our customers go with a front-end search implementation. Several factors motivate this recommendation, one of which is the ability to leverage our DNS-based load balancing. Yet this isn't always doable: some customers have specific constraints, such as a legacy design or security concerns, that lead them to opt for a back-end implementation, where their back-end servers relay all search queries to our infrastructure.

In this specific context, we already knew that our DNS-based load balancing was suboptimal:

  • Now, a small group of customer servers performs a few DNS resolutions, then funnels a huge number of requests to whichever of our servers each resolution returned. Instead of 1,000 users making 10 queries each, we now have 1 user making 10,000 queries.
  • As sessions with our search servers can live longer, a back-end server can send even more requests without re-performing DNS resolution.
  • Sometimes, customer servers even override our DNS TTL so they can use their DNS cache longer.

That said, our main focus when we designed this infrastructure was resilience. For most customers, a single cluster node can handle the entire search load, so an uneven load across the cluster nodes has no impact on the search experience.

DSN for horizontal scaling

Initially, we introduced the DSNs to improve performance for users searching from far away from the main cluster, by bringing read-only servers closer to them. We soon realized they were also an easy way to add search capacity in a given region, by scaling the servers horizontally to absorb more search requests.

The Black Friday Incident

We had a big customer with a back-end implementation whose load was too heavy for a single server to handle. In addition to their cluster, we had already deployed many DSNs, all in the same region, to absorb the search load coming from their back-end servers.

Yet when Black Friday arrived, their search traffic surged. Even though we had worked on sizing the infrastructure to absorb the load, they ended up with slow and even failing search queries. For end users, this meant a highly degraded search experience, with increased latency, at a time of year when an e-commerce website needs to be at its most performant.

The load was uneven: the number of servers on our side able to handle their requests far exceeded the number of servers on their side sending them. In the best-case scenario with our DNS-based load balancing, each of their servers would pick one of ours and stick to it for a few minutes, overloading that one and leaving several others entirely unused.

[Figure: Load balancing]

This made us reconsider our DNS-based load balancing method, at least for this specific use case combining a heavy search load with a back-end implementation.

Here comes the Load Balancer

First iteration

To solve the issue during Black Friday, we went for a quick fix and deployed a rudimentary load balancer. We leveraged Nginx and its ability to proxy requests and load balance them across a group of upstream servers (in our case, the Algolia servers).

[Figure: Load balancing]
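A minimal sketch of what that first-iteration configuration could look like, with the customer-specific upstream group hardcoded as described above; all hostnames are placeholders, not our real machines:

```nginx
# First-iteration sketch: a static Nginx proxy spreading one customer's
# search traffic across a hardcoded group of upstream servers.
# Hostnames are placeholders, not real Algolia servers.
upstream customer_search {
    server search-1.example.net:443;  # cluster nodes
    server search-2.example.net:443;
    server search-3.example.net:443;
    server dsn-1.example.net:443;     # DSN replicas absorbing extra reads
}

server {
    listen 443 ssl;                   # TLS directives omitted for brevity
    location / {
        proxy_pass https://customer_search;  # round-robin by default
    }
}
```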

We saved the day: the traffic was evenly balanced, which confirmed that we needed such a system in some cases. At this point, though, it was more a workaround than an actual long-term solution. The whole thing was mostly static, with customer-specific parameters hardcoded in the Nginx configuration. This raised several questions:

  • How could we make such a system customer-agnostic?
  • How could we dynamically target the right group of search API servers for a given incoming request?
  • How could we make it handle our daily infrastructure operations, such as changing, adding, or removing servers over time?

Second iteration

For the second iteration, the focus was on making the load balancer generic. The primary challenge was to dynamically build the list of upstream servers able to serve an incoming request. For this kind of problem, there are two opposite approaches:

  • either the load balancers know in advance all the information they need to operate,
  • or they learn what they need to know when they handle the incoming requests.

We went for the second approach, mostly because the amount of data we would have to consult for every request was too large to keep search latency low. We implemented a slow learning workflow to keep everything as simple as possible and to avoid managing a huge, complicated distributed data store.

Each time the load balancer receives a request from a customer it doesn't yet know about, it goes through a slower process to fetch the list of upstream servers associated with that customer. All subsequent requests for the same customer are handled much faster, as the needed upstream information is then fetched directly from the local cache.

[Figure: Load balancing]

We tried several technical solutions to achieve this:

  • HAProxy offers Lua support for dynamic configuration, but from what we tested, it was too limited for our use case.
  • Envoy was (and still is) quite promising, but the learning curve is steep, and even though we managed to build a working PoC, its current load balancing algorithms are too restrictive for our long-term vision.
  • We tried building a custom load balancer in Go. The PoC worked fine, but it's difficult to assess the security and performance of such a solution on our own, and it would be much harder to maintain.
  • We finally tried OpenResty, which is Nginx-based and lets you run custom Lua code at different stages of request processing. It has a well-developed community, plenty of official and community-driven modules, and good documentation.

We decided to go with OpenResty. We combined it with Redis for the caching part, as OpenResty offers a convenient module to interact with Redis:

[Figure: Load balancing]
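Here's a simplified sketch of that lookup flow in OpenResty Lua, using the lua-resty-redis module; `fetch_upstreams_from_api`, the key names, and the timeouts are hypothetical stand-ins for our internal details:

```lua
-- Simplified sketch of the learn-on-first-request flow (hypothetical names).
local redis = require "resty.redis"
local cjson = require "cjson.safe"

local function upstreams_for(app_id)
    local red = redis:new()
    red:set_timeout(50)  -- milliseconds: keep the hot path fast
    local ok, err = red:connect("127.0.0.1", 6379)
    if not ok then
        return nil, err
    end

    -- Fast path: this customer's upstream list is already cached.
    local cached = red:get("upstreams:" .. app_id)
    if cached and cached ~= ngx.null then
        red:set_keepalive(10000, 100)  -- return the connection to the pool
        return cjson.decode(cached)
    end

    -- Slow path: first request for this customer; ask the internal API
    -- (fetch_upstreams_from_api is a placeholder for that call).
    local upstreams = fetch_upstreams_from_api(app_id)
    red:set("upstreams:" .. app_id, cjson.encode(upstreams))
    red:set_keepalive(10000, 100)
    return upstreams
end
```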

With this iteration, we made the load balancer more scalable and easier to maintain by removing all static configuration from it. Still, a few things were missing to make it production-ready:

  • How could we make sure it correctly and transparently handles upstream server failures?
  • How could we make sure we can still operate changes on the infrastructure, as we do daily?
  • What happens if it can no longer access our internal API?

Third (and current) iteration

In this third and latest iteration, we introduced mechanisms to make the whole system more failure-proof.

In addition to OpenResty, which handles the load balancing logic, and Redis, which caches the dynamic data, we added lb-helper, a custom Go daemon.

The complete load balancer now looks like this:

[Figure: Load balancing]

The lb-helper daemon has two different roles:

  • Abstract our internal API. OpenResty learns about the upstream servers through the local lb-helper, which periodically fetches data from our internal API. If the load balancer loses access to the internal API, it can keep operating with potentially slightly outdated data.
  • Manage failures. Each time an upstream server fails more than 10 times in a row, we consider it down and remove it from the active cache. From there, the lb-helper probes the downed upstream to check whether it's back. A sketch of both loops follows this list.
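Here's a condensed, hypothetical sketch of those two loops in Go; the endpoint URLs, the 30-second interval, and the `/health` probe are illustrative assumptions, not our production code:

```go
// Hypothetical sketch of the lb-helper daemon's two roles; URLs, intervals,
// and paths are illustrative, not our production values.
package main

import (
	"encoding/json"
	"net/http"
	"sync"
	"time"
)

type helper struct {
	mu        sync.RWMutex
	upstreams map[string][]string // customer app ID -> healthy upstreams
}

// refreshLoop periodically pulls upstream data from the internal API so
// OpenResty never queries that API directly. When a fetch fails, we keep
// serving the last snapshot, which may be slightly outdated but still works.
func (h *helper) refreshLoop() {
	for range time.Tick(30 * time.Second) {
		resp, err := http.Get("http://internal-api.example/v1/upstreams")
		if err != nil {
			continue // internal API unreachable: keep the previous snapshot
		}
		var fresh map[string][]string
		if json.NewDecoder(resp.Body).Decode(&fresh) == nil {
			h.mu.Lock()
			h.upstreams = fresh
			h.mu.Unlock()
		}
		resp.Body.Close()
	}
}

// probe checks whether an upstream flagged as down (10 consecutive
// failures) has recovered before putting it back in the active cache.
func (h *helper) probe(server string) bool {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("https://" + server + "/health")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	h := &helper{upstreams: map[string][]string{}}
	go h.refreshLoop()
	select {} // a real daemon would also answer OpenResty's lookups here
}
```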

Bottom line

Today, we still rely primarily on our DNS-based load balancing, as it fits 99% of our use cases. That said, we're now aware that this approach has limitations in certain situations, such as customers combining a back-end implementation with a heavy search load. In that context, deploying a set of our load balancers restores an even load on the search infrastructure.

[Figure: Requests per second distribution over time for a set of servers, first without, then with a load balancer.]

These experiments also showed us that we had built much more than a simple load balancing device: it adds an abstraction layer on top of our search infrastructure, making failures, infrastructure changes, and scaling almost fully transparent to our customers.

As we’re currently working on the fourth iteration, we’re attempting to introduce a latency-based algorithm to replace the current round-robin. The long-term plan is to check whether we can bring a worldwide abstraction layer on top of our search infrastructure. Yet, trying to go global at this scale brings a new set of constraints. That’s a topic for another blog post!

About the author
Paul Berthaux

Sr. Site Reliability Engineer

