From the beginning at Algolia, we decided not to place any load balancing infrastructure between our users and our search API servers. We made this choice to keep things simple, to remove any potential single point of failure and to avoid the costs of monitoring and maintaining such a system.
An Algolia application runs on top of the following infrastructure components:
- a cluster of three servers, which handles both indexing and search queries;
- optionally, some Distributed Search Network (DSN) servers, which serve search queries only.
Instead of putting hardware or software between our search servers and our users, we chose to rely on the round-robin feature of DNS to spread the load across the servers. Each Algolia application instance is associated with a unique DNS record, which responds in a round-robin fashion with one of the bare metal servers that handles the given Algolia app.
We consider the most common and optimal usage of Algolia to be with a front-end implementation. In this case, mobile devices or laptops communicate directly with our bare metal servers. In such a context, we can expect a large number of DNS resolutions, each leading to only a few search requests. This is the ideal situation for round-robin DNS load balancing: many users resolve the DNS record to reach the Algolia servers, and each of them performs only a few searches, so the server load closely follows the round-robin DNS resolution. Additionally, to encourage even more frequent DNS resolutions, we lowered the DNS TTL to one minute.
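As an illustration, here is a minimal Go sketch of what a client observes with this setup; the hostname is a placeholder, not an actual Algolia DNS record:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Placeholder for an application's DNS record.
	host := "MYAPPID-dsn.example.net"

	// Resolve the record a few times. With round-robin DNS and a one-minute
	// TTL, successive resolutions rotate the order of the returned A records,
	// so clients naturally spread across the servers behind the record.
	for i := 0; i < 3; i++ {
		addrs, err := net.LookupHost(host)
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("resolved to:", addrs)
		time.Sleep(61 * time.Second) // wait for the TTL to expire
	}
}
```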
In the end, this system was simple: there was no dedicated hardware or software for us to manage, and things went pretty well.
That is, until Black Friday.
As mentioned earlier, we strongly recommend that our customers go with a front-end search implementation. Many factors motivate this choice, one of which is the ability to leverage our DNS-based load balancing. Yet this isn’t always doable: some clients have specific constraints, like a legacy design or security concerns, which lead them to opt for a back-end implementation, where their back-end servers relay all search queries to our infrastructure.
In this specific context, we already knew that our DNS-based load balancing was suboptimal: only a handful of back-end servers resolve our DNS records, and each of them caches the result for the duration of the TTL, so their traffic isn’t spread evenly across our servers.
That said, the main focus we had when we designed our infrastructure was resilience. This means that, for most customers, a single cluster node can handle all the search load. Consequently, an uneven load across the cluster nodes wouldn’t have any impact on the search experience.
Initially, the DSNs were introduced to increase performance for users who perform search requests far away from the main cluster, by bringing read-only servers closer to them. Yet we soon realized they were also an easy way to add search capacity in a given region, by scaling the servers horizontally to absorb more search requests.
We had a big customer with a back-end implementation whose load was too big to be handled by a single server. We had already deployed many DSNs in addition to the cluster, all in the same region, to absorb the search load coming from their back-end servers.
Yet when Black Friday arrived, their number of search queries surged. Even though we had worked on dimensioning the infrastructure to absorb the load, they ended up with slow search queries, and even some failing ones. For end users, this meant a highly degraded search experience with increased latency, during a time of the year when you expect an e-commerce website to be highly performant.
The load was uneven: the servers available on our side to handle their requests outnumbered the servers on their side able to send them. With our DNS-based load balancing, even in the best-case scenario, each of their servers would pick one of ours and stick to it for a few minutes, overloading that one while leaving others not used at all. As a hypothetical illustration, five back-end servers spread across a dozen of our servers can keep at most five of them busy, while the remaining seven receive no traffic.
This made us reconsider our DNS-based load balancing method, at least in this specific use case which combines heavy search load with back-end implementation.
To solve the issue during Black Friday, we went for a quick fix and deployed a rudimentary load balancer. We leveraged Nginx and its ability to proxy requests and load balance them across a group of upstream servers (in our case, the Algolia servers).
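To give an idea of what this first iteration looked like, here is a simplified sketch of such an Nginx configuration; the hostnames and certificate paths are placeholders, not our actual setup:

```nginx
# Simplified sketch, to be placed inside the http {} block of nginx.conf.
# Nginx load balances across the upstream group in round-robin by default.
upstream customer_upstreams {
    server d1.example.net:443;
    server d2.example.net:443;
    server d3.example.net:443;
}

server {
    listen 443 ssl;
    server_name search.customer.example.com;

    # Placeholder certificate paths.
    ssl_certificate     /etc/nginx/ssl/lb.crt;
    ssl_certificate_key /etc/nginx/ssl/lb.key;

    location / {
        proxy_pass https://customer_upstreams;
        proxy_set_header Host $host;
    }
}
```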
We saved the day, and the traffic was evenly load balanced. This confirmed we needed such a system in some cases. Yet at this point, it was more a workaround than an actual long-term solution. The whole thing was mainly static, with customer-specific parameters hardcoded in the Nginx configuration. This situation raised many questions: how would we deploy this for other customers, how would we keep the configuration in sync with our ever-changing infrastructure, and how would we handle server failures?
For the second iteration, the focus was to find a way to make the load balancer generic. The primary challenge was to dynamically build the list of upstream servers able to serve an incoming request. To solve this kind of issue, you can think of two opposite approaches: either give the load balancer full, up-to-date knowledge of which servers handle which application, so it can route any request straight away, or let it learn lazily, discovering the right servers the first time it sees a given customer and caching the result.
We went for the second solution, mostly because the amount of data the first approach would require us to go through for each request was too large to preserve a low latency on search requests. We implemented a slow-learning workflow, to keep everything as simple as possible and avoid managing a huge, complicated distributed data store.
Each time the load balancer receives a request from a customer it doesn’t already know about, it goes through a slower process to get the list of upstream servers associated with this customer. All the following requests for the same customer are handled much faster, as they then fetch the needed upstream information directly from the local cache.
We tried several technical solutions to achieve this, and decided to go with OpenResty. We combined it with Redis for the caching part, as OpenResty offers a convenient module to interact with Redis.
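Below is a simplified sketch of how such a setup can be wired together: the access phase looks up the cached upstream list for the application in Redis through the lua-resty-redis module, and the balancer phase picks one of those servers with ngx.balancer. The Redis key scheme, the application ID extraction, and the error handling are illustrative assumptions, and the slow path that fills the cache is left out for brevity.

```nginx
# Simplified OpenResty sketch, inside the http {} block.
upstream algolia_upstreams {
    server 0.0.0.1;   # placeholder, never used: the peer is chosen at runtime

    balancer_by_lua_block {
        local balancer = require "ngx.balancer"
        -- Pick one of the upstreams fetched during the access phase.
        local hosts = ngx.ctx.upstream_hosts
        local host = hosts[math.random(#hosts)]
        local ok, err = balancer.set_current_peer(host, 443)
        if not ok then
            ngx.log(ngx.ERR, "failed to set the current peer: ", err)
            return ngx.exit(502)
        end
    }
}

server {
    listen 443 ssl;
    # Placeholder certificate paths.
    ssl_certificate     /etc/nginx/ssl/lb.crt;
    ssl_certificate_key /etc/nginx/ssl/lb.key;

    location / {
        access_by_lua_block {
            -- Hypothetical naming scheme: the application ID is the first
            -- label of the Host header.
            local app_id = ngx.var.host:match("^(%w+)")

            local redis = require "resty.redis"
            local red = redis:new()
            red:set_timeout(100)  -- milliseconds

            local ok, err = red:connect("127.0.0.1", 6379)
            if not ok then
                ngx.log(ngx.ERR, "cannot reach redis: ", err)
                return ngx.exit(502)
            end

            -- Fast path: the upstream list is already cached for this app.
            local hosts, err = red:smembers("upstreams:" .. app_id)
            if not hosts or #hosts == 0 then
                -- Slow path (first request for this app): fetch the list from
                -- a slower source of truth and cache it. Elided in this sketch.
                ngx.log(ngx.WARN, "no cached upstreams for ", app_id)
                return ngx.exit(503)
            end

            ngx.ctx.upstream_hosts = hosts
            red:set_keepalive(10000, 100)  -- put the connection back in the pool
        }

        proxy_pass https://algolia_upstreams;
        proxy_set_header Host $host;
    }
}
```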
With this iteration, we managed to make our load balancer more scalable and easily maintainable, by finding mechanisms to remove any static configuration from it. Still, a few things were missing to make it production-proof, most notably around resilience: the load balancer had to cope with upstream server failures and infrastructure changes without manual intervention.
In the third and latest implementation, we introduced some mechanisms to make the whole system more failure-proof.
In addition to OpenResty handling the load balancing logic and Redis caching the dynamic data, we added lb-helper, a custom Go daemon. These three components together form the complete load balancer.
The lb-helper daemon has two different roles: it keeps the list of upstream servers per application up to date, out of band from the search queries, and it removes failing upstream servers from that list so they stop receiving traffic.
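A rough Go sketch of a helper daemon along those lines is shown below; the refresh source, endpoint paths, and data layout are assumptions for illustration, not our actual implementation:

```go
// Sketch of an lb-helper-style daemon: it keeps an in-memory list of healthy
// upstream servers per application, refreshes it periodically, and lets the
// load balancer evict failing hosts. Names and endpoints are illustrative.
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"sync"
	"time"
)

type helper struct {
	mu        sync.RWMutex
	upstreams map[string][]string // application ID -> healthy upstream hosts
}

// refresh periodically reloads the upstream lists from a source of truth
// (hardcoded here for the sake of the sketch).
func (h *helper) refresh() {
	for {
		fresh := map[string][]string{
			"MYAPPID": {"d1.example.net", "d2.example.net", "d3.example.net"},
		}
		h.mu.Lock()
		h.upstreams = fresh
		h.mu.Unlock()
		time.Sleep(30 * time.Second)
	}
}

// serveUpstreams returns the healthy upstreams for ?app=<id> as JSON, so the
// load balancer can fill its cache on the slow path.
func (h *helper) serveUpstreams(w http.ResponseWriter, r *http.Request) {
	h.mu.RLock()
	defer h.mu.RUnlock()
	json.NewEncoder(w).Encode(h.upstreams[r.URL.Query().Get("app")])
}

// evict removes a host reported as failing (?app=<id>&host=<host>) so that it
// is no longer handed out until a later refresh sees it healthy again.
func (h *helper) evict(w http.ResponseWriter, r *http.Request) {
	app, host := r.URL.Query().Get("app"), r.URL.Query().Get("host")
	h.mu.Lock()
	defer h.mu.Unlock()
	kept := make([]string, 0, len(h.upstreams[app]))
	for _, u := range h.upstreams[app] {
		if u != host {
			kept = append(kept, u)
		}
	}
	h.upstreams[app] = kept
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	h := &helper{upstreams: map[string][]string{}}
	go h.refresh()
	http.HandleFunc("/upstreams", h.serveUpstreams)
	http.HandleFunc("/evict", h.evict)
	log.Fatal(http.ListenAndServe("127.0.0.1:8090", nil))
}
```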
Today, we still mainly rely on our DNS-based load balancing, as it fits 99% of our use cases. That said, we’re now also aware that this approach has limitations in certain situations, such as customers with a back-end implementation combined with a heavy search load. In such a context, deploying a set of our load balancers restores an even load on the search infrastructure.
Also, these experiments showed us that we had built much more than a simple load balancing device: it adds an abstraction layer on top of our search infrastructure, making failures, infrastructure changes, and scaling almost fully transparent to our customers.
As we’re currently working on the fourth iteration, we’re attempting to introduce a latency-based algorithm to replace the current round-robin. The long-term plan is to check whether we can bring a worldwide abstraction layer on top of our search infrastructure. Yet, trying to go global at this scale brings a new set of constraints. That’s a topic for another blog post!
Paul Berthaux
Sr. Site Reliability Engineer