
In previous blog posts, we have discussed the high-level architecture of our search engine and our worldwide distributed infrastructure. Now we would like to dive a little deeper into the Algolia search engine to explain why we implemented it from scratch instead of building upon an existing open-source engine.

We have many different reasons for doing so and want to provide ample context for each, so we have split “Inside the Algolia engine” into several posts. As you learn more about our search engine, please let us know if there’s anything you would like us to address in future posts.

If you have ever worked on a search engine with significant traffic and indexing, you are undoubtedly familiar with the problem of trying to fine-tune your indexing to avoid negatively affecting search performance. Part one of this series will focus on one of the quintessential problems with search engines—the impact of indexing on search queries—and our approach to solving it.

Why does indexing impact search performance?

Indexing impacts search performance because indexing and search share two critical resources: CPU and disk. More specifically:

  1. Both indexing and search are very CPU intensive and compete for the available resources. Imagine a sudden spike in search queries arriving while you also need to run a large number of indexing operations: you run a significant risk of not having enough CPU to handle both.
  2. Both indexing and search perform a lot of disk I/O. Search often performs many read operations because the data is not always stored in memory, and indexing performs a large number of both reads and writes. Even on high-end SSD drives, the two end up battling for disk resources.

The obvious way to solve this problem is to try to reduce or remove the conflicts of access to the shared resources.

Classical approaches to solving this issue

There are a lot of different approaches to dealing with this issue, and the majority fall into one of the following three categories:

  1. Update the data in batches during a regularly scheduled, controlled period of low traffic. This approach works well when your users are located in a specific country but doesn’t really work if your users are distributed worldwide. More importantly, this approach prevents frequent updating of the index, which means that searches will often be performed on outdated data.
  2. Use different machines for indexing and search. In this approach, a specific set of machines is used for indexing, and the generated files are copied onto the search machines. This is pretty complex to set up but has the added benefit of removing the indexing CPU load from the search machine. Unfortunately, this does not solve the problem of shared I/O, and your indexing will be bound by the network, especially when a compaction is triggered (an operation performed on a regular basis by the search engine to aggregate all incremental updates, also called optimization). The main drawback to this approach is the substantial impact on indexing speed as a sizable delay is introduced between when an update is made and when that update is actually available to be searched.
  3. Throttle the indexing speed via a priority queue. The goal of this approach is to limit the number of concurrent indexing operations in order to minimize their impact on search performance. Throttling introduces a delay in indexing that is difficult to measure, especially when priorities are involved. The compaction described in the previous paragraph can worsen the delay further by causing a cumulative slowdown of indexing. This approach slows down indexing while still making it very difficult to avoid impacting search performance, especially during compaction phases.

While complex to implement, the second approach of using different machines for indexing and search is a good solution if indexing performance is not crucial to you. The other two approaches only partially solve the issue, as search remains impacted. Realistically, none of these approaches truly solves the problem of indexing affecting search performance: either indexing performance, search performance, or both end up suffering.

How we solved the race for CPU resources

By splitting indexing and search into two different application processes!

At Algolia, indexing and search are divided into two different application processes with different scheduling priorities. Indexing runs at a lower CPU priority than search via a higher nice level (nice is the standard mechanism for adjusting process scheduling priority on Unix-like operating systems). If there is not enough CPU to serve both, priority is given to search queries and indexing is slowed down. The same hardware can therefore absorb both workloads: in the case of a big spike in search queries, indexing simply slows down.
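
To make this concrete, here is a minimal sketch of how a process can lower its own CPU priority on a POSIX system. The nice value of 10 is an illustrative example, not Algolia's actual configuration:

```cpp
// Minimal sketch: lowering the indexing process's CPU priority.
// The nice value below is illustrative, not Algolia's actual setting.
#include <sys/resource.h>
#include <cerrno>
#include <cstdio>
#include <cstring>

int main() {
    // `0` targets the calling process; a positive nice value lowers its
    // scheduling priority. The search process keeps the default (0), so
    // the kernel favors it whenever CPU becomes scarce.
    if (setpriority(PRIO_PROCESS, 0, 10) != 0) {
        std::fprintf(stderr, "setpriority failed: %s\n", std::strerror(errno));
        return 1;
    }
    // ... run the indexing workload here ...
    return 0;
}
```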

As is the case with using different machines for indexing and search, separating them into different application processes introduces some complexity; for example, the publication of new data for search becomes a multi-process commit.

This problem is pretty common and can easily be solved with the following sequence (sketched in code just after the list):

  1. When the indexing process receives new jobs, build a new incremental update of the index.
  2. Commit the incremental update to disk.
  3. Notify all search processes of the data change on disk.
  4. Redirect new queries to the new data structure; queries already in flight continue to be served by the old one.
  5. When no more queries are running on the old data structure, erase it from disk.
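
The following is a simplified, single-process C++ sketch of steps 3 through 5 using reference counting. The real mechanism crosses process boundaries (on-disk files plus a notification channel), and all names below are illustrative:

```cpp
// Simplified sketch: readers grab a reference to the current index
// generation; a publish swaps in the new generation, and the old one is
// destroyed automatically once its last in-flight query finishes.
// Inter-process details (shared memory, notifications, on-disk cleanup)
// are omitted; the names here are hypothetical.
#include <memory>
#include <mutex>
#include <string>

struct IndexGeneration {
    std::string path;  // on-disk data structure this generation maps
    // ~IndexGeneration() would unlink `path` once the last reader is done.
};

class IndexHandle {
public:
    // Step 4: new queries are redirected to the latest generation.
    std::shared_ptr<const IndexGeneration> acquire() const {
        std::lock_guard<std::mutex> lock(mu_);
        return current_;
    }
    // Step 3: the indexing process notified us; swap in the new data.
    void publish(std::shared_ptr<const IndexGeneration> next) {
        std::lock_guard<std::mutex> lock(mu_);
        current_ = std::move(next);
        // Step 5 happens implicitly: when the last query holding the old
        // shared_ptr completes, the old generation's destructor runs.
    }
private:
    mutable std::mutex mu_;
    std::shared_ptr<const IndexGeneration> current_;
};
```

Reference counting makes step 5 automatic: the old data structure disappears exactly when its last in-flight query completes, so no query is ever served from a half-deleted index.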

This approach solves the problem of needing to share and prioritize CPU resources between indexing and search but is unfortunately something that most search engines on the market today cannot implement because indexing and search are executed in the same process.

How we solved the race for disk resources

The race for disk resources is a bit more complex to solve. First, we configured our kernel I/O scheduler to assign different priorities to read and write operations via the custom expiration timeout settings within the Linux deadline scheduler. (Read operations expire after 100ms, write operations expire after 10s). Those settings gave us a nudge in the right direction, but this is still far from perfect because the indexing process performs a lot of read operations.
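
For illustration, here is a hedged sketch of how such deadline-scheduler timeouts can be applied through sysfs. The device name sda is an assumption (adjust for your hardware), root privileges are required, and newer kernels expose the scheduler as mq-deadline:

```cpp
// Sketch: applying deadline-scheduler expiration timeouts via sysfs.
// The device name "sda" is an assumption; requires root.
#include <fstream>
#include <string>

static bool writeSysfs(const std::string& path, const std::string& value) {
    std::ofstream f(path);
    if (!f) return false;
    f << value;
    return static_cast<bool>(f);
}

int main() {
    const std::string queue = "/sys/block/sda/queue";
    bool ok = writeSysfs(queue + "/scheduler", "deadline");      // select the deadline scheduler
    ok &= writeSysfs(queue + "/iosched/read_expire", "100");     // reads expire after 100 ms
    ok &= writeSysfs(queue + "/iosched/write_expire", "10000");  // writes expire after 10 s
    return ok ? 0 : 1;
}
```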

The best way to address contention for finite disk resources is to make sure the search process performs no disk operations at all, which means all the data needs to stay in memory. This may seem obvious, but it is the only way to ensure that the speed of your search engine is not affected by indexing. It may also seem a bit crazy in terms of cost (buying additional memory), but in practice the allocated memory handles the vast majority of use cases without issue. We do have some users who want to optimize costs for huge amounts of data, but they make up a very small percentage of our users (less than 1%) and are addressed on an individual basis.
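
As an illustration of that rule, here is a minimal sketch that maps an index file and pins it in RAM so a query never has to touch the disk. The file path is hypothetical:

```cpp
// Sketch: map an index file and lock it into physical memory so queries
// never block on disk reads. Error handling is minimal; the path is
// hypothetical.
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdio>

int main() {
    const char* indexPath = "/var/data/index.bin";  // hypothetical index file
    int fd = open(indexPath, O_RDONLY);
    if (fd < 0) { std::perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { std::perror("fstat"); return 1; }

    void* addr = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (addr == MAP_FAILED) { std::perror("mmap"); return 1; }

    // mlock pins the mapping in RAM: page faults (and thus disk reads)
    // are paid once up front, never during a query.
    if (mlock(addr, st.st_size) != 0) { std::perror("mlock"); return 1; }

    // ... serve search queries from `addr` ...
    munlock(addr, st.st_size);
    munmap(addr, st.st_size);
    close(fd);
    return 0;
}
```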

Default to fast and reliable

Everything at Algolia is designed with speed and reliability in mind: your data is stored in memory, synced to a high-end SSD, and replicated across at least three different servers for high availability. Our ultimate goal is to remove all of the pains associated with building a great search feature, and solving the dependency between indexing and search was a very important step in getting there!

We take a lot of pride in building the best possible product for our customers and hope this post gives you some insight into the inner workings of our engine and how we got where we are today. As always, we would love your feedback. Definitely leave us a comment if you have any questions or ideas for the next blog in the series.

We also recommend reading the other posts in this series.

About the author
Julien Lemoine

Co-founder & former CTO at Algolia
