Handling Natural Languages
On this page
- The engine supports many languages by matching the text typed in the search box with the text found in the index.
- If you want the engine to recognize plurals and stop words like “the”, “a”, “of”, you’ll need to specify the language used in your index.
- The engine uses language-specific dictionaries to remove stop words, detect declined or pluralized forms, separate compound words, and handle Asian logograms (CJK), Arabic vowels, and diacritics.
Algolia supports many languages
The Algolia engine supports many languages out of the box. It’s language agnostic and matches the text in the search box with the text in the index—this is called textual matching.
For example, suppose you have an index with only English text and a user searches with Japanese. In that case, the engine won’t return any results because the Latin alphabet doesn’t match Japanese characters. If your users search in Japanese, your index should likely contain Japanese text. If you want to support multiple languages, the most common solution is to create one index per language.
Algolia uses a wide array of natural language techniques, ranging from general, such as finding words or using Unicode, to specific, including distinguishing letters from logograms, breaking down compound words, and using single-language dictionaries for vocabulary.
The following is split into several natural language understanding strategies:
- Engine-level processing (normalization)
- Configuring typo tolerance
- Natural language processing (NLP) with Rules
Some language-based techniques (such as normalization) play an integral role in the engine and are performed with every indexing and search operation. These are generally not configurable. Other techniques rely on specialized dictionaries, which facilitate word and word-root detection, and come with several configurable options. Finally, depending on the use case, Algolia offers many other techniques (like typo tolerance and Rules) that can be turned on, turned off, or fine-tuned. These are also configurable through Algolia's API settings.
How the engine normalizes data
The engine performs normalization both at indexing and query time, ensuring consistency in how your data is represented and matched.
You can't globally turn off normalization, but you can exempt certain special characters with the keepDiacriticsOnCharacters setting. The normalization process is language-agnostic and applies equally to all supported languages.
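As a rough sketch, the setting is a string of characters to exempt from diacritic removal. The characters and the way the payload is applied are illustrative, not a definitive configuration:

```python
# Sketch of an index-settings payload that preserves the diacritics
# on "ø" and "é" during normalization. The characters chosen here
# are illustrative; adapt them to your data.
settings = {
    "keepDiacriticsOnCharacters": "øé",
}

# With an initialized Algolia index object, this would typically be
# applied with something like: index.set_settings(settings)
```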
What does normalization mean?
- Turn all characters to lower case
- Remove all diacritics, for example, accents, umlauts, and Arabic short vowels. However, you can define diacritics to keep with the keepDiacriticsOnCharacters setting
- Remove punctuation within words, for example, apostrophes
- Manage punctuation between words
- Use word separators, such as spaces or other characters
- Include or exclude non-alphanumeric characters (separatorsToIndex)
- Transform traditional Chinese characters into simplified Chinese
Some of these actions, such as removing punctuation within words, managing punctuation between words, and handling non-alphanumeric characters in general, are part of the tokenization process. Understanding how Algolia handles this process is key to understanding how Algolia concatenates and splits words at indexing and query time.
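For example, you can tell the tokenizer to index characters it would normally treat as separators. The following is a minimal sketch, with illustrative values:

```python
# Sketch: index non-alphanumeric characters that would otherwise be
# discarded as word separators. Keeping "+" and "#" means queries
# like "C++" or "C#" remain distinguishable from "C".
settings = {
    "separatorsToIndex": "+#",
}

# Applied with an initialized index object, typically:
# index.set_settings(settings)
```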
Adding language specificity using dictionaries
Some of these automated techniques don’t work in all languages.
No automatic language detection
Algolia doesn’t attempt to detect the language of your records or the language of your users as they type in queries.
Therefore, to benefit from language-specific algorithms, you need to tell the engine in what language you want your records to be interpreted.
- If you don't pick a language, the engine assumes that you want to cover all supported languages. The drawback is that mixing every language's peculiarities creates ambiguities. For example, Italian plural rules are also applied to English words: "paste", the plural of "pasta" in Italian, would also be treated as the plural of "pasta" in English. That's wrong, because "paste" is an English word in its own right (to spread).
- It’s okay to mix two or three languages in a single index and specify them in your settings. However, you should prepare your indices and records appropriately. For more on this, refer to the multiple languages tutorial.
Even though the engine can do most tasks without knowing the language of an index, some tasks require knowledge of the language. For example, the engine can only compare plural and singular forms if it knows the language. The same applies to removing small words like "to" and "the" (stop words).
Because the default language of an index is all supported languages, enabling removeStopWords or ignorePlurals without setting an index’s language will ignore the wrong plurals and remove the wrong stop words. It’s therefore essential to set the query languages of all your indices.
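A minimal sketch of such a settings payload, restricting language-specific processing to English (the values are illustrative, and the client call is only indicated in a comment):

```python
# Sketch: restrict language-specific processing to English so that
# stop-word removal and plural handling use only English dictionaries.
settings = {
    "queryLanguages": ["en"],  # languages applied to queries
    "indexLanguages": ["en"],  # languages applied at indexing time
    "removeStopWords": True,   # drop words like "the", "a", "of"
    "ignorePlurals": True,     # treat "car" and "cars" as equivalent
}

# With an initialized index object, typically:
# index.set_settings(settings)
```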
Several of the language-related methods require the use of dictionaries. With dictionaries, the engine can apply language-specific, word-based logic to your data and your end user’s queries. Algolia maintains separate, language-specific dictionaries for:
- Removing stop words
- Detecting pluralized and other declined forms (alternative forms of words due to number, case, or gender)
- Splitting compound words (also known as decompounding)
- Handling Asian logograms (CJK)
Algolia provides default dictionaries for all supported query languages. While Algolia regularly updates these dictionaries, you can also customize the stop words dictionaries, declensions dictionaries, and decompounding dictionaries for your use case.
Typo tolerance and languages
What’s a typo?
- A missing letter in a word: "hllo" → "hello"
- An extraneous letter: "heello" → "hello"
- Inverted letters: "hlelo" → "hello"
- A substituted letter: "heilo" → "hello"
Typo tolerance allows users to make mistakes while typing and still find the words they’re looking for. This is done by matching words that are close in spelling.
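The word-length thresholds at which typos are tolerated are configurable. A sketch, using what are (to my knowledge) the default values:

```python
# Sketch: tune when typo tolerance kicks in. By default, Algolia
# allows one typo on words of 4 or more letters and two typos on
# words of 8 or more letters; both thresholds can be adjusted.
settings = {
    "minWordSizefor1Typo": 4,
    "minWordSizefor2Typos": 8,
}
```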
Other spelling errors
Extra or missing spaces and punctuation don't count as typos. Algolia only handles them if typoTolerance is enabled (set to strict). For example:
- A missing space between two words is handled with splitting: “helloworld” → “hello world”
- An extraneous space or punctuation is handled with concatenation: “hel lo” → “hello”
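A sketch of the corresponding setting (typoTolerance also accepts true, false, and "min"; the value here is illustrative):

```python
# Sketch: set typo tolerance to its strictest mode, which also lets
# the engine handle split words ("helloworld") and concatenated
# words ("hel lo") as described above.
settings = {
    "typoTolerance": "strict",
}
```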
Typos as language-dependent
To illustrate the principle, English is a suitable example because it's phonemic: single characters represent sounds that combine to form words, which is what makes spelling errors possible.
Algolia doesn’t support typo-tolerance for logogram-based languages (like Chinese hanzi and Japanese kanji), as these languages use pictorial characters to represent partial or complete words instead of single letters to represent sounds.
For alphabet-based and phonemic languages (like English, French, Russian), you can configure the engine in these ways to improve typo tolerance:
Disabling typo tolerance and prefix search on specific words
The advancedSyntax parameter lets you turn off typo tolerance on specific words in a query by wrapping them in double quotes. For example, the query “foot problems” is typo tolerant on both query words, while a query written as "foot" problems (with double quotes around "foot") is only typo tolerant on "problems".
This parameter also disables prefix searching on words inside the double quotes.
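A sketch of how this combination might look. The search call itself is only indicated in a comment; the query text is illustrative:

```python
# Sketch: enable advancedSyntax so that double-quoted words in a
# query are matched exactly (no typo tolerance, no prefix matching).
settings = {
    "advancedSyntax": True,
}

# A query where "foot" must match exactly but "problems" remains
# typo tolerant. With an initialized index, typically:
# index.search('"foot" problems')
query = '"foot" problems'
```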
Natural language processing with Rules
You can set up Rules that tell the engine to look for specific words or phrases in a query and take a specific action or change its default behavior when it finds them.
For example, the engine can convert some query terms into filters. If a user types a filter value, say "red", you can use this term as a filter instead of a search term. With the query "red dress", the engine then only looks for the word "dress" within the "red" records (based on a filter attribute). Removing filter values from the query string and using them directly as filters is called dynamic filtering.
Dynamic filtering is only one way that Rules can understand and detect the intent of the user.
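A sketch of a Rule payload for the "red dress" example above. The objectID, attribute name, and facet value are all illustrative, and saving the Rule is only indicated in a comment:

```python
# Sketch of a Rule that turns the query word "red" into a filter on
# a hypothetical "color" attribute.
rule = {
    "objectID": "red-as-filter",
    "conditions": [
        {"pattern": "red", "anchoring": "contains"},
    ],
    "consequence": {
        "params": {
            "query": {"remove": ["red"]},  # drop "red" from the text query
            "filters": "color:red",        # ...and apply it as a filter
        }
    },
}

# With an initialized index object, typically:
# index.save_rule(rule)
```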