Jul 5th 2012 engineering
At one time or another, most developers come across bugs or problems with Unicode (about 3,720,000 results on Google for the query "unicode bug developer" at the time of this writing). Let me tell you about my experience over the last decade and why we have now implemented our own Unicode library to produce exactly the same results across devices and languages.
I first started to use Unicode in 2004 when I was developing text-mining software specialized in information extraction. This software was fully implemented in C++, and I used IBM's ICU library to be Unicode compliant (all strings were stored in UTF-16). I also used some of ICU's normalization functions based on decomposition, but I did not notice any major problems at that time. I only started to understand the dark side of Unicode later, when I used it in other languages like Java and Python, and later in Objective-C. My first surprise was discovering that a simple isAlpha(unicodechar c) method can return different results!
I started to look at the standard in detail and downloaded UnicodeData.txt (the file that contains most of the information about the standard; the latest version is available from the Unicode Consortium at unicode.org).
This file contains a description of every Unicode character. The third column holds the "General Category," an abbreviated value; some of the values are normative and some are informative (for more information, see the Unicode Standard). A few representative categories:

- Lu: Letter, uppercase
- Ll: Letter, lowercase
- Lo: Letter, other
- Mn: Mark, nonspacing
- Nd: Number, decimal digit
- No: Number, other
As you can see, there are quite a lot of categories. Some of them are very easy to understand, like "Lu" (Letter, uppercase) and "Ll" (Letter, lowercase), but some are more complex, like "Lo" (Letter, other) and "No" (Number, other), and this is exactly where the first problem begins.
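To see these categories in practice, here is a quick check using Python's standard unicodedata module (not the custom library this post describes), which exposes the General Category from UnicodeData.txt directly:

```python
import unicodedata

# Print the General Category of a few code points.
for ch in "Aa\u00bd\u4e94":  # A, a, ½, 五 (a CJK ideograph)
    print(f"U+{ord(ch):04X} {ch}: {unicodedata.category(ch)}")
# U+0041 A: Lu
# U+0061 a: Ll
# U+00BD ½: No
# U+4E94 五: Lo
```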
Let's take the Unicode character U+00BD (½) as an example. It is quite common in descriptions of spare parts, and it is defined as "No"… except that some Unicode libraries consider it not to be a number and return false from their isNumber(unicodeChar) method (Objective-C, for example).
In fact, the two most used methods, isAlpha(unicodeChar) and isNumber(unicodeChar), are not directly defined by the Unicode standard and are therefore subject to interpretation.
The consequence is that results differ across devices and languages! In our case this is a problem because our compiled index is portable, and we want exactly the same results on different devices and in different languages.
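Python illustrates this interpretation gap well: for U+00BD, the built-in string predicates disagree about "being a number" depending on which question you ask, even though the standard assigns the character a numeric value:

```python
import unicodedata

half = "\u00bd"  # ½, General Category "No" (Number, other)
print(half.isdigit())             # False: not a decimal digit
print(half.isnumeric())           # True: it has a numeric property
print(unicodedata.numeric(half))  # 0.5
```

A library that answers isNumber with the first predicate and another that uses the second will disagree on the exact same input.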
However, this is not the only problem! Unicode normalization is also a tricky topic. The Unicode standard defines a way to decompose characters (character decomposition mappings); for example, U+00E0 (à) decomposes into U+0061 (a) + U+0300 (combining grave accent). But most of the time you do not want a decomposition but a normalization: the most basic form of a string (lowercase, without accents, marks, etc.). This is key to being able to search and compare words. For example, the French word "Hétérogénéité" is normalized to "heterogeneite".
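As a quick illustration of canonical decomposition (again with Python's unicodedata, not the library described here), NFD normalization splits à into its base letter and combining mark:

```python
import unicodedata

# NFD yields the canonical decomposition: à -> a + combining grave accent
decomposed = unicodedata.normalize("NFD", "\u00e0")
print([f"U+{ord(c):04X}" for c in decomposed])  # ['U+0061', 'U+0300']
```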
To compute this normalized form, most people compute the lowercase form of a word (well defined by the Unicode standard), then compute the decomposed form, and finally remove all the diacritics. However, this is not enough: normalization cannot always be reduced to removing marks. For example, the standard German letter ß is widely used and replaced/understood as "ss" (enter ß in your favorite web search engine and you will discover that it also searches for "ss"). The problem is that there is no decomposition for ß in the Unicode standard, because it is not a letter with marks.
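The three-step recipe and its limitation can be sketched in a few lines of Python (naive_normalize is a hypothetical name for this sketch, not an API from the post):

```python
import unicodedata

def naive_normalize(s):
    # lowercase, decompose (NFD), then strip nonspacing marks (category Mn)
    decomposed = unicodedata.normalize("NFD", s.lower())
    return "".join(c for c in decomposed if unicodedata.category(c) != "Mn")

print(naive_normalize("Hétérogénéité"))     # heterogeneite
print(naive_normalize("Straße"))            # straße — the ß survives
print(unicodedata.decomposition("\u00df"))  # '' (empty: ß has no decomposition)
```

The accented French word comes out clean, but ß passes through untouched because there is simply no decomposition mapping to apply.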
To solve that problem, we need to look at the Character Fallback Substitution table, which is not part of most Unicode library implementations. This substitution table defines that ß can be replaced by "ss". There are plenty of other examples; for instance, U+0153 (œ) and U+00E6 (æ), letters of the French language, can be replaced by "oe" and "ae".
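A normalization that folds in such substitutions might look like the sketch below; the FALLBACK table contains only the three examples from this post, whereas the real CLDR fallback data is far larger:

```python
import unicodedata

# Minimal character-fallback table (real tables are much larger).
FALLBACK = {"\u00df": "ss", "\u0153": "oe", "\u00e6": "ae"}

def normalize(s):
    # lowercase + decompose + strip marks, then apply fallback substitutions
    decomposed = unicodedata.normalize("NFD", s.lower())
    stripped = (c for c in decomposed if unicodedata.category(c) != "Mn")
    return "".join(FALLBACK.get(c, c) for c in stripped)

print(normalize("Straße"))  # strasse
print(normalize("cœur"))    # coeur
```

Incidentally, Python's str.casefold() already maps ß to "ss", but it leaves œ and æ alone, which is exactly why a dedicated fallback table is needed for search-oriented normalization.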
In the end, this led us to implement our own Unicode library to ensure that our isAlpha(unicodechar) and isNumber(unicodechar) methods behave identically on all devices and in all languages, and to implement a normalize(unicodestring) method that incorporates the character fallback substitution table. As a bonus, our implementation of normalization is far more efficient because we perform it in one step instead of three (lowercase + decomposition + diacritics removal).
I hope you found this post useful and gained a better understanding of the Unicode standard and the limits of standard Unicode libraries. Feel free to leave a comment or ask for clarification.