Algolia’s DNA is really about performance. We want our search engine to return relevant results as fast as possible.
- not following the error-first or callback-last conventions;
- inconsistent API between the Node.js and the browser implementations;
- no Promise support;
- Node.js module named algolia-search, browser module named algoliasearch;
- could not use the same module in both Node.js and the browser (obviously);
- browser module could not be used with browserify or webpack, as it was exporting multiple properties directly in the window object.
The backend and frontend share the same code.
Here are the main features of this new API client:
- works in Node.js 0.10, 0.12, io.js and from Internet Explorer 8 up to modern browsers;
- has a Promise + callback API;
- is available at npmjs.com/algoliasearch and on cdn.jsdelivr.net;
- has builds for jQuery, AngularJS and Parse.com;
- is compatible with all module loaders like browserify or webpack;
- is fully tested in all supported environments.
Challenge #1: testing
Before merging the Node.js and browser modules, we had to remember how the current code was working. An easy way to understand what a piece of code does is to read its tests. Unfortunately, the previous version of the library had only one test, which was not enough to support a rewrite. Let’s go testing!
When no tests are written on a library of ~1500+ LOC, what are the tests you should write first?
Unit testing would be too close to the implementation. Since we were going to rewrite a lot of code later on, we had better not go too far down that road right now.
- initialize the library and issue commands
- check that the browser issues the corresponding HTTP request
From a testing point of view, this can be summarized as:
- input: method call
- output: HTTP request
Indeed, having to reach Algolia’s servers in each test would have introduced a shared testing state amongst developers and continuous integration. It would also have slowed down the TDD feedback loop because of heavy network usage.
This is neither unit testing nor integration testing: it sits somewhere in between. We also planned to build, in the coming weeks, a separate full integration testing suite that goes from the browser all the way to our servers.
faux-jax to the rescue
Our JavaScript client can issue HTTP requests in several different ways:
- use XMLHttpRequest for browsers supporting CORS,
- or use XDomainRequest for IE < 10,
- or use JSONP in situations where none of the preceding is available.
This seems complex, but we really want to be compatible with every browser environment.
Two serious candidates showed up to help test HTTP-based libraries: nock and the fake XMLHttpRequest from Sinon.js. Neither was a perfect fit:
- nock works by mocking calls to the Node.js http module, but we directly use the XMLHttpRequest object in browsers;
- Sinon.js was doing a good job but was lacking some XDomainRequest feature detections, and its fake XMLHttpRequest was really tied to the rest of Sinon.js.
As a result, we created algolia/faux-jax. It is now pretty stable and can mock XMLHttpRequest, XDomainRequest and even the http module from Node.js, which makes faux-jax an isomorphic HTTP mocking tool. It was not designed to be isomorphic from the start: adding Node.js support was easy thanks to moll/node-mitm.
The testing stack is composed of:
- substack/tape, isomorphic testing and assertion framework
- defunctzombie/zuul, local and continuous integration test runner
- algolia/faux-jax, isomorphic HTTP mocking library
The fun part is done, now onto the tedious one: writing tests.
Splitting test cases
We divided our tests into two categories:
- simple test cases: check that an API command will generate the corresponding HTTP call
- advanced tests: timeouts, keep-alive, JSONP, request strategies, DNS fallback, etc.
Simple test cases
Simple test cases were written as table driven tests:
Creating a testing stack that understands these test cases was some work, but the reward was worth it: the TDD feedback loop is great. Adding a new feature is easy: fire up the editor, add a test, implement, annnnnd done.
Complex test cases like JSONP fallback, timeouts and errors, were handled in separate, more advanced tests:
To be able to run our tests we chose defunctzombie/zuul.
For local development, we have an npm test task.
You can see the task in the package.json. Once run it looks like this:
But PhantomJS is not a real browser, so it should not be the only answer to “Is my module working in browsers?”. To solve this, we have an npm run dev task that exposes our tests through a simple web server accessible from any browser:
Finally, if you have virtual machines, you can test in any browser you want, all locally:
What comes next after setting up a good local development workflow? Continuous integration setup!
defunctzombie/zuul supports running tests on Saucelabs browsers. Saucelabs provides browsers as a service (manual testing or Selenium automation) and has a nice OSS plan called Open Sauce. We patched our .zuul.yml configuration file to specify which browsers we want to test. You can find all the details in zuul’s wiki.
Right now tests are taking a bit too long so we will soon split them between desktop and mobile.
Challenge #2: redesign and rewrite
Once we had a usable testing stack, we started our rewrite, the V3 milestone on GitHub.
We dropped the new AlgoliaSearch() usage in favor of just algoliasearch(). It allows us to hide implementation details from our API users.
new AlgoliaSearch(applicationID, apiKey, opts);
algoliasearch(applicationID, apiKey, opts);
client.method(param, callback, param, param);
client.method(params, param, param, params, callback);
This allows our callback lovers to use libraries like caolan/async very easily.
Promises and callbacks support
Promises are a great way to handle the asynchronous flow of your application.
— pouchdb (@pouchdb) March 10, 2015
We implemented both promises and callbacks; it was nearly a no-brainer. In every command, if you do not provide a callback, you get a Promise back. We use native Promises in compatible environments and jakearchibald/es6-promise as a polyfill.
The main library was also previously exporting window.AlgoliaSearchHelper to ease the development of awesome search UIs. We externalized this project and it now has a new home at algolia/algoliasearch-helper-js.
The previous version was directly exporting multiple properties in the window object. As we wanted our new library to be easily compatible with a broad range of module loaders, we made it UMD compatible. It means our library can be used:
- with a simple <script>, it will export algoliasearch in the window object
- using browserify, webpack, requirejs: any module loader
- in Node.js
This was achieved by writing our code in a CommonJS style and then using the standalone build feature of browserify.
- the jQuery build uses jQuery.ajax and returns jQuery promises
- the AngularJS build uses the $http service and returns AngularJS promises
- the Parse.com build uses Parse Cloud HTTP requests and Parse promises
- Node.js and io.js use the http module and native Promises
Every build then needs to define how to:
- perform HTTP requests, through AlgoliaSearch.prototype._request
- return promises, with AlgoliaSearch.prototype._promise
- use a request fallback where needed, with AlgoliaSearch.prototype._request.fallback
Using a simple inheritance pattern, we were able to solve a great challenge.
Finally, we have a build script that generates all the needed files for each environment.
Challenge #3: backward compatibility
But we also wanted to provide a good experience for our previous users when they wanted to upgrade:
- we re-exported previous constructors like window.AlgoliaSearch*, but they now throw an error when used
- we used npm deprecate on our previous Node.js module to inform our current user base that we had moved to a new client
- we created legacy branches so that we can continue to push critical updates to previous versions when needed
Make it isomorphic!
Having separate build implementations helped us a lot, because the Node.js build is just a regular build that only uses the http module from Node.js.
Then we only had to tell module loaders to load index.js on the server and src/browser/.. in browsers.
This last step was done by configuring browserify in our package.json:
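Assuming the browserify `browser` field convention, the configuration looks roughly like this (the paths are illustrative, not the package's actual layout):

```json
{
  "main": "index.js",
  "browser": {
    "index.js": "./src/browser/builds/algoliasearch.js"
  }
}
```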
If you are using the algoliasearch module with browserify or webpack, you will get our browser implementation automatically.
The faux-jax library is released under the MIT license, like all our open source projects. Any feedback or improvement ideas are welcome; we are dedicated to making our JS client your best friend 🙂