
Crawler data extraction overview

The Crawler processes pages as follows:

  1. Page is fetched.
  2. Links and records are extracted from the page.
  3. Extracted records are sent to Algolia.
  4. Extracted links are added to your crawler’s URL database.

The process repeats until all the required pages have been crawled.
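The steps above can be sketched as a loop. This is an illustrative in-memory simulation, not the Crawler's actual implementation; `pages` stands in for the network, mapping each URL to the records and links extracted from it:

```javascript
// Sketch of the fetch -> extract -> send -> enqueue loop described above.
function crawl(startUrl, pages) {
  const queue = [startUrl];          // the crawler's URL database
  const seen = new Set(queue);
  const indexed = [];                // records "sent to Algolia"

  while (queue.length > 0) {
    const url = queue.shift();
    const page = pages[url];         // 1. fetch the page
    if (!page) continue;
    indexed.push(...page.records);   // 2-3. extract records and send them
    for (const link of page.links) { // 4. add new links to the URL database
      if (!seen.has(link)) {
        seen.add(link);
        queue.push(link);
      }
    }
  }
  return indexed;
}
```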

The crawler URL database

When a crawl starts, your crawler adds all the URLs from your configuration's start parameters, such as startUrls and sitemaps, to its URL database.

Your crawler fetches each of these pages and follows the links it finds in any of the following formats:

  • head > link[rel=alternate]
  • a[href]
  • iframe[src]
  • area[href]
  • head > link[rel=canonical]
  • Redirect targets (when the HTTP status code is 301 or 302)

You can specify that some links should be ignored.
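For example, the exclusionPatterns configuration parameter takes glob patterns of URLs the crawler should never follow. The patterns below are placeholders:

```javascript
const config = {
  startUrls: ["https://www.example.com/"],
  exclusionPatterns: [
    "**/search?**", // skip internal search result pages
    "**/*.pdf",     // skip links to PDF files
  ],
};
```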

The record extractor

The recordExtractor parameter is a function that takes a page's metadata and HTML and returns an array of JSON objects. For example:

recordExtractor: ({ url, $, contentLength, fileType }) => {
  return [
    {
      url: url.href,
      title: $("head > title").text(),
      description: $("meta[name=description]").attr("content"),
      type: $('meta[property="og:type"]').attr("content"),
    },
  ];
}

recordExtractor properties

This function receives an object with several properties:

url, fileType, and contentLength provide useful metadata about the page you're crawling. However, to extract content from your pages, you must use the Cheerio instance ($), which gives you access to the page's parsed HTML.
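For instance, a recordExtractor can use fileType to skip non-HTML pages before querying the page with $. A minimal sketch, where the returned attribute names (byteSize, heading) are illustrative:

```javascript
// Illustrative recordExtractor: skip non-HTML files, then build one record.
const recordExtractor = ({ url, $, contentLength, fileType }) => {
  if (fileType !== "html") return []; // return no records for PDFs, images, etc.
  return [
    {
      url: url.href,                   // url behaves like a parsed Location object
      byteSize: contentLength,         // page size in bytes
      heading: $("h1").first().text(), // query the parsed HTML with Cheerio
    },
  ];
};
```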

recordExtractor return structure

The JSON objects returned by your recordExtractor are directly converted into records in your Algolia index.

Records can contain attributes of any type, as long as they're valid in an Algolia record.

However:

  • Each record must be less than 500 KB.
  • You can return a maximum of 200 records per crawled URL.
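To stay within these limits, a common approach is to split long pages into one record per section. A sketch, where sectionsToRecords and the record attributes are illustrative, not a Crawler API; in a real extractor the sections would come from querying the page with Cheerio:

```javascript
// Turn a page's sections into one small record each, capped at the
// per-URL record limit.
function sectionsToRecords(url, sections) {
  const MAX_RECORDS = 200; // Crawler limit per crawled URL
  return sections.slice(0, MAX_RECORDS).map((section, i) => ({
    objectID: `${url}#${i}`, // stable, unique ID per section
    url,
    title: section.title,
    content: section.text,
  }));
}
```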

Extract from JavaScript-based sites

You can use your crawler on JavaScript-based sites. To do this, set renderJavaScript to true in your crawler’s configuration.

Since setting renderJavaScript to true slows down crawling, you can enable it for only a subset of your site.
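In particular, renderJavaScript also accepts a list of URLs or URL patterns instead of a boolean, so only matching pages are rendered in a headless browser. A sketch with placeholder URLs:

```javascript
const config = {
  // Render only the app section with JavaScript;
  // crawl everything else as plain HTML.
  renderJavaScript: ["https://www.example.com/app/**"],
};
```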
