Is there a rate limit?
Algolia delays or rejects indexing operations whenever a server starts to become overloaded. These measures are taken proactively to avoid downtime caused by the overload. We call this mechanism the rate limit.
The delay or rejection continues until the server returns to a more manageable state.
The rate limit never slows down or otherwise impacts search operations.
If an indexing operation is rejected, the API call returns a 429 error with a message specifying the exact reason (too many jobs, job queue too large, old jobs in the queue, disk almost full).
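The per-reason messages above suggest a small client-side helper for classifying rejections. The sketch below is illustrative only: the response shape (a status code plus a JSON body with a `message` field) is an assumption, not Algolia's documented payload.

```python
def explain_rejection(status_code, body):
    """Map a rejected indexing response to one of the known rate-limit reasons.

    The `body` dict with a "message" field is a hypothetical response
    shape used for this sketch.
    """
    if status_code != 429:
        return None  # not a rate-limit rejection
    known_reasons = (
        "too many jobs",
        "job queue too large",
        "old jobs in the queue",
        "disk almost full",
    )
    message = body.get("message", "").lower()
    for reason in known_reasons:
        if reason in message:
            return reason
    return "unknown rate-limit reason"
```

A caller could log the returned reason and decide, for example, to pause batch uploads when the queue is the bottleneck.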
Only indexing operations are affected by the rate limit.
Algolia servers are designed to hold large amounts of data and to perform indexing and search operations quickly. It is therefore difficult, but not impossible, to reach the natural limits of a server.
To avoid downtime or delays, every Algolia server has an internal “rate limit” mechanism designed to stop the server from being overloaded with too many costly indexing operations.
Overloading occurs when the server can no longer handle indexing operations within a reasonable time frame. We monitor our servers to avoid the following scenarios:
- A client’s overall application size (the sum total of all index sizes) becomes too large
- Old requests remain unprocessed, indicating a backlog of indexing requests
- The indexing queue has too many unprocessed requests, or the total size of all queued requests is too big
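The three scenarios above can be modeled as a simple predicate. In the sketch below, the field names and threshold values are invented for illustration and do not reflect Algolia's real internals or limits.

```python
from dataclasses import dataclass

@dataclass
class ServerState:
    app_size_gb: float       # sum of all index sizes for the application
    oldest_job_age_s: float  # age of the oldest unprocessed indexing request
    queued_jobs: int         # number of unprocessed requests in the queue
    queue_size_gb: float     # total size of all queued requests

def is_overloaded(state: ServerState,
                  max_app_gb: float = 100.0,
                  max_job_age_s: float = 300.0,
                  max_jobs: int = 10_000,
                  max_queue_gb: float = 10.0) -> bool:
    """Return True if any of the overload scenarios applies.

    All thresholds are hypothetical placeholders for this sketch.
    """
    return (
        state.app_size_gb > max_app_gb          # application too large
        or state.oldest_job_age_s > max_job_age_s  # backlog of old requests
        or state.queued_jobs > max_jobs         # too many queued requests
        or state.queue_size_gb > max_queue_gb   # queued requests too big
    )
```

The key point is that any single condition is enough to trigger the rate limit; the checks are independent.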
If any of these scenarios occurs, we start to slow down incoming indexing operations, forcing the client to wait before sending new requests. This delay is mostly transparent. However, if the server remains overloaded, it starts rejecting indexing requests outright, returning a 429 (Too Many Requests) error.
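A common client-side response to this behavior is to retry rejected requests with exponential backoff. The sketch below assumes a hypothetical `send_batch` callable that performs one indexing request and returns an HTTP status code; it is not part of any Algolia client.

```python
import time

def index_with_backoff(send_batch, batch, max_retries=5, base_delay=1.0,
                       sleep=time.sleep):
    """Retry an indexing call while the server answers 429.

    `send_batch` and its return value are assumptions for this sketch;
    real API clients typically build in a similar retry strategy.
    """
    for attempt in range(max_retries):
        status = send_batch(batch)
        if status != 429:
            return status  # accepted (or failed for a non-rate-limit reason)
        # Wait 1s, 2s, 4s, ... before retrying, giving the server
        # time to drain its indexing queue.
        sleep(base_delay * (2 ** attempt))
    return 429  # still rate limited after all retries
```

Injecting the `sleep` function keeps the helper testable; in production the default `time.sleep` is used.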