Data, Record Size, and Usage Limits
Records can't exceed a maximum size. The limit depends on your plan: see our pricing page for details.
If you try to index a record that exceeds the limit, you'll get a "Record is too big" error. If your records are too large, we recommend a few techniques for breaking them up into smaller ones.
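One common approach is to split a large record into several smaller ones, each holding a chunk of the original content and a reference back to the parent. The sketch below illustrates the idea; the field names (`content`, `parentID`) and the byte threshold are illustrative assumptions, not Algolia-mandated values, so check your plan's actual limit.

```python
# Minimal sketch: split one oversized record into several smaller records.
# The 5000-byte threshold and field names are placeholders for illustration.

def split_record(record, max_bytes=5000):
    """Split a record's 'content' field into word-boundary chunks under max_bytes."""
    words = record["content"].split()
    chunks, current, size = [], [], 0
    for word in words:
        if size + len(word) + 1 > max_bytes and current:
            chunks.append(" ".join(current))
            current, size = [], 0
        current.append(word)
        size += len(word) + 1
    if current:
        chunks.append(" ".join(current))
    # Each chunk becomes its own record; the shared parentID lets you
    # deduplicate results at query time (e.g. with a "distinct" setting).
    return [
        {
            "objectID": f"{record['objectID']}-{i}",
            "parentID": record["objectID"],
            "content": chunk,
        }
        for i, chunk in enumerate(chunks)
    ]
```

Because every chunk carries the same `parentID`, you can group the split records back together in search results so users don't see the same document several times.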
You only need to worry about index size if your application runs on dedicated hardware (that is, your plan has the Enterprise add-on). There's no strict upper limit on an index's size, but we recommend keeping indices smaller than 102 GB. This represents 80% of the RAM capacity (128 GB) of the dedicated servers, leaving the remaining 20% to handle your indexing tasks. If an index exceeds the 128 GB capacity, performance degrades severely: data swaps back and forth between temporary and permanent memory, which is a costly operation.
There is no limit on the number of records an index can have, only on the memory capacity of the hardware.
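The 102 GB guideline follows directly from the 80% figure:

```python
ram_gb = 128          # RAM capacity of a dedicated server
index_budget = 0.80   # share of RAM left to the index itself

print(ram_gb * index_budget)  # 102.4 -> hence the ~102 GB recommendation
```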
Indexing usage limits
Maximum indexing operations
For billing, we count the number of indexing operations performed each month. If you exceed your plan's limit, we charge a fee for the extra operations, based on your plan's over-quota pricing.
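The over-quota model can be sketched as a simple calculation. The quota and rate below are placeholders, not actual Algolia prices; real figures depend on your plan.

```python
def overage_charge(operations, included, price_per_1000):
    """Fee for operations beyond the plan's monthly quota.
    `included` and `price_per_1000` are placeholder values for illustration;
    actual quotas and over-quota rates depend on your plan."""
    extra = max(0, operations - included)
    return (extra / 1000) * price_per_1000
```

For example, with a hypothetical quota of 100,000 monthly operations, performing 120,000 operations would be billed for the 20,000 extra at the plan's over-quota rate.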
Indexing rate limit
Algolia delays or rejects indexing operations whenever a server is overloaded. If Algolia determines that indexing operations can negatively impact search requests, it takes action to favor search over indexing. We call this the rate limit, which exists to protect the server’s search capacity.
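When an indexing call is delayed or rejected under load, a typical client strategy is to retry with exponential backoff rather than hammering the server. Here's a minimal, generic sketch: `send_batch` and `RateLimitedError` are hypothetical stand-ins, not part of Algolia's API (the official clients handle retries for you).

```python
import time

class RateLimitedError(Exception):
    """Hypothetical error raised when the server sheds indexing load."""

def index_with_backoff(send_batch, batch, max_retries=5, base_delay=1.0):
    """Retry an indexing call with exponential backoff when rate-limited.
    `send_batch` is a hypothetical callable, not an Algolia client method."""
    for attempt in range(max_retries):
        try:
            return send_batch(batch)
        except RateLimitedError:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

Backing off gives the server room to serve search traffic first, which is exactly what the rate limit is designed to protect.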
We count a search operation whenever you perform a search. In autocomplete and search-as-you-type implementations, this happens on every keystroke. If you query several indices on each keystroke, one keystroke triggers as many operations as queried indices, unless you use the multipleQueries method to send them together.
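A multi-index call sends one entry per index in a single request. The sketch below builds such a payload in the `{"indexName": ..., "query": ...}` shape that Algolia's multi-query endpoint expects; the index names are illustrative, and this only constructs the request body rather than sending it.

```python
def build_multi_query(indices, query):
    """Build the payload for one multi-index search request:
    one entry per index, all sent together in a single call."""
    return [{"indexName": name, "query": query} for name in indices]

# Example: one keystroke ("sho") searching two indices at once.
payload = build_multi_query(["products", "articles"], "sho")
```

Without batching, the same keystroke would mean one separate search per index; here both queries travel in a single request.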