Rate Limits

Rate limiting is imposed on each available API endpoint and returns the appropriate HTTP status codes (e.g., 429, 503) depending on the state of the system.

Two forms of rate limiting are currently supported:

  • Request rate limiting — the throttling of incoming requests on an API endpoint. When the requests-per-second limit is reached, the endpoint starts returning a 429 status code. The current limit is 20 requests per second per endpoint, with bursts allowed up to 60 requests per second.
  • Concurrent rate limiting — the throttling of incoming requests when the Coreapp itself is under heavy load (e.g., the job queue has grown too large). In this state, the endpoint returns a 503 status code.
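A common way to handle both status codes on the client side is to retry with exponential backoff. The sketch below is illustrative, not part of the API: `send_request` stands in for whatever function you use to call the endpoint, and the delay values are assumptions you should tune to your traffic.

```python
import random
import time


def send_with_backoff(send_request, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry a request while the API responds with 429 or 503.

    send_request: a zero-argument callable returning an HTTP status code.
    Delays grow exponentially (1s, 2s, 4s, ...) with a little jitter so
    that many clients do not retry in lockstep; gives up after max_retries.
    """
    for attempt in range(max_retries + 1):
        status = send_request()
        if status not in (429, 503):
            return status
        if attempt == max_retries:
            break
        delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
        sleep(delay)
    return status
```

The `sleep` parameter is injectable only to make the helper easy to test; in production the default `time.sleep` is fine.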

When you hit either of these rate limiting scenarios, halt further requests and slow the pace of your API calls. If you see frequent rate limit errors, it is advisable to build a queue on your end to throttle outgoing requests.
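One simple way to throttle on your end is a token bucket sized to the documented limits (20 requests per second sustained, bursts up to 60). This is a minimal sketch, not an official client: the class name and the clock-as-argument design are illustrative assumptions.

```python
class TokenBucket:
    """Client-side throttle: refill at `rate` tokens/sec, hold at most `burst`.

    Call allow(now) before each request; send only when it returns True,
    otherwise enqueue the request and try again later.
    """

    def __init__(self, rate=20.0, burst=60):
        self.rate = rate          # sustained requests per second
        self.burst = burst        # maximum burst size
        self.tokens = float(burst)
        self.last = 0.0           # timestamp of the previous allow() call

    def allow(self, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Passing the current time (e.g., `time.monotonic()`) into `allow` keeps the bucket deterministic and easy to test; a wrapper that sleeps until the next token is available turns this into a blocking queue.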

If your expected average send rates are significantly higher and queuing is not an effective strategy, please look into our High Availability and Scaling options.