Every day, the Singular API receives millions of requests. To manage this volume, limits are placed on the number of requests that one account can make. The Singular API also employs several safeguards against bursts of incoming traffic. If you send many requests in quick succession, or if the data you send exceeds your account limit, you might receive error responses with status code 429. These limits help to provide a reliable, scalable API.
The API has several limiters, including:
- Daily limiters, which limit the number of requests and the amount of data received by the API per day.
- Burst rate limiters, which limit the number of requests and the amount of data received by the API per minute.
The daily and burst rate limits depend on your subscription type. Rate limits apply to the following endpoints:
- POST / PATCH: https://app.singular.live/apiv2/controlapps/:appToken/control
- POST / PATCH: https://app.singular.live/apiv2/controlapps/:appToken/command
Requests to rate-limited endpoints return custom response headers. The header value is a JSON string with detailed information about daily limits and burst limits.
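For illustration, a client can parse such a header into a dictionary before deciding whether to slow down. The header name `X-Rate-Limit` and the payload shape below are assumptions for this sketch, not documented Singular values:

```python
import json

def parse_rate_limit_header(response_headers):
    """Parse a rate-limit header (hypothetical name 'X-Rate-Limit')
    whose value is a JSON string describing daily and burst limits."""
    raw = response_headers.get("X-Rate-Limit")
    if raw is None:
        return None
    return json.loads(raw)

# Example with a made-up header payload:
headers = {
    "X-Rate-Limit": '{"daily": {"limit": 10000, "remaining": 9421}, '
                    '"burst": {"limit": 120, "remaining": 37}}'
}
info = parse_rate_limit_header(headers)
print(info["burst"]["remaining"])  # → 37
```

Checking the `remaining` counters before each request lets the client back off proactively instead of waiting for a 429.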
Rate limit violations can occur under a variety of conditions, but they are most common in these scenarios:
- Running a large volume of closely spaced requests can trigger burst rate limiting. This often happens when developing or testing a data feed integration. When engaging in these activities, throttle the client-side request rate and data volume (see Handling Rate Limit Violations).
- Sending very large payloads to the API can also trigger burst rate limiting. This often happens when statistics data grows over time. In such a case, control the data volume on the client side and send only the dynamic data the API actually needs.
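One way to control the client-side request rate, as the scenarios above suggest, is a simple sliding-window throttle. This is an illustrative sketch; the limit values are placeholders, not actual Singular quotas:

```python
import time

class RequestThrottle:
    """Client-side throttle: allow at most max_requests per window_seconds.
    Substitute the limits of your own subscription for the placeholders."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.timestamps = []  # send times within the current window

    def wait_if_needed(self):
        """Block until sending one more request would stay within the limit."""
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        self.timestamps = [t for t in self.timestamps
                           if now - t < self.window_seconds]
        if len(self.timestamps) >= self.max_requests:
            # Sleep until the oldest request leaves the window.
            time.sleep(self.window_seconds - (now - self.timestamps[0]))
        self.timestamps.append(time.monotonic())

# Usage: call wait_if_needed() before each API request.
throttle = RequestThrottle(max_requests=120, window_seconds=60)
```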
Professional and enterprise subscriptions: Rate limit violations are logged, but API requests that exceed the limits are still processed.
A basic technique for integrations to handle rate limiting is to watch for 429 status codes and build in a retry mechanism. Your retry mechanism should follow an exponential backoff schedule to reduce request volume or increase the time between requests when necessary. Ideally, the client is aware of the existing rate limits and can pause requests until the currently exceeded window expires.
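The retry-with-exponential-backoff approach described above can be sketched as follows. The delay values and the `send_request` callable are illustrative assumptions, not Singular-mandated parameters:

```python
import random
import time

def request_with_backoff(send_request, max_retries=5, base_delay=1.0):
    """Retry a request on HTTP 429 using exponential backoff with jitter.

    `send_request` is any callable returning an object with a
    `status_code` attribute (e.g. a requests.Response).
    """
    for attempt in range(max_retries + 1):
        response = send_request()
        if response.status_code != 429 or attempt == max_retries:
            return response
        # Exponential backoff: base * 2^attempt, plus random jitter
        # so that many clients do not retry in lockstep.
        delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
        time.sleep(delay)
```

Jitter matters here: without it, every client that was throttled at the same moment retries at the same moment and trips the burst limiter again.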
We also recommend looking into the following techniques for handling rate limiting smoothly: