Co-authored by Ian Gil Ragudo
APIs are the backbone of modern software development, and as reliance on them grows, it's more important than ever to implement proper rate limiting. Rate limiting is a crucial technique for managing the flow of requests to an API and preventing providers from being overwhelmed by traffic. By capping the number of requests that can be made within a given time frame, rate limiting helps ensure that all users have fair and reliable access to the API's resources.
This post will focus on Atlassian app rate limit handling.
What happens when rate limiting is not handled?
Jira and Confluence Cloud limit the number of REST API requests to maintain service reliability and responsiveness for their users. This is done to ensure that apps can rely on these APIs without experiencing any issues related to performance or stability. Apps can detect rate limit responses by checking for an HTTP status code of 429, or a 5xx status code accompanied by a Retry-After header.
When an app fails to handle rate limit responses properly, it may retry the API call with little or no backoff. This can cause the call to be rate limited repeatedly, resulting in a cycle of retries until the maximum retry count is reached. Ultimately, the app may fail to load entirely, leading to a poor experience for your users. Reliability is extremely important to Atlassian customers, and errors can result in increased support volume or lost sales. It's therefore important for developers to handle rate limit responses effectively in order to avoid such issues.
Will my app experience rate limiting?
Any app built on the Atlassian platform that is calling REST APIs can experience rate limiting. With that in mind, it is best to be aware of where these REST API invocations are and handle them accordingly. The APIs can be invoked from different contexts:
- Frontend UI context: A user interaction (e.g., button press) synchronously triggers a REST API call
- Backend product event context: A product event (e.g., issue created event, or page created event) signals a REST API endpoint to be called; normally used for automation
- Backend external event context: A third-party app or service event is received by your app
- Backend processing context: A scheduled background job that triggers a call to an endpoint
Depending on the context, you may apply one or more of the following strategies found in the next section in order to gracefully handle rate limiting.
How to handle rate limiting?
By using APIs efficiently, implementing caching, handling rate limit errors gracefully, and subscribing to product events rather than polling, developers can build apps that handle rate limiting effectively and minimize the risk of the issues described above.
Here are some examples of how to handle rate limit responses:
- Utilize the Retry-After header: When the response contains a Retry-After header, use its value as the delay before sending the retry request. Here is some pseudocode showing how it can be handled.
let response = await fetch(request);
let retryDelayMillis = -1;
if (response.isOk()) {
  handleSuccess();
} else if (response.hasHeader('Retry-After')) {
  retryDelayMillis = 1000 * response.headerValue('Retry-After');
  delay(retryDelayMillis);
  retryRequest(request);
}
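The pseudocode above can be fleshed out into a runnable helper. The sketch below is illustrative, not an official Atlassian API: fetchWithRetryAfter, doFetch, and maxRetries are hypothetical names, and a real implementation would also cap delays and surface errors to the caller.

```javascript
// Parse the Retry-After header (seconds) into a delay in milliseconds.
// Returns -1 when the header is absent or not a number.
function retryAfterMillis(headers) {
  const value = headers.get('Retry-After');
  if (value === null) return -1;
  const seconds = Number(value);
  return Number.isFinite(seconds) ? seconds * 1000 : -1;
}

// Retry a request whenever the response advertises a Retry-After delay,
// waiting the advertised time between attempts, up to maxRetries retries.
async function fetchWithRetryAfter(doFetch, maxRetries = 3) {
  let response;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    response = await doFetch();
    const delayMillis = retryAfterMillis(response.headers);
    if (response.ok || delayMillis < 0 || attempt === maxRetries) {
      break; // success, non-retryable error, or retries exhausted
    }
    await new Promise((resolve) => setTimeout(resolve, delayMillis));
  }
  return response;
}
```

Because the loop returns immediately when no Retry-After header is present, non-rate-limit errors still fall through to your normal error handling.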
- Use a backoff algorithm in the absence of Retry-After: Some rate limit responses do not contain the recommended backoff value in a Retry-After header. In such cases, different algorithms can be used to compute a retry backoff delay. The pseudocode below shows an exponential backoff strategy with jitter.
let response = await fetch(request);
let retryDelayMillis = -1;
if (response.isOk()) {
  handleSuccess();
} else if (response.hasHeader('Retry-After')) {
  retryDelayMillis = 1000 * response.headerValue('Retry-After');
  delay(retryDelayMillis);
  retryRequest(request);
} else if (response.statusCode == 429) {
  let jitterInMillis = generateRandomInteger(1000, 10000);
  // retryCount is stored across requests as part of the retryData and can be
  // used to exponentially increase the back-off time across subsequent
  // rate limiting responses
  let retryCount = response.retryData.retryCount;
  retryDelayMillis = (1000 * (2^retryCount)) + jitterInMillis;
  delay(retryDelayMillis);
  retryCount++;
  retryRequest(request, retryCount);
}
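The backoff computation itself can be isolated into a small pure function. This is a sketch, not code from the Atlassian docs: it assumes a 1-second base delay, a random 1-10 second jitter, and a 60-second cap (the cap and the function name are added assumptions for illustration).

```javascript
// Exponential backoff with jitter: the base delay doubles on each retry,
// and a random 1-10 second jitter spreads concurrent clients apart so
// they do not all retry at the same instant.
function backoffDelayMillis(retryCount, baseMillis = 1000, capMillis = 60000) {
  // Cap the exponential term so delays stay bounded for large retry counts.
  const exponential = Math.min(capMillis, baseMillis * 2 ** retryCount);
  const jitterMillis = 1000 + Math.floor(Math.random() * 9000);
  return exponential + jitterMillis;
}
```

Keeping the computation separate from the retry loop makes it easy to unit test and to swap in another strategy (e.g., full jitter) later.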
- Disable the retry button: Some apps trigger API calls via UI interaction, e.g., a button press. In such cases, you can disable the UI component that triggers the retry, combined with an informational message telling the user they can manually retry after x amount of time has elapsed, where x is the delay from the Retry-After response header or from the backoff algorithm.
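As a sketch of this last strategy, the helper below disables a retry control and re-enables it once the delay has elapsed. lockRetryButton and the { disabled, label } button object are illustrative stand-ins, not a specific UI framework; in a real app this would be whatever state drives your component.

```javascript
// Lock a retry control for delayMillis, counting down in its label so the
// user knows when a manual retry becomes available again.
function lockRetryButton(button, delayMillis, tickMillis = 1000) {
  button.disabled = true;
  let remainingMillis = delayMillis;
  const timer = setInterval(() => {
    remainingMillis -= tickMillis;
    if (remainingMillis <= 0) {
      // Delay elapsed: stop ticking and re-enable the control.
      clearInterval(timer);
      button.disabled = false;
      button.label = 'Retry';
    } else {
      button.label = `Retry in ${Math.ceil(remainingMillis / 1000)}s`;
    }
  }, tickMillis);
}
```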
If you want to learn more about handling rate limits in the Jira and Confluence APIs, you can find more information in Atlassian's documentation. If you want to dig deeper and see how rate limiting can be handled using JavaScript, this blog post is a great resource: https://blog.developer.atlassian.com/handling-rate-limiting-in-javascript/