How to access Retry-After header in API response

Farrukh
Getting noticed


Rate Limit

  • The Dashboard API is limited to 5 requests per second, per organization.
  • A burst of 5 additional calls is allowed in the first second, so a maximum of 15 calls in the first 2 seconds.
  • The rate limiting technique is based on the token bucket model.
  • An error with a 429 status code will be returned when the rate limit has been exceeded.

RATE LIMIT ERRORS

If the defined rate limit is exceeded, the Dashboard API will reply with a 429 (rate limit exceeded) error code. This response also returns a Retry-After header indicating how long the client should wait before making a follow-up request.

  • The Retry-After header contains the number of seconds the client should delay. A simple example that minimizes rate limit errors:

 

import time
import requests

response = requests.request("GET", url, headers=headers)

if response.status_code == 200:
  # Success logic
  ...
elif response.status_code == 429:
  # Wait the number of seconds the API asks for, then retry
  time.sleep(int(response.headers["Retry-After"]))
  response = requests.request("GET", url, headers=headers)
else:
  # Handle other response codes
  ...

 

  • Expect to back off for 1 - 2 seconds if the limit has been exceeded, and potentially longer if a large number of requests were made within that timeframe.
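The snippet above sleeps once but doesn't retry the request. A minimal retry wrapper, sketched under the assumption that a missing Retry-After header means roughly a 1-second delay, might look like this (the function name and defaults are illustrative, not part of the Meraki docs):

```python
import time
import requests

def get_with_retry(url, headers=None, max_attempts=5):
    """GET a URL, honoring the Retry-After header on 429 responses."""
    for attempt in range(max_attempts):
        response = requests.get(url, headers=headers)
        if response.status_code != 429:
            return response
        # Fall back to a 1-second delay if the header is missing
        delay = int(response.headers.get("Retry-After", 1))
        time.sleep(delay)
    return response
```

Capping the attempts keeps a persistently rate-limited script from sleeping forever.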

 

Source:

https://documenter.getpostman.com/view/7928889/SVmsVg6K?version=latest

 

Another thing you should consider is using action batches. They let you bundle many changes into a single request, lowering the number of calls needed. Check this blog post for all the info on that:

https://meraki.cisco.com/blog/2019/06/action-batches-a-recipe-for-success/
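As a rough sketch of what that looks like in practice: an action batch is a single POST to the organization's actionBatches endpoint carrying a list of actions. The organization ID, device serial, and API key below are placeholders, and the exact payload fields should be checked against the Meraki Dashboard API docs:

```python
import requests

BASE_URL = "https://api.meraki.com/api/v1"

def build_action_batch(actions, confirmed=True, synchronous=False):
    """Bundle multiple Dashboard API actions into one batch payload."""
    return {"confirmed": confirmed, "synchronous": synchronous, "actions": actions}

def submit_action_batch(org_id, api_key, payload):
    """POST the batch: one API call instead of one call per change."""
    return requests.post(
        f"{BASE_URL}/organizations/{org_id}/actionBatches",
        headers={"X-Cisco-Meraki-API-Key": api_key},
        json=payload,
    )

# Example: update two switch ports in a single request
batch = build_action_batch([
    {"resource": "/devices/Q2XX-XXXX-XXXX/switch/ports/1",
     "operation": "update", "body": {"enabled": True}},
    {"resource": "/devices/Q2XX-XXXX-XXXX/switch/ports/2",
     "operation": "update", "body": {"enabled": False}},
])
```

Two port updates here cost one API call against the rate limit instead of two.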


2 Replies
NolanHerring
Kind of a big deal

Thanks for this !

I usually default to having a 1 sec or 2 sec timer since I'm not really in a rush, but still want it to be efficient.
Nolan Herring | nolanwifi.com
PhilipDAth
Kind of a big deal

Wait - you've managed to get a Python script to make more than 5 requests a second?  I find that Python is so slow that this is usually difficult to achieve - especially if you are doing some processing of the returned data.

 

I saw quite a good library somewhere (I think it was node.js, where this is a real issue because of its speed) that actually used a queue. The library submitted your requests to a queue, and then a separate process de-queued the requests at a rate of 5 per second and called you back with the return value.
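The queue idea described above can be sketched in Python too. This is a toy version, not that library: a single worker thread drains a queue at a fixed rate and hands each result to a callback. The class name and rate parameter are made up for illustration:

```python
import queue
import threading
import time

class RateLimitedQueue:
    """Dequeue submitted calls at a fixed rate and pass results to a callback."""

    def __init__(self, rate_per_second=5):
        self.interval = 1.0 / rate_per_second
        self.jobs = queue.Queue()
        worker = threading.Thread(target=self._run, daemon=True)
        worker.start()

    def submit(self, func, callback):
        """Queue a zero-argument callable; callback receives its return value."""
        self.jobs.put((func, callback))

    def _run(self):
        while True:
            func, callback = self.jobs.get()
            callback(func())           # run the request, hand back the result
            time.sleep(self.interval)  # enforce the per-second rate
```

Because a single worker serializes the calls, bursts from many threads still go out at no more than the configured rate.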

Very nice.

 

And then someone else in your company starts running a script eating into your 5 API calls per second and you are screwed again.
