Crossing API Rate Limit & charges

SOLVED
FlyingFrames
Getting noticed


If API calls exceed the free tier of 10 calls per second per organization, is there any metering and charging in place?

 

https://developer.cisco.com/meraki/api-v1/#!rate-limit

 

  • Each Meraki organization has a call budget of 10 requests per second.
  • A burst of 10 additional requests is allowed in the first second, so a maximum of 30 requests in the first 2 seconds.
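
To stay inside that budget, a script can pace itself before the limit is ever hit. Below is a minimal token-bucket-style sketch; the 10/s rate and 10-call burst mirror the numbers quoted above, everything else (class and method names) is purely illustrative.

```python
import time


class CallBudget:
    """Client-side pacer mirroring the documented budget: ~10 calls/s plus a small burst."""

    def __init__(self, rate=10, burst=10):
        self.rate = rate                 # sustained requests per second (per org)
        self.capacity = rate + burst     # allow the documented extra burst
        self.tokens = self.capacity
        self.last = time.monotonic()

    def acquire(self):
        # Refill tokens based on elapsed time, then wait if the bucket is empty.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens < 1:
            time.sleep((1 - self.tokens) / self.rate)
            self.last = time.monotonic()
            self.tokens = 1
        self.tokens -= 1


budget = CallBudget()
# budget.acquire()  # call before each API request to stay inside the rate limit
```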
1 ACCEPTED SOLUTION
sungod
Head in the Cloud

As above, there's no charge.

 

What happens is that 'excess' calls return HTTP status 429, indicating the call failed due to the rate limit.

 

Any application that might make calls frequently enough to hit the rate limit needs to be coded to back off and retry appropriately.
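
A minimal sketch of that back-off-and-retry pattern using plain requests; the API key and endpoint are placeholders, and the Retry-After handling assumes that header is present on 429 responses.

```python
import time

import requests

API_KEY = "your-api-key"  # placeholder
URL = "https://api.meraki.com/api/v1/organizations"  # example endpoint


def get_with_retry(url, max_attempts=5):
    """GET that backs off and retries when the rate limit returns HTTP 429."""
    delay = 1
    for attempt in range(max_attempts):
        resp = requests.get(url, headers={"Authorization": f"Bearer {API_KEY}"})
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        # Prefer the server's Retry-After hint if present, otherwise back off exponentially.
        wait = int(resp.headers.get("Retry-After", delay))
        time.sleep(wait)
        delay = min(delay * 2, 30)  # cap the exponential back-off
    raise RuntimeError(f"Still rate limited after {max_attempts} attempts")


# organizations = get_with_retry(URL)
```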

 

If you use the Python Meraki library, it includes handling of rate limiting, though it's not always sufficient (in particular, when using the aio library).
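
For reference, a minimal sketch of using the official library with its built-in retry handling; the parameter names (wait_on_rate_limit, maximum_retries, suppress_logging) are my understanding of the constructor and may vary by library version.

```python
import meraki

API_KEY = "your-api-key"  # placeholder

# The constructor exposes retry-related options; names may vary by library version.
dashboard = meraki.DashboardAPI(
    api_key=API_KEY,
    wait_on_rate_limit=True,   # sleep and retry automatically on HTTP 429
    maximum_retries=5,         # give up after this many rate-limited retries
    suppress_logging=True,
)

orgs = dashboard.organizations.getOrganizations()
```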

 


4 REPLIES
Bruce
Kind of a big deal

@FlyingFrames, no, there isn't (well, not that I've ever discovered). Everyone is treated equally with regard to API calls - one free tier for everyone.


PhilipDAth
Kind of a big deal

The aio portion of the Python library is aware of 429s and will back off and retry automatically.

sungod
Head in the Cloud

I've found it still breaks down under load, such as when calls are nested to get per-device (or other per-element) data for hundreds of networks in 'one' aio call; calls will then intermittently fail with the error "429 Too Many Requests".

 

To handle that, inside the sub-functions there's a capped exponential backoff and retry if a failure occurs.
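
A rough sketch of that pattern, assuming the meraki.aio AsyncDashboardAPI; the network IDs are hypothetical, the broad exception catch stands in for the library's specific error classes, and the capped back-off sits inside the per-network sub-function as described.

```python
import asyncio
import random

from meraki.aio import AsyncDashboardAPI

API_KEY = "your-api-key"  # placeholder


async def get_network_devices(aiomeraki, network_id, max_attempts=6):
    """Per-network sub-function with its own capped exponential back-off and retry."""
    delay = 1
    for attempt in range(max_attempts):
        try:
            return await aiomeraki.networks.getNetworkDevices(network_id)
        except Exception:  # stand-in for the library's 429/transient error classes
            if attempt == max_attempts - 1:
                raise
            await asyncio.sleep(delay + random.random())  # jitter spreads out retries
            delay = min(delay * 2, 30)  # cap the exponential back-off


async def main(network_ids):
    async with AsyncDashboardAPI(API_KEY, suppress_logging=True) as aiomeraki:
        # Fan out one task per network, i.e. "hundreds of networks in 'one' aio call".
        return await asyncio.gather(
            *(get_network_devices(aiomeraki, nid) for nid in network_ids)
        )


# asyncio.run(main(["N_123", "N_456"]))  # hypothetical network IDs
```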

 

This issue was certainly present in v0 and in beta/early v1; perhaps the library code has improved since then? But I've not tried running without the extra error recovery, as it also helps gracefully recover from other transient errors that can occur.

 

This is used in scripts that pull various performance data daily for analysis across the large organizations we manage, so it's important that they operate without human intervention to recover or re-run. Any time a 'new' error is found, I try to add code to avoid or recover from it.

 
