I need some help resolving a "429 Too Many Requests" error while fetching values from the Meraki API. I know there is an API rate limit of 10 calls per second per organization, but we have multiple environments for our product lifecycle that compound the issue. I have also tried increasing the number of retries and the waiting time, but no luck. Are there other ways to resolve this?
We are looking at the following options:
1. Can we fetch the "lldpCdp" value for all devices with a single API call? Right now we can only fetch it for one device at a time, using its serial number.
2. Can we use webhooks to work around this throttling behaviour? If so, what's the procedure?
3. Any other way...
Any advice would be highly appreciated. Thanks in advance.
Are you using the Meraki Python library? It will automatically rate limit and retry 429 requests.
Otherwise you'll need to look at rate limiting your requests.
Thanks for the reply. Unfortunately, we are not using the Python library; we call the Meraki Dashboard API (latest version) directly. Can webhooks resolve this throttling issue? Or can we fetch the "lldpCdp" value for all devices with a single API call?
The rate limit is set by Meraki, you cannot change it.
As @PhilipDAth says, you need to add code to detect 429 responses, back off, and retry.
I add extra back-off and retry to pretty much every API call that fetches data; it's an extremely effective method.
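The detect/back-off/retry pattern described above can be sketched in Python without any particular SDK. Everything here is illustrative: `send` is a hypothetical callable standing in for your HTTP layer, returning `(status, body, retry_after)` for one request.

```python
import time

def call_with_retry(send, max_retries=5, base_delay=1.0):
    """Call send() until it returns a non-429 status, or retries run out.

    Honours a Retry-After hint when the response provides one, otherwise
    waits base_delay and doubles the wait on each subsequent attempt.
    send() is assumed to return a (status, body, retry_after) tuple.
    """
    delay = base_delay
    for _ in range(max_retries + 1):
        status, body, retry_after = send()
        if status != 429:
            return status, body
        # Prefer the server's hint; fall back to our exponential curve.
        time.sleep(retry_after if retry_after is not None else delay)
        delay *= 2
    return status, body  # still 429 after all retries; let the caller decide
```

The injectable `send` also makes the wrapper easy to unit-test by substituting a fake that returns 429 a few times before succeeding.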
Thanks for the response. Will this retry method work across four different environments (Dev, Test, UAT and Prod) that all talk to the same API and need to complete in a time-sensitive manner?
It's running across multiple organizations, some with hundreds of networks and thousands of devices.
Using the Python library and async IO (for daily data collection), hundreds of API calls can be made concurrently and the rate limit is far exceeded, but with back-off and retry it's not an issue.
To avoid hammering the service, the back-off time increases exponentially. It doesn't matter if daily data collection takes an hour instead of a minute; it's a simple way to spread the load.
This is overlaid on the built-in retry within the Python library, because it let me take the context/volume of each call into account. The built-in retry sometimes wasn't coping, so I chose to add more aggressive back-off behaviour rather than just re-running calls with the native retry mechanism.
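The exponentially increasing back-off curve mentioned above can be captured in a small helper. The base, cap, and jitter choices here are illustrative assumptions, not values prescribed by Meraki; jitter is a common addition to stop many clients retrying in lockstep.

```python
import random

def backoff_delay(attempt, base=1.0, cap=300.0, jitter=True):
    """Seconds to wait before retry number `attempt` (0-based).

    Doubles the delay each attempt (base * 2**attempt), capped so a long
    outage doesn't produce absurd waits. With jitter=True the actual wait
    is drawn uniformly from [0, delay] to de-synchronise clients.
    """
    delay = min(cap, base * (2 ** attempt))
    return random.uniform(0, delay) if jitter else delay
```

With these defaults, attempts wait roughly 1 s, 2 s, 4 s, 8 s, ... up to a five-minute ceiling, which is one way to let a daily collection job stretch out instead of failing.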
If you are making real-time changes one call at a time, rather than pulling data, perhaps investigate a combination of self rate limiting and back-off/retry. I would also look at whether action batches are a better fit: https://developer.cisco.com/meraki/api-v1/#!overview/action-batches
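One simple form of the self rate limiting mentioned above is to pace calls so they stay under the documented 10-per-second budget. This is a generic pacer, not anything Meraki-specific; the injectable `clock`/`sleep` parameters are only there to make the sketch testable.

```python
import time

class RateLimiter:
    """Spaces calls so at most `rate` happen per second.

    This is client-side pacing only; it complements, but does not
    replace, handling 429 responses with back-off and retry.
    """
    def __init__(self, rate=10.0, clock=time.monotonic, sleep=time.sleep):
        self.interval = 1.0 / rate
        self.clock = clock
        self.sleep = sleep
        self._next = clock()  # earliest time the next call may start

    def wait(self):
        """Block until the next call is allowed, then reserve a slot."""
        now = self.clock()
        if now < self._next:
            self.sleep(self._next - now)
            now = self._next
        self._next = now + self.interval
```

Calling `limiter.wait()` immediately before each API request keeps a single process under the budget; with four environments sharing one organization, you would need to split the budget between them (e.g. `rate=2.5` each) or coordinate centrally.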
Thanks for the detailed reply. Unfortunately, it's not possible for us to use the Python library in our project. However, we are investigating action batches. Can we use them to fetch the LLDP/CDP information of several devices at once via the Dashboard API? https://developer.cisco.com/meraki/api-v1/#!get-device-lldp-cdp
You don't have to use the Python library, you can write the back-off and retry in whichever language you are using.
Action batches are for add/update/delete operations, not query.
Are you obeying the Retry-After time in the 429 response in your current code?
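For reference, honouring Retry-After can be as simple as reading the header before sleeping. This sketch handles only the delay-in-seconds form of the header (it can also carry an HTTP-date, which is not handled here), and the one-second default is an arbitrary illustrative choice.

```python
def retry_after_seconds(headers, default=1.0):
    """Return the Retry-After delay from a 429 response's headers.

    Only parses the "seconds" form of the header; falls back to
    `default` when the header is missing or not a plain number.
    """
    value = headers.get("Retry-After")
    try:
        return max(0.0, float(value))
    except (TypeError, ValueError):
        return default
```

Whatever language your project uses, the equivalent of this check is the first thing to wire into the retry loop, since the server is telling you exactly how long to wait.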
It's for write operations; the list of things you can do is here...