rate limiting not working

RaresPauna
Getting noticed


    async with meraki.aio.AsyncDashboardAPI(
        api_key='xyz',
        output_log=False,
        print_console=True,
        suppress_logging=True,
        wait_on_rate_limit=True,
        maximum_retries=100,
        maximum_concurrent_requests=5
    ) as aiomeraki:
        org_vlan_tasks = [asyncio.create_task(process_organization_vlans(aiomeraki, org, vlan_std_keys, severities)) for org in organizations]
        await asyncio.gather(*org_vlan_tasks)

    for network in organization['networks']:
        tasks.append(asyncio.create_task(process_organization_vlans(aiomeraki, network, vlan_std_keys, organization, severities)))

The smallest function in the chain uses the initialized meraki.aio dashboard session to get VLANs. I get multiple 429 errors and I don't understand why, because I set maximum_concurrent_requests to half the threshold and I am the only one making API calls against this organization.
 
6 Replies
RaresPauna
Getting noticed

Looking at the logs, I see more than 5 calls per second; I can't see where the issue is hiding.

sungod
Kind of a big deal

When I've tried maximum_concurrent_requests it didn't seem to work.
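If the library's own cap isn't behaving, one workaround is to cap in-flight calls yourself with an asyncio.Semaphore. This is a minimal sketch, not Meraki API code: fake_api_call is a hypothetical stand-in for a real dashboard call (e.g. process_organization_vlans), and the counters exist only to show the cap holds.

```python
import asyncio

CONCURRENCY_LIMIT = 5   # cap on simultaneous calls

in_flight = 0           # instrumentation only: current concurrent calls
max_in_flight = 0       # instrumentation only: highest concurrency observed

async def limited(semaphore, coro_fn, *args):
    """Run coro_fn(*args), but never more than CONCURRENCY_LIMIT at once."""
    global in_flight, max_in_flight
    async with semaphore:
        in_flight += 1
        max_in_flight = max(max_in_flight, in_flight)
        try:
            return await coro_fn(*args)
        finally:
            in_flight -= 1

async def fake_api_call(org_id):
    # hypothetical stand-in for a real dashboard call
    await asyncio.sleep(0.01)
    return org_id

async def main():
    semaphore = asyncio.Semaphore(CONCURRENCY_LIMIT)
    tasks = [asyncio.create_task(limited(semaphore, fake_api_call, i)) for i in range(50)]
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
```

Wrapping every task in limited() this way guarantees the cap regardless of what the SDK's own setting does.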

 

Don't worry about 429s, use wait/retry

 

Of course, if there are too many retries the Python library call will eventually fail (with a 429), so you just add an extra delay and then try it again.

 

This works for me across many organizations, some with hundreds of networks and thousands of devices, i.e. a lot of concurrent calls, and of course a lot of 429s, which are not a problem as they are handled as above.

RaresPauna
Getting noticed

Thank you @sungod. Can you show me an example of the wait/retry mechanism?

sungod
Kind of a big deal

It's built into the Meraki Python library; you can see it in the GitHub repository.

 

If you mean the additional one I sometimes add to scripts that can generate lots of calls: put the library call inside a loop, then use try/except to detect a 429 (i.e. the internal wait/retry in the library call has had too many retries). If there's a 429, wait a while, then continue to restart the loop; if all went well, exit the loop.

 

I use an increasing delay inside the loop: ten seconds the first time, increasing exponentially to about 5 minutes. After that, give up and exit, as there's either a fault or someone else is constantly hitting the API and using up the call budget.
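The loop described above can be sketched like this. The names flaky_call, TooManyRequests, and MAX_WAIT are illustrative stand-ins, not the Meraki SDK: in real code the exception would be meraki.AsyncAPIError around an awaited SDK call, and the sleep would use the full wait.

```python
import asyncio

MAX_WAIT = 300  # give up once the delay would exceed ~5 minutes

class TooManyRequests(Exception):
    """Stand-in for a 429 surfaced after the library's internal retries."""

async def flaky_call(state):
    # stand-in for a dashboard call: fails twice with a 429, then succeeds
    state['calls'] += 1
    if state['calls'] <= 2:
        raise TooManyRequests("429 Too Many Requests")
    return "ok"

async def call_with_backoff(state):
    wait = 10                          # ten seconds the first time
    while True:
        try:
            return await flaky_call(state)
        except TooManyRequests:
            if wait > MAX_WAIT:
                raise                  # fault, or someone else is using the budget
            state['waits'].append(wait)
            # scaled down so the demo finishes quickly; use the full wait in real code
            await asyncio.sleep(wait * 0.001)
            wait *= 2                  # exponential backoff

state = {'calls': 0, 'waits': []}
result = asyncio.run(call_with_backoff(state))
```

The delay schedule is 10, 20, 40, ... seconds, doubling each time until the cap, which matches the ten-seconds-to-five-minutes progression described above.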

RaresPauna
Getting noticed

There is one thing I don't understand. How do you use try/except to detect a 429? The AsyncRestSession code for 429 is this:
                elif status == 429:
                    wait = 0
                    if "Retry-After" in response.headers:
                        wait = int(response.headers["Retry-After"])
                    else:
                        wait = random.randint(1, self._nginx_429_retry_wait_time)
                    if self._logger:
                        self._logger.warning(
                            f"{tag}, {operation} > {abs_url} - {status} {reason}, retrying in {wait} seconds"
                        )
                    await asyncio.sleep(wait)

So it is not raising any exception, therefore you can't catch it in your functions.
sungod
Kind of a big deal

This detects the 429. It's from a four-year-old snippet of code, but it's still running OK...

        try:
            # get wireless connection stats for this network's SSID
            connectstats = await aiomeraki.wireless.getNetworkWirelessDevicesConnectionStats(net['id'], t0=starttime, t1=endtime, ssid=netssid['number'])
        except meraki.AsyncAPIError as e:
            if "429 Too Many Requests" in str(e):
                # print(f'stats had a 429', file=sys.stderr)
                await asyncio.sleep(10 * attempt)
                continue
            print(f'mrssids stats Meraki API error: {e}', file=sys.stderr)
            return
        except Exception as e:
            # print(f'mrssids stats some other error: {e}', file=sys.stderr)
            continue

 
