It's running across multiple organizations, some with hundreds of networks and thousands of devices.
Using the Python library with async IO for daily data collection can result in hundreds of API calls being made concurrently. The rate limit is far exceeded, but with back-off and retry that isn't an issue.
To avoid beating on the service, the back-off delay increases exponentially with each retry. It doesn't matter if daily data collection takes an hour instead of a minute; it's a simple way to spread the load.
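That exponential back-off can be sketched as a small async wrapper. This is an illustration, not the Meraki library's own retry code: the function name, retry counts, and delay values are all arbitrary choices, and in practice you would catch the library's specific rate-limit exception rather than a bare Exception.

```python
import asyncio
import random

async def with_backoff(coro_factory, max_retries=8, base_delay=1.0, cap=120.0):
    """Retry an async call with exponentially growing, jittered delays.

    coro_factory is a zero-argument callable that returns a fresh coroutine
    each attempt. All names and limits here are illustrative.
    """
    for attempt in range(max_retries):
        try:
            return await coro_factory()
        except Exception:  # in practice: the library's rate-limit error
            if attempt == max_retries - 1:
                raise
            # Delay doubles each attempt (1, 2, 4, 8... seconds), capped,
            # with jitter so concurrent callers don't retry in lockstep.
            delay = min(cap, base_delay * 2 ** attempt)
            await asyncio.sleep(delay + random.uniform(0, delay / 2))
```

Each concurrent collection task goes through a wrapper like this, so a burst that trips the rate limit simply smears itself out over time instead of failing.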
This is overlaid on the built-in retry within the Python library, as it allowed taking into account the context and volume associated with each call. The built-in retry sometimes wasn't coping, so I chose to add more extreme back-off behaviour on top rather than just re-running calls with the native retry mechanism.
If you are making real-time changes one call at a time, rather than pulling data, perhaps investigate a combination of self rate limiting and back-off/retry. I would also look at whether action batches are a better fit: https://developer.cisco.com/meraki/api-v1/#!overview/action-batches
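The self rate limiting half of that suggestion can be as simple as capping how many calls are in flight at once with a semaphore, so you stay under the limit instead of bouncing off it. A minimal sketch (the function name and the concurrency cap of 5 are illustrative choices, not Meraki constants):

```python
import asyncio

async def fetch_all(coro_factories, max_concurrent=5):
    """Run many async API calls, but never more than max_concurrent at once.

    coro_factories is a list of zero-argument callables, each returning a
    fresh coroutine. Illustrative sketch, not the Meraki library's API.
    """
    sem = asyncio.Semaphore(max_concurrent)

    async def limited(factory):
        # The semaphore blocks the (max_concurrent + 1)th call until a
        # slot frees up, spreading requests out over time.
        async with sem:
            return await factory()

    return await asyncio.gather(*(limited(f) for f in coro_factories))
```

This combines naturally with the back-off wrapper above it in the pipeline: the semaphore keeps the steady-state request rate polite, and back-off/retry handles the occasional 429 that still slips through.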