I have been doing some scripting using meraki.py, which is fine except for one small issue.
When I make an API call (via meraki.py) it would be useful to be able to evaluate any status codes, especially errors, and I'm putting 429 (rate limit) errors at the top of the list.
This ability may already be in there, but I am not a Python person and I'm missing the obvious; at the moment I can only check for an empty return value (which is not great).
I know that I could roll my own API requests via the requests library (and have easy access to the status codes) but I dislike reinventing the wheel.
Recently I have been hit with a lot of rate-limit issues when other teams run less well-behaved scripts anywhere within the org <sigh>
As these scripts can take over an hour to run through a particular organisation (over 1,800 networks), I have made sure that I am well behaved regarding call limits. But during these long runs I am at the mercy of other teams who are not so considerate, and I have had a 90-minute run terminated during the last few sites.
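For anyone rolling their own calls in the meantime, one way to survive those 429s mid-run is to retry, honouring the Retry-After header the Dashboard sends back on a rate-limit response. A minimal sketch (this is not part of meraki.py; the callable shape and retry counts are just illustrative):

```python
import time

def call_with_backoff(make_request, max_retries=5, default_wait=1.0):
    """Retry a Dashboard-style API call when it returns HTTP 429.

    make_request is any zero-argument callable that returns a
    requests-style response (something with .status_code and .headers).
    Honours the Retry-After header when the server provides one.
    """
    for _ in range(max_retries):
        response = make_request()
        if response.status_code != 429:
            return response
        # Meraki includes a Retry-After value (in seconds) on 429 responses
        wait = float(response.headers.get("Retry-After", default_wait))
        time.sleep(wait)
    return response  # still 429 after all retries; let the caller decide
```

Wrapping the raw requests call in a lambda keeps the retry logic reusable across endpoints without touching the library itself.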
Just a quick thought on that rate limiting. Is anyone leveraging action batches?
>Just a quick thought on that rate limiting. Is anyone leveraging action batches?
With regard to the request limit: I occasionally insert a sleep timer between function calls to ensure I can't go over the rate limit.
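That sleep-timer idea can be wrapped up in a small helper so you don't have to sprinkle sleep() calls through the script. A rough sketch (the 4 requests/sec default is just a safety margin under the documented 5/sec limit, not anything official):

```python
import time

class Throttle:
    """Space out calls so they never exceed per_second requests/sec."""

    def __init__(self, per_second=4):  # stay a little under Meraki's 5/sec
        self.min_interval = 1.0 / per_second
        self.last = 0.0

    def wait(self):
        """Sleep just long enough to respect the configured rate."""
        now = time.monotonic()
        gap = now - self.last
        if gap < self.min_interval:
            time.sleep(self.min_interval - gap)
        self.last = time.monotonic()
```

Usage is one line before each API call: `throttle.wait()` — it does nothing when the script is already running slower than the limit.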
Personally - I find it very hard to get to 5 requests per second to the Meraki Dashboard with a Python script.
Agreed. In all of the scripts I have written to do a number of different things, I don't think I have ever exceeded the rate limit.
@PhilipDAth Hah, me either, but I don't know what other people need. If someone's got a job that takes 90 min and can be canceled due to other people's use rates, it might be a candidate for batching.
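For anyone curious what an action batch actually looks like: it is a single POST whose body bundles many changes, so the whole batch counts as one request against the limit. A sketch of the payload shape (the serial and port number are made up for illustration; the endpoint is POST /organizations/{orgId}/actionBatches per the Dashboard API docs):

```python
import json

# Illustrative only: serial, port, and the single action are placeholders.
# An asynchronous batch accepts up to 100 actions in one POST.
batch = {
    "confirmed": True,     # execute immediately instead of staging
    "synchronous": False,  # async batches allow more actions per request
    "actions": [
        {
            "resource": "/devices/QQQQ-WWWW-EEEE/switchPorts/3",
            "operation": "update",
            "body": {"enabled": True},
        },
    ],
}
payload = json.dumps(batch)  # request body for the actionBatches endpoint
```

So a provisioning job that would otherwise burn 100 calls can be collapsed into one, which is exactly the kind of pressure relief the long-running scripts here would benefit from.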
I had a think and I have a solution that works for me, and there is a way of adding it to meraki.py without breaking the wider world's script libraries.
As Python can easily return multiple values from a function, calls in meraki.py can be modified from "return result" to "return result, dashboard.status_code".
So a call to get_device_statuses() would become:
statusList, callError = meraki.get_device_statuses(my_key, my_org, True)
with the HTTP response code found in callError (i.e. 200 for a successful call).
I have tested the additional status code value with a modified meraki.py library and it works well.
For wider usage (i.e. not just me), breaking existing scripts could be avoided by adding an optional parameter, similar to how suppress_print operates.
With an implicit default of False, the function would return only the traditional single value.
An additional True parameter would return the optional status_code value as well.
meraki.get_device_statuses(my_key, my_org) # Returns regular data without suppressing the text output
meraki.get_device_statuses(my_key, my_org, True) # Suppresses the text output and returns traditional values
meraki.get_device_statuses(my_key, my_org, True, True) # Suppresses text output and returns both traditional data and the HTTP status code
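To make the backward-compatible idea concrete, here is a sketch of what the modified function could look like. Everything below is illustrative, not the real meraki.py: the _call_api placeholder and fake response stand in for the library's actual requests call so the sketch runs on its own.

```python
class _FakeResponse:
    """Stand-in for requests.Response so this sketch runs offline."""

    def __init__(self, status_code, payload=None):
        self.status_code = status_code
        self._payload = payload

    def json(self):
        return self._payload

def _call_api(api_key, org_id):
    # Placeholder: the real library would issue requests.get(...) here.
    return _FakeResponse(200, [{"serial": "QQQQ-WWWW-EEEE", "status": "online"}])

def get_device_statuses(api_key, org_id, suppress_print=False,
                        return_status_code=False):
    response = _call_api(api_key, org_id)
    result = response.json() if response.status_code == 200 else None
    if result is None and not suppress_print:
        print("Call failed with status", response.status_code)
    if return_status_code:
        return result, response.status_code  # new, opt-in second value
    return result  # existing scripts see no change at all
```

Because the new parameter defaults to False, every script written against the old single-value signature keeps working untouched.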
I'm unable to exceed the 5-requests-per-second limit on my own, but I suspect it is a combination of the monitoring team running a script and a provisioning team running multiple instances of a script to make changes.
I get very little time to do any scripting (despite people wanting the output from it), so having to start recreating bits of meraki.py would be really irritating and get in the way of what I'm supposed to be doing of a day 😞
That's what I thought would be happening too. That's why I wonder if your organization could come together and find ways to implement action batches where possible. I know it takes additional time, and that time is hard to find upfront, but you can't be the only person having rate limiting problems.
I agree that it would be nice if meraki.py could be extended as you describe, but I suspect that the SDK is how they're moving forward on providing ready-made Dashboard API access. Just a gut feeling. I can't confirm that, but maybe @MeredithW or @CarolineS could find out for us?
It seems like returning the result code as well as the JSON data (if any) would be the standard definition for these functions. It gives you so much more flexibility as well as some kind of indication of what went wrong, if there was an error. If I was designing this SDK, I would return the data and the result code. Even if most people never used it, it would be there for those that needed it.
Issues like this are one of the reasons I do not use the Python SDK. I have my own error checking/handling function that I use in my scripts, and it allows me to capture and print out errors that I encounter and, in the case of a few scripts, to trap certain errors for special handling. Now, this comes at the price of having to write my own functions to make the API calls via the requests library, but I started out that way, so I don't even think about it now.
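As an illustration of that kind of shared error-handling function (which codes get special treatment and the message wording are just examples, not the poster's actual code):

```python
def check_response(response, description):
    """Shared error handler: return parsed JSON on success, None otherwise.

    429 is singled out so callers can back off; the status-code meanings
    follow the HTTP spec, but the handling choices here are illustrative.
    """
    if response.status_code == 200:
        return response.json()
    if response.status_code == 429:
        print(f"{description}: rate limited (429); consider backing off")
    elif response.status_code == 404:
        print(f"{description}: not found (404); check the URL and IDs")
    else:
        print(f"{description}: HTTP {response.status_code}")
    return None
```

Every endpoint function then funnels its response through the one helper, so trapping a new error for special handling is a single edit.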
I also like the ability to have much greater control over the data and much greater access. This came in handy when dealing with API calls that require you to specify the "perPage" parameter. How do you get the additional data if it goes beyond the page size? Well, you can get the information you need to specify a "startingAfter" value in additional API calls by looking in the "Link" field of the HTTP header. That whole header as well as the JSON data that you requested in the API call is part of the returned data from the requests library. Now, the SDK may pull all that data together and give it to you in a neat package when you make a call, but if not, I'm not sure how you would deal with getting the additional data.
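On the pagination point, requests actually parses the Link header into response.links, so following rel="next" is only a few lines. A sketch (pass in requests.get, or a Session's .get, as the `get` argument; the Dashboard puts the startingAfter value into the "next" URL for you):

```python
def get_all_pages(url, headers, get, params=None):
    """Collect every page of results by following the Link header.

    `get` is requests.get (or a Session's .get).  requests parses the
    Link header into response.links, and the rel="next" URL already
    carries the startingAfter/perPage query string.
    """
    results = []
    while url:
        response = get(url, headers=headers, params=params)
        response.raise_for_status()
        results.extend(response.json())
        url = response.links.get("next", {}).get("url")
        params = None  # the next URL brings its own query string
    return results
```

When there is no rel="next" entry, the loop ends with the last page already collected, so the caller never touches the header parsing at all.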
I'm honestly with @CBurkhead here, by the way. I started writing scripts before the SDK came out, so I'm a) very comfortable with the requests library at this point and b) prefer to explicitly specify how I do my errors in a lot of cases. I'm not really re-inventing the wheel that much when I'm just copying-pasting a standard call, changing the parameters, and updating the URL.
I may be more comfortable with Python, though. I studied a heap of programming stuff before going net eng.
Exactly. My error handling function is something I copy into a new script and I almost never have to change it. I have standard functions for getting the organization list, network list, devices in a network, etc. If I need to use a new endpoint, most of the time I just have to copy an existing function, change the endpoint URL, change the error message that gets printed if one occurs, and change my comments before the function describing what it does.
I am probably more comfortable in Python than many, also. I have been using the API to learn Python, but I have previous coding experience in BASIC, Pascal, C, Perl, and a few other specialized languages.