Efficiency of loading response in a web page
Hello guys. I am trying to make a web application and I need to display each template from an organization, plus the networks that are not bound to a template. I developed this script:
If you are pulling large amounts of data/making many API calls to render a web page, you will always have too much delay, especially once you factor in rate limiting, the need for retries, etc.
Does the web page really need to reflect real-time info on that scale?
I'd pull the data once a day (or whatever interval fits), using asyncio to minimise update time, cache it locally, and use that cached data for the web page.
Thanks a lot for the answer. You are right, I don't need real-time info; if it fetches it once a day it will be perfect. Can I do that using asyncio?
Yes, if you look on GitHub there's an example you could use as a reference to get started...
https://github.com/meraki/dashboard-api-python/blob/main/examples/aio_org_wide_clients_v1.py
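The core of that pattern, trimmed down to a sketch (not the full example; API_KEY here is just a placeholder for however you load your key):

import asyncio
import meraki.aio

API_KEY = "your-api-key-here"  # placeholder, load it however suits you

async def main():
    # the async client is an async context manager; calls made through it are awaited
    async with meraki.aio.AsyncDashboardAPI(API_KEY, suppress_logging=True) as aiomeraki:
        organizations = await aiomeraki.organizations.getOrganizations()
        for org in organizations:
            print(org["id"], org["name"])

if __name__ == "__main__":
    asyncio.run(main())

Once that runs as a standalone script, you can build the rest of your logic inside the same main().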
One more question. The project is made with Flask, so I don't have a main function. Where should I run these?
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
I thought Flask was aimed at web applications, but for the daily fetch-and-cache I'd just run it as a scheduled task unrelated to Flask, and save the data where the web application(s) can grab it on demand.
If you saved it in a SQLite database, you could access that from Flask... https://flask.palletsprojects.com/en/2.2.x/tutorial/database/
I run scheduled data gathering via cron (cron runs a script that runs the Python and manages errors, retries, etc.). If you're in a Unix/Linux environment, that'd be an option for you.
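As a rough sketch of that split (example names only - fetch_cache.py, the networks table and cache.db are not from an existing script - and it assumes the API key and org ID come from environment variables):

# fetch_cache.py - run daily from cron; fetches once and caches to SQLite
import asyncio
import os
import sqlite3
import meraki.aio

api_key = os.environ.get("MERAKI_DASHBOARD_API_KEY")
org_id = os.environ.get("MERAKI_ORG_ID")

async def fetch_networks():
    async with meraki.aio.AsyncDashboardAPI(
        api_key,
        suppress_logging=True,
        wait_on_rate_limit=True,
        maximum_retries=100
    ) as aiomeraki:
        return await aiomeraki.organizations.getOrganizationNetworks(
            org_id, perPage=1000, total_pages="all"
        )

def save_networks(networks, db_path="cache.db"):
    # overwrite the cached copy; the Flask app only ever reads this file
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS networks (id TEXT PRIMARY KEY, name TEXT)")
    con.execute("DELETE FROM networks")
    con.executemany(
        "INSERT INTO networks (id, name) VALUES (?, ?)",
        [(n["id"], n["name"]) for n in networks],
    )
    con.commit()
    con.close()

if __name__ == "__main__":
    save_networks(asyncio.run(fetch_networks()))

The Flask route then just queries cache.db with sqlite3 (or via the pattern in the Flask tutorial linked above) and never calls the Meraki API directly.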
Google found this answer:
If you look at the GitHub site for the Meraki library, you will see there are several parameters you can set. Just increase the parameter for the number of retries.
Honestly, I can't find a script where they use that setting.
The library defaults are set here; you can see there are many settings...
https://github.com/meraki/dashboard-api-python/blob/main/meraki/config.py
An example usage adapted from one of my scripts...
async def main():
    async with meraki.aio.AsyncDashboardAPI(
        api_key=API_KEY,
        base_url='https://api.meraki.com/api/v1/',
        print_console=False,
        output_log=False,
        suppress_logging=True,
        wait_on_rate_limit=True,  ##### do wait if rate limited
        maximum_retries=100       ##### set the retry limit
    ) as aiomeraki:
        # the settings above will apply to all API calls made in this context
        # for instance (error handling removed for clarity):
        networks = await aiomeraki.organizations.getOrganizationNetworks(ORG_ID, perPage=1000, total_pages="all")
The example I gave is running as a Python script.
If you are trying to do it within Flask I'd guess you are losing context, hence errors.
I strongly recommend you first make things work as a Python script that gets the data you want.
Once that works, if you still want to do it within Flask, you at least have a known good start point and can be sure any issues are due to how things work in Flask.
The errors are not related to what I implemented from your suggestions. They are related to exceeding the rate limit, I think. They were there before I implemented what you told me.
raise AsyncAPIError(
meraki.exceptions.AsyncAPIError: organizations, getOrganizationConfigTemplate - 429 Too Many Requests, Reached retry limit: None
This is the main error that I have to solve, but nothing seems to work.
If you are getting this error then I would assume the API call is not within the scope where you set the retry limit to 100.
It is not easy to read what you paste, because you are pasting it as text rather than as code, and then adding more text around it.
Some of the API calls I see you using can paginate, but you are not setting perPage. If you always set perPage to the maximum allowed for the call, it can reduce the number of calls needed and thereby reduce rate limiting.
I never said it did.
I made the general comment that "Some of the API calls I see you using can paginate...".
async def get_organizations():
    async with meraki.aio.AsyncDashboardAPI(
        api_key,
        base_url="https://api.meraki.com/api/v1",
        output_log=False,
        print_console=False,
        suppress_logging=True,
        wait_on_rate_limit=True,  ##### do wait if rate limited
        maximum_retries=100       ##### set the retry limit
    ) as aiomeraki:
        organizations = await aiomeraki.organizations.getOrganizations()
        return organizations

#################################################################################### FUNCTIONS ####################################

async def get_template_name(org_id, config_template_id):
    async with meraki.aio.AsyncDashboardAPI(
        api_key,
        base_url="https://api.meraki.com/api/v1",
        output_log=False,
        print_console=False,
        suppress_logging=True,
        wait_on_rate_limit=True,  ##### do wait if rate limited
        maximum_retries=100       ##### set the retry limit
    ) as aiomeraki:
        template = await aiomeraki.organizations.getOrganizationConfigTemplate(
            org_id, config_template_id
        )
        template_name = template['name']
        return template_name

async def append_template(network, org_id):
    template_list = []
    if network['isBoundToConfigTemplate'] == True:
        template_name = await get_template_name(org_id, network['configTemplateId'])
        template = {"id": network['configTemplateId'], "name": "Template - " + template_name}
        if template not in template_list:
            template_list.append(template)
    elif network['isBoundToConfigTemplate'] == False:
        network = {"id": network['id'], "name": "Network - " + network['name']}
        if network not in template_list:
            template_list.append(network)
    await asyncio.sleep(2)
    return template_list

#################################################################################### ROUTES #######################################

@app.route('/favicon.ico')
async def favicon():
    return "Da"

@app.route("/")
async def render_organizations():
    organizations = await get_organizations()
    return render_template("organizations.html", organizations=organizations)

@app.route('/<org_id>')
async def organization_detail(org_id):
    organization_id = org_id
    async with meraki.aio.AsyncDashboardAPI(
        api_key,
        base_url="https://api.meraki.com/api/v1",
        output_log=False,
        print_console=False,
        suppress_logging=True,
        wait_on_rate_limit=True,  ##### do wait if rate limited
        maximum_retries=100       ##### set the retry limit
    ) as aiomeraki:
        try:
            networks = await aiomeraki.organizations.getOrganizationNetworks(
                organization_id, total_pages="all"
            )
        except OSError as e:
            print(e)
        template_list = []
        # create a list of all network template tasks
        network_template_tasks = [append_template(net, org_id) for net in networks]
        # iterate over the completed tasks as they finish
        for coro in asyncio.as_completed(network_template_tasks):
            # await asyncio.sleep(2)
            template_list.extend(await coro)
        return render_template("templates.html", templates=template_list)
Just a thought: I only added the #### comments to point out the parameters to you; they aren't in my scripts.
If you remove them, does it change behaviour? Maybe the parser stops processing the list when it hits the first comment...
The behaviour is the same. What I observed is that I no longer get the "429 too many requests" error, but there still are too many requests, because if I make a random call on that specific organization in Postman I get told there are too many API calls. So I don't know what to do anymore, honestly.
And the loading of the page never finishes, so it is stuck somewhere in the API requests.
If you call postman with the same org ID and API key that is currently in use by a script, the same rate limit covers both.
We have scripts built with the Meraki Python library accessing multiple orgs, making hundreds of concurrent calls via asyncio. The retry mechanism is effective; our systems do this every day without running out of retries, and I can't remember the last time we had one exit with a 429.
Have you tried as a simple Python script? i.e. not with Flask.
Thank you sungod for your multiple answers on this topic. Unfortunately I have the same problem even with a simple Python script. For me it makes sense: the concurrent calls go above 10 requests per second, which is the limit from Meraki. I can't identify any mistake in my script that would cause that.
That is very strange. It might be worth trying to run a script from a different system just to check there is not something broken in the one you are using.
The rate limiting/retry really does work. Pretty much every script in our analytics platform relies on this working flawlessly, day after day, month after month, year after year.
The script below is a simple example; you can use it as-is if you have some APs, or adapt it.
I just ran it: there are over 1700 APs in the target org, and these are all called 'at once' via aio, but the rate limiting kicks in to save things. It takes a bit less than three minutes to complete, so the effective call rate is just under 10 calls/second - that is the wait/retry in action.
If I change retries to a maximum of 1, it will fail in just a few seconds.
If you do not have lots of APs, just replace the call used with another one that targets something you have many of.
import os
import sys
import meraki.aio
import asyncio

# import the org id and api key from the environment
# or you could hard code them, but that's less desirable
ORG_ID = os.environ.get("PARA0")
API_KEY = os.environ.get("PARA1")

async def processAp(aiomeraki: meraki.aio.AsyncDashboardAPI, ap):
    try:
        # get list of statuses for an AP
        statuses = await aiomeraki.wireless.getDeviceWirelessStatus(ap['serial'])
    except meraki.AsyncAPIError as e:
        print(f'Meraki API error: {e}', file=sys.stderr)
        sys.exit(0)
    except Exception as e:
        print(f'some other error: {e}', file=sys.stderr)
        sys.exit(0)
    for bss in statuses['basicServiceSets']:
        if bss['enabled']:
            print(f"{ap['name']},{bss['ssidName']},{bss['bssid']},{bss['band']}")
    return

async def main():
    async with meraki.aio.AsyncDashboardAPI(
        api_key=API_KEY,
        base_url='https://api.meraki.com/api/v1/',
        print_console=False,
        output_log=False,
        suppress_logging=True,
        wait_on_rate_limit=True,
        maximum_retries=100
    ) as aiomeraki:
        # get the wireless devices
        try:
            aps = await aiomeraki.organizations.getOrganizationDevices(ORG_ID, perPage=1000, total_pages="all", productTypes=["wireless"])
        except meraki.AsyncAPIError as e:
            print(f'Meraki API error: {e}', file=sys.stderr)
            sys.exit(0)
        except Exception as e:
            print(f'some other error: {e}', file=sys.stderr)
            sys.exit(0)
        # process devices concurrently
        apTasks = [processAp(aiomeraki, ap) for ap in aps]
        for task in asyncio.as_completed(apTasks):
            await task

if __name__ == '__main__':
    asyncio.run(main())
Hello everybody. I found the solution to my problem. I used an asyncio Semaphore to limit the number of concurrent requests.
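In case it helps anyone else, the shape of that approach is roughly this (a sketch only, reusing the call names from the code earlier in the thread; the semaphore size of 5 and the environment variable names are arbitrary choices, not recommendations):

import asyncio
import os
import meraki.aio

MAX_CONCURRENT = 5  # cap how many template lookups run at once

async def fetch_template_name(aiomeraki, sem, org_id, template_id):
    # only MAX_CONCURRENT coroutines can hold the semaphore at any moment,
    # so bursts of calls stay well below the per-org rate limit
    async with sem:
        template = await aiomeraki.organizations.getOrganizationConfigTemplate(
            org_id, template_id
        )
    return template["name"]

async def main(api_key, org_id):
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    async with meraki.aio.AsyncDashboardAPI(
        api_key,
        suppress_logging=True,
        wait_on_rate_limit=True,
        maximum_retries=100
    ) as aiomeraki:
        networks = await aiomeraki.organizations.getOrganizationNetworks(
            org_id, perPage=1000, total_pages="all"
        )
        tasks = [
            fetch_template_name(aiomeraki, sem, org_id, net["configTemplateId"])
            for net in networks
            if net["isBoundToConfigTemplate"]
        ]
        names = await asyncio.gather(*tasks)
        print(sorted(set(names)))

if __name__ == "__main__":
    asyncio.run(main(os.environ.get("MERAKI_DASHBOARD_API_KEY"), os.environ.get("MERAKI_ORG_ID")))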
And immediately after I start the localhost, this traceback comes up in the console:
[2023-03-21 15:46:09,117] ERROR in app: Exception on /favicon.ico [GET]
Traceback (most recent call last):
File "C:\Users\rares.pauna\Desktop\SSID-app\.venv\Lib\site-packages\flask\app.py", line 2528, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rares.pauna\Desktop\SSID-app\.venv\Lib\site-packages\flask\app.py", line 1825, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rares.pauna\Desktop\SSID-app\.venv\Lib\site-packages\flask\app.py", line 1823, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rares.pauna\Desktop\SSID-app\.venv\Lib\site-packages\flask\app.py", line 1799, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rares.pauna\Desktop\SSID-app\.venv\Lib\site-packages\asgiref\sync.py", line 240, in __call__
return call_result.result()
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rares.pauna\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\rares.pauna\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "C:\Users\rares.pauna\Desktop\SSID-app\.venv\Lib\site-packages\asgiref\sync.py", line 306, in main_wrap
result = await self.awaitable(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rares.pauna\Desktop\SSID-app\app.py", line 92, in organization_detail
networks = await aiomeraki.organizations.getOrganizationNetworks(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
At a minimum, change to asyncio and incrementally display the information as you load it - but I would use the method described by @sungod.
Search for "asyncio" at this page to get some initial info: