How can I specify a time range using GET/networks/{networkId}/clients or get more than 1000?

Solved
joincidence
Getting noticed

Max clients per page is 1000, and I have more than 1000 entries I need to retrieve. I can't seem to figure out the parameters to use for specifying a time range, so that I can make multiple calls with different time ranges and get all of the data. Any ideas on how to do that, or another way to retrieve all of the data?

1 Accepted Solution
CBurkhead
Building a reputation

What @joincidence is talking about is the perPage value you need to specify to get the client list with this endpoint. It indicates how many results will be on a "page". For some reason, Meraki assumes that data from this endpoint will be displayed on a webpage and you will move through it using "Prev" and "Next" links. Calling the endpoint gives you the first page of data; in this case, the first 1000 clients. The question is how do you get the rest of the data?

 

I had this same problem and this is what I found. Now, this is using Python, but the data should still be in the same place regardless of the language. The object returned by requests.request or requests.get has a field called "headers". Inside of this is another field called "Link" (headers['Link']) that contains all of the information for the "Prev" and "Next" links. In this data are the token values you need to move back and forth through the data. By getting the token (this can be a device ID, timestamp, or some other piece of data), you can specify it with the "startingAfter" parameter in another API call to the client endpoint and get the next "page" of data. You just have to keep getting the token for "Next" and making another call until you no longer have a "Next" link. Then you have moved through all of the data. If you concatenate all of the results together as you get them, you will have all of the results in a single variable.
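To make that concrete, the Link header is just a comma-separated list of URLs tagged with rel=prev / rel=next, and the token sits in the startingAfter query parameter of the "next" URL. A minimal sketch of pulling it out (the header value below is made up for illustration, not real API output):

```python
# Illustrative Link header value -- not real API output.
link_header = ('<https://api.meraki.com/api/v0/networks/N_123/clients?'
               'perPage=1000&startingAfter=abc123>; rel=next, '
               '<https://api.meraki.com/api/v0/networks/N_123/clients?'
               'perPage=1000&endingBefore=abc000>; rel=prev')

# Split the header into its individual links and find the 'next' one.
links = link_header.split(',')
next_link = next((l for l in links if 'rel=next' in l), None)

if next_link is not None:
    # Everything between 'startingAfter=' and the next '&' or '>' is the token.
    token = next_link.split('startingAfter=')[1]
    token = token.split('&')[0].split('>')[0]
    print(token)  # abc123
```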

 

Here is the function I wrote to do this:

 

# This function takes the API URL and query string. It returns all of the JSON data from the API request that has
# been broken up into pages of data.
def GetPages(url, query):
   data = []
   done = False
   while not done:
      try:
         ret = requests.request('GET', url, headers=g_header, params=query)
      except requests.exceptions.ConnectionError:
         print('A connection error has occurred while getting the page information.\n')
         return None
      # Append the new page of data to the existing results.
      data += ErrorCheck(ret)
      # Get the page URLs from the HTTP Link header of the API call and split them into individual links.
      pages = ret.headers['Link'].split(',')
      # See if there is a link for the 'next' page of data.
      page = next((i for i in pages if 'rel=next' in i), None)
      if page is not None:
         # Isolate the startingAfter token and add it to the query for the next call.
         token = page.split('startingAfter=')[1]
         token = token.split('&')[0]
         query['startingAfter'] = token
      else:
         done = True
   return data

 

You would call this with the full endpoint path and the query info that will go into the GET (e.g. {'timespan':'30','perPage':'1000'}). It makes all of the API calls and collects the results into a single variable, which is returned. Now, the call to ErrorCheck in there is another of my functions that just converts the raw data to JSON and checks for errors. The variable g_header in the requests call is a global variable that contains my API key information.
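ErrorCheck itself isn't shown in the post; as a rough idea, a helper in that spirit might look like this (the name and exact behavior here are guesses, not the poster's actual code):

```python
# A sketch of an ErrorCheck-style helper (hypothetical; the poster's actual
# function is not shown). It converts the response body to JSON and treats
# anything other than HTTP 200 as an error.
def error_check(ret):
    if ret.status_code != 200:
        print('API call failed with status %d' % ret.status_code)
        return []
    # Decode the raw response body from JSON into a list of client records.
    return ret.json()
```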

 

Hope this helps. If there are any questions about this explanation or the code, please let me know.


7 Replies
NolanHerring
Kind of a big deal

Does adding this parameter at the end of the URL work?

?timespan=86400

I believe that value is in seconds.
Nolan Herring | nolanwifi.com
jdsilva
Kind of a big deal

If you're using curl then you add the t0 and timespan parameters into the URL:

 

https://api.meraki.com/api/v0/networks/{networkId}/clients?t0={start time in epoch time}&timespan={seconds}
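In Python the same query string can be built by passing the params argument to Requests. Just to show what the resulting URL looks like (the epoch timestamp below is an arbitrary example):

```python
from urllib.parse import urlencode

# Example values -- substitute your own network ID and epoch start time.
base = 'https://api.meraki.com/api/v0/networks/{networkId}/clients'
params = {'t0': '1546300800', 'timespan': '86400'}

# requests.get(base, params=params) would produce this same query string.
full_url = base + '?' + urlencode(params)
print(full_url)
```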

 

If you're using Python then it depends on if you're using Requests yourself, or if you're using the meraki module or SDK.

 

If you're using anything else I can't help you since I don't know anything else 😞

PhilipDAth
Kind of a big deal
Kind of a big deal

How will specifying a time range allow you to retrieve more clients?

ludwigbery
Getting noticed

@PhilipDAth 

 

Maybe @joincidence means the client list and the coverage of activity. Please correct me if I'm wrong, but Meraki only provides a 30-day retention period unless you have a syslog server?


joincidence
Getting noticed

Thanks, this is what I needed, and I like it much better than trying to specify multiple time ranges. The 'Link' header info had two URLs containing both pages of the users, which I can then put into my function and extract the data from.
MaddogJulie
Here to help

If you're using Requests, you set the 'startingAfter' parameter.

 

import time

params = {
   'startingAfter': time.time() - 2592000  # 2592000 seconds = 30 days (1 month)
}

 

This will only get entries up to the maximum allowed. If you store the data in a variable, you can set it in a loop to a different timestamp and get more data:

 

params['startingAfter'] = <variable>[len(<variable>) - 1]['occurredAt']

 

and continue until your request's status_code is not 200.
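Putting that together, the loop might be sketched like this (fetch_page is a hypothetical stand-in for the actual Requests call, returning the status code and the decoded records; this version also stops when an empty page comes back):

```python
# A sketch of the loop described above. fetch_page is a hypothetical
# stand-in for the real API call; it returns (status_code, records), and
# each record is assumed to carry an 'occurredAt' timestamp.
def collect_all(fetch_page, params):
    all_records = []
    while True:
        status_code, records = fetch_page(params)
        # Stop when the request fails or no more records come back.
        if status_code != 200 or not records:
            break
        all_records += records
        # Start the next request after the last record we received.
        params['startingAfter'] = records[len(records) - 1]['occurredAt']
    return all_records
```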
