The first call returns the first batch of results and will also give you the startingAfter and endingBefore values for your next call.
Thanks for the reply. Are those values going to be in the header, and is it possible to access the header from the result in the Python API?
Yes. Well, in a way.
In the headers there will be a Link field, something like this (illustrative URLs):

Link: <https://api.meraki.com/api/v1/networks/N_1234/clients?startingAfter=a000000>; rel=first, <https://api.meraki.com/api/v1/networks/N_1234/clients?startingAfter=k70307>; rel=next

In this example the "first" link brings you back to the beginning with startingAfter=a000000, and the "next" link brings you to the next set of - in this case 3 - results with startingAfter=k70307.
Here is the Python function for doing this.
import requests

# This function takes the API URL and query string. It returns all of the JSON data from the API request that has
# been broken up into pages of data. g_header is the dict holding the authentication headers.
def GetPages(url, query):
    data = []
    done = False
    while not done:
        try:
            ret = requests.request('GET', url, headers=g_header, params=query)
        except requests.exceptions.ConnectionError:
            print('A connection error has occurred while getting the page information.\n')
            return None
        # Append the new data to the existing. ErrorCheck returns None on error.
        page_data = ErrorCheck(ret)
        if page_data is None:
            return None
        data += page_data
        # Get the page URLs from the HTTP header of the API call and split it into the URLs.
        pages = ret.headers.get('Link', '').split(',')
        # See if there is a link for the 'next' page of data.
        page = next((i for i in pages if 'rel=next' in i), None)
        if page is not None:
            # Isolate the token and then add it to the new query, trimming any
            # following parameter or the closing '>' of the link syntax.
            token = page.split('startingAfter=')[1]
            token = token.split('&')[0].split('>')[0]
            query['startingAfter'] = token
        else:
            done = True
    return data
The call to this would be something like:
clientlist = GetPages(url,querystring)
Where url is the full URL to the endpoint and querystring is the dictionary of parameters you want, like {'timespan':'30','perPage':'1000'}
A couple of notes: the ErrorCheck function that gets called is my own internal function; it converts the returned data to JSON and verifies there are no errors, returning either the data from the endpoint or a value of None.
Also, I have had to modify this function a couple of times depending on the data that the endpoint is returning. It was originally made for getting client data, which was just lists of data. You might have to alter how the data is appended if you are getting a dictionary with lists in it or some other format.
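As a minimal sketch of what that alteration might look like, the append step could branch on the payload's shape. The 'items' key here is hypothetical; check what your endpoint actually returns:

```python
def append_page(data, payload):
    # Lists (e.g. the clients endpoint) can be concatenated directly;
    # dict payloads need their inner list pulled out first. The 'items'
    # key is a hypothetical example, not a real Meraki field name.
    if isinstance(payload, list):
        data += payload
    elif isinstance(payload, dict):
        data += payload.get('items', [])
    return data

print(append_page([1], [2, 3]))          # [1, 2, 3]
print(append_page([], {'items': [4]}))   # [4]
```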
It does work as written for networks/{networkID}/clients.
Thanks, that's great. Let me try it and see if it works.