I have a question regarding a Meraki API call. The call in question is lossAndLatencyHistory, and I am trying to figure out how the loss number is determined. I have included sample output from the lossAndLatencyHistory API call below.
What I am looking for is this: where it says the loss is 3.3, what is that determined from? Is it based on pinging the IP listed, or some other test? How many tests are run, and over what period of time? Is it just a simple calculation of X failed tests divided by Y total tests? The fact that the results are numbers like 3.3 tells me it's not something like 10 tests in 60 seconds with 1 failure, since that would produce nice round numbers rather than what I am seeing below. I have searched for documentation on how this number is calculated and from what, but haven't found anything with specific details.
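To illustrate the reasoning above: a value like 3.3 only falls out of a failed-over-total calculation once the probe count per window is larger than 10. The probe counts below are hypothetical, just to show the arithmetic, and are not documented Meraki behavior:

```python
# Hypothetical illustration: loss percent as failed probes / total probes.
# The probe counts here are assumptions, not documented Meraki behavior.

def loss_percent(failed: int, total: int) -> float:
    """Failed probes over total probes, expressed as a percent (1 decimal)."""
    return round(100.0 * failed / total, 1)

# 10 probes per window can only yield multiples of 10.0:
print(loss_percent(1, 10))   # 10.0
# A larger probe count produces non-round values like the observed 3.3:
print(loss_percent(2, 60))   # 3.3
```

So if the reported 3.3 really is a simple pass/fail ratio, the window presumably contains something on the order of 30, 60, or more probes.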
I'd also like to understand the latency number: is it just a basic ping response time from the given IP? Is it an average over a number of tests in the reporting period, or a single test? And is it round-trip time or one-way latency?
"ts": "2020-01-21T18:23:05Z",
"lossPercent": 3.3,
"latencyMs": 295.3