/api/v1/devices/uplinksLossAndLatency reports 100% loss for disabled uplinks





Is it expected for the uplinksLossAndLatency endpoint to report 100% loss for a disabled uplink? This seems odd, because the response does not include the uplink's operational status, so there is no easy way to filter out disabled uplinks when deciding which 100%-loss results are actually a problem.
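One way to work around the missing operational status is to cross-reference a second, status-bearing call and keep only loss/latency entries for uplinks reported as active. The sketch below is a minimal illustration, not a definitive implementation: it assumes you have already fetched uplink statuses from an org-wide uplink-status endpoint, and the `uplinks`/`interface`/`status` field names in that second response are assumptions about its shape.

```python
def filter_active_losses(loss_results, uplink_statuses):
    """Keep only loss/latency entries whose uplink is reported active.

    loss_results: entries from uplinksLossAndLatency (have 'serial', 'uplink')
    uplink_statuses: per-device entries from a status endpoint, assumed to
    carry a 'uplinks' list with 'interface' and 'status' fields.
    """
    # Build a set of (serial, interface) pairs that are actually active.
    active = {
        (dev["serial"], up["interface"])
        for dev in uplink_statuses
        for up in dev.get("uplinks", [])
        if up.get("status") == "active"
    }
    # Drop loss/latency entries for uplinks not in that set (e.g. disabled).
    return [r for r in loss_results if (r["serial"], r["uplink"]) in active]
```

The join key is (serial, uplink interface name), which both responses appear to share; any 100%-loss rows for disabled uplinks simply fall out of the result.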



4 Replies

The lossPercent field should be loss/total * 100, but when the total is zero it erroneously returns 100%.


Zero divided by zero is not a number, so the return value should really either reflect that with a null (NaN is not a valid JSON value) or, in this context, return the more sensible and friendly value of 0%.


In my scripts I check for a zero total and set the loss to 0%.
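That zero-total check can be a one-liner. This is a sketch of the guard described above, assuming you have the raw lost and total probe counts available (the poster notes below that uplinksLossAndLatency itself does not expose a total):

```python
def loss_percent(lost, total):
    """Loss as a percentage, treating a zero-sample window as 0% loss
    rather than the misleading 100% the API returns for 0/0."""
    if total == 0:
        return 0.0  # no probes sent: report "no observed loss", not 100%
    return lost / total * 100.0
```

Returning 0.0 (rather than None) keeps downstream min/max/avg arithmetic simple, at the cost of not distinguishing "no data" from "no loss".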

I don't see a "total" value in this response, so I presume you are making a separate API call to fetch it, correct?




While this makes sense, it's disappointing if that is the only workaround, as it adds complexity I'd like to avoid.

Sorry, I mixed it up with a different uplink call that has the 100% behaviour on zero traffic but includes the actual traffic numbers.


The thing is, I also use the org-wide loss/latency call, and I do not see 100% loss on unused uplinks. I just checked a few different orgs and the results don't show this behaviour on unused uplinks.


But this org-wide call can return 'odd' results, for instance...




{'networkId': None,
 'serial': None,
 'uplink': 'cellular',
 'ip': '',
 'timeSeries': [{'ts': '2023-12-01T00:45:20Z', 'lossPercent': 0.0, 'latencyMs': 65.5},
                {'ts': '2023-12-01T00:46:21Z', 'lossPercent': 0.0, 'latencyMs': 70.2},
                {'ts': '2023-12-01T00:47:21Z', 'lossPercent': 0.0, 'latencyMs': 62.2},
                {'ts': '2023-12-01T00:48:20Z', 'lossPercent': 0.0, 'latencyMs': 69.9},
                {'ts': '2023-12-01T00:49:20Z', 'lossPercent': 0.0, 'latencyMs': 64.8}]}




...I have not anonymised that; it's a direct cut-and-paste. The call has returned actual loss/latency for device serial None on networkId None. This makes no sense to me. Maybe it's a side effect of timing around device add/remove/move operations, but I just code around this sort of thing...


In my script I don't take the data as-is: I exclude samples that have a None value where a real value is logically required, then use the clean samples to calculate, in my case, min/max/avg over the whole timespan of interest.
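The exclude-then-aggregate approach described above can be sketched in a few lines. This is an illustration of the idea, not the poster's actual script; it operates on the `timeSeries` list from the pasted response, where each sample has `lossPercent` and `latencyMs` fields:

```python
def summarize(time_series, field):
    """Min/max/avg of `field` over samples where it is a real value.

    Samples whose `field` is None (or missing) are excluded first,
    so a single bad record doesn't poison the aggregate.
    Returns None when no clean samples remain.
    """
    values = [s[field] for s in time_series if s.get(field) is not None]
    if not values:
        return None
    return min(values), max(values), sum(values) / len(values)
```

Applied per uplink per device, this gives a clean min/max/avg over whatever timespan the call returned, regardless of occasional None samples.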


Is it possible the 100% loss values you are seeing relate to uplinks that were once connected? Maybe they are still being included in results as if they were in use.


I have also noticed similar oddities in that API call, and a teammate even submitted a question about it before. I'm asking on behalf of a customer using our integration, so I'll have them dig into it. Thanks for the information! I'll report back if I find anything.
