Sorry, I mixed that up with a different uplink call, the one that shows the 100% behaviour on zero traffic but also includes the actual traffic numbers.
The thing is, I also use the org-wide loss-and-latency call, but I do not see 100% loss on unused uplinks. I just checked a few different orgs and the results don't show that behaviour.
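For reference, this is roughly how I pull that data. It's a minimal sketch, assuming the official `meraki` Python SDK with the API key in the MERAKI_DASHBOARD_API_KEY environment variable; the org ID is a placeholder:

```python
import meraki

ORG_ID = "123456"  # placeholder, use your own org ID

# DashboardAPI() picks up MERAKI_DASHBOARD_API_KEY from the environment
dashboard = meraki.DashboardAPI(suppress_logging=True)

# One record per device/uplink/destination IP, each with a 'timeSeries'
# list of {'ts', 'lossPercent', 'latencyMs'} samples.
records = dashboard.organizations.getOrganizationDevicesUplinksLossAndLatency(
    ORG_ID, timespan=300
)
```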
But this org-wide call can return 'odd' results, for instance...
{'networkId': None,
 'serial': None,
 'uplink': 'cellular',
 'ip': '8.8.8.8',
 'timeSeries': [{'ts': '2023-12-01T00:45:20Z', 'lossPercent': 0.0, 'latencyMs': 65.5},
                {'ts': '2023-12-01T00:46:21Z', 'lossPercent': 0.0, 'latencyMs': 70.2},
                {'ts': '2023-12-01T00:47:21Z', 'lossPercent': 0.0, 'latencyMs': 62.2},
                {'ts': '2023-12-01T00:48:20Z', 'lossPercent': 0.0, 'latencyMs': 69.9},
                {'ts': '2023-12-01T00:49:20Z', 'lossPercent': 0.0, 'latencyMs': 64.8}]}
...I have not anonymised that, it's a direct cut/paste. The call has returned actual loss/latency figures for device serial None on network ID None, which makes no sense to me. Maybe it's a side effect of devices being added/removed/moved versus the polling timing, but I just code around this sort of thing...
In my script I don't take the data as-is: I exclude any sample or record that has a None where a real value is logically required, then use the clean samples to (in my case) calculate min/max/avg over the whole timespan of interest. A rough sketch of that clean-up step is below.
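Something along these lines, assuming `records` is the list returned by the call above and field names as in the paste ('networkId', 'serial', 'uplink', 'timeSeries', 'lossPercent', 'latencyMs'):

```python
def summarise_loss(records):
    """Return min/max/avg lossPercent per (serial, uplink), skipping
    records or samples where a logically required field is None."""
    summaries = {}
    for rec in records:
        # Drop records like the one above where serial/networkId are None.
        if rec.get("serial") is None or rec.get("networkId") is None:
            continue
        # Keep only samples with real numbers in both metrics.
        samples = [
            s["lossPercent"]
            for s in rec.get("timeSeries", [])
            if s.get("lossPercent") is not None and s.get("latencyMs") is not None
        ]
        if not samples:
            continue
        key = (rec["serial"], rec["uplink"])
        summaries[key] = {
            "min": min(samples),
            "max": max(samples),
            "avg": sum(samples) / len(samples),
        }
    return summaries
```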
Is it possible the 100% loss values you are seeing relate to uplinks that were once connected? Maybe they are still being included in results going forward as if they were in use?