Afaik the published limits are fixed; if you try out-of-bounds values the result is typically an error, and if there isn't an error I wouldn't trust the accuracy of the data anyway.
I've found when using short intervals that results can be inconsistent...
The call, the internal API service, and the underlying data source(s) aren't in sync. I also suspect some info is held in pre-determined internal time buckets, rather than a time-series of samples that can be dipped into between arbitrary start and end times. So when you request a short period that doesn't exactly line up with a bucket's start and end, you get two returns: one from the bucket containing the start time and one from the bucket containing the end time.
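If that bucketing theory holds, the two-sample behaviour falls out naturally: any window that straddles a bucket boundary touches two buckets. A minimal sketch of the idea (the 300 s bucket width is a made-up illustration value, not anything documented):

```python
# Hypothetical bucket width in seconds - purely illustrative, not a documented value.
BUCKET_SECONDS = 300

def buckets_touched(start_ts: int, end_ts: int) -> list[int]:
    """Return the start timestamps of every fixed bucket the
    [start_ts, end_ts) window overlaps."""
    first = (start_ts // BUCKET_SECONDS) * BUCKET_SECONDS
    last = ((end_ts - 1) // BUCKET_SECONDS) * BUCKET_SECONDS
    return list(range(first, last + BUCKET_SECONDS, BUCKET_SECONDS))

# A 2-minute window entirely inside one bucket returns one datapoint:
print(buckets_touched(1000, 1120))  # → [900]

# The same-length window straddling a boundary touches two buckets,
# which would explain getting two returns for a short period:
print(buckets_touched(1100, 1220))  # → [900, 1200]
```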
I'd say you are running up against edge effects caused by a combination of these things.
If you want to stick to the 'latest sample' approach, my suggestion in this situation is to treat the last sample (irrespective of the number returned) as the 'truth', and go with that.
The chance of the numbers being literally correct is low anyway: stuff on the network takes time to reach Dashboard, device time sync is imperfect, info gets delayed or lost due to uplink issues, etc.