I generally avoid trying to replicate what is seen in the dashboard.
Beyond issues like the one @RaphaelL mentions, there are practical problems with trying to match anything quantitative...
- the dashboard's method of calculation isn't published, so you have to infer or guess when trying to replicate it
- the dashboard may be using data that is not available via the API
- the dashboard may be using data with a different time resolution/bucketing
- is the dashboard always doing the calculation on demand, or are some results cached for efficiency?
- are the start/stop times the dashboard uses the same as the ones in your calculation? When you operate across many time zones this can be unclear
- if you tell the customer that your report x is the same as dashboard report z, then any difference at all can destroy confidence in both; the natural perception will be that at least one of them is wrong
- etc.
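The time-zone point above is easy to demonstrate. A minimal sketch (the zones and date are illustrative, not from any particular deployment): the "same" calendar day starts at a different UTC instant in each zone, so a daily rollup computed against UTC boundaries will not match one the dashboard computes against a network's local midnight.

```python
# Illustrative only: show where "2024-03-01" begins, in UTC terms,
# depending on which time zone defines the day boundary.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

day = datetime(2024, 3, 1)  # naive "start of day"
for tz in ("UTC", "America/New_York", "Asia/Tokyo"):
    start = day.replace(tzinfo=ZoneInfo(tz))
    # The same local midnight lands hours apart (or on a different date) in UTC.
    print(tz, "-> day starts at", start.astimezone(timezone.utc).isoformat())
```

In Tokyo the day even starts on the previous UTC date, which is exactly how a daily total can silently shift a chunk of traffic into a neighbouring bucket relative to the dashboard's view.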
In general, for performance analysis/comparison at scale, it is simpler to ignore the dashboard; besides, it would take too long to look at dashboard details manually for hundreds of networks.
As long as our calculations based on API data are reasonably accurate (which does require careful testing!), we use them for reporting and analysis. That is enough to identify any issues; the dashboard can then be used to drill down, determine the cause, and resolve it.
For Wi-Fi onboarding issues, we calculate from the API data across hundreds of sites for each AP and SSID. The resulting Excel file has a few thousand rows; with filters/sorts it's then easy to identify any problems, and some reports have code to traffic-light cells with values out of range.
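The traffic-light step can be sketched in a few lines. This is only an illustration of the idea, not the actual report: the column names (`ap`, `ssid`, `success_rate`) and the 90%/95% thresholds are assumptions I've made up for the example.

```python
# Hedged sketch: band per-AP/SSID onboarding metrics into traffic-light
# colours. Column names and thresholds are illustrative assumptions.
import pandas as pd

# Example rows, as they might come out of the per-AP/SSID aggregation step.
rows = [
    {"site": "site-001", "ap": "AP-01", "ssid": "Corp",  "success_rate": 99.2},
    {"site": "site-001", "ap": "AP-02", "ssid": "Corp",  "success_rate": 91.4},
    {"site": "site-002", "ap": "AP-07", "ssid": "Guest", "success_rate": 88.6},
]
df = pd.DataFrame(rows)

def traffic_light(rate, warn=95.0, bad=90.0):
    """Band a success rate: green is fine, amber needs a look, red is a problem."""
    if rate < bad:
        return "red"
    if rate < warn:
        return "amber"
    return "green"

df["status"] = df["success_rate"].map(traffic_light)
print(df)

# Producing the coloured Excel report would then be one more step, e.g. a
# pandas Styler (df.style...) written out with to_excel(), or openpyxl fills.
```

With a few thousand rows, sorting on the status column (or filtering to red/amber) surfaces the problem AP/SSID combinations immediately, which is the whole point of the report.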