Hello,
After a full deployment and a site survey confirming no co-channel interference (CCI), I recently ran some speed tests for a client on their new deployment of MR53 APs.
The APs were deployed on 40 MHz channels, and the ones under test had no other clients associated. On an iPhone 8 (2x2 802.11ac), application throughput was 150/300 Mbps (down/up), give or take a few Mbps at each location. I'm assuming this could be Airtime Fairness in action, but I want to confirm before reporting it to the client.
It's my understanding that the AP can shape downstream traffic but not upstream, which would make sense to me. I'm just a little skeptical as to why the AP would be reserving airtime resources with no other clients connected. I haven't had a chance to test this any further; time on site is limited.
I hate including speed test results in reports because they tend to overshadow the capacity and reliability aspects of a good WiFi deployment, but I'm not able to dodge it in this case. I'm just hoping I can accurately explain to the customer what's going on.
Hard to say exactly what is going on without further testing, but generally speaking, WiFi is best effort. There is almost inevitably some frame loss and retransmission that a standard wired connection doesn't experience, and that weighs heavily on performance.
I understand the differences between WiFi and wired speeds. Is there anything specific you would recommend testing to determine the cause of the significant difference between downstream and upstream throughput?
When possible, I try to set up an iPerf test to a known (or owned) endpoint to avoid the unpredictability of online speed tests.
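For reference, a typical bidirectional check against a known endpoint might look like the following. This is a sketch, not a prescription: the server address is a placeholder, the durations are arbitrary, and it assumes iperf3 is installed on both ends.

```shell
# On the owned endpoint (placeholder address 192.0.2.10), start a listener:
#   iperf3 -s

# From the wireless client, measure upstream (client -> server):
iperf3 -c 192.0.2.10 -t 30

# Then downstream (server -> client) using the reverse flag:
iperf3 -c 192.0.2.10 -t 30 -R

# TCP hides loss behind retransmits; a UDP run at a fixed offered rate
# exposes loss and jitter directly:
iperf3 -c 192.0.2.10 -u -b 200M -t 30
```

Comparing the `-R` run against the forward run from the same client position is a quick way to see whether the down/up asymmetry follows the traffic direction or the test tool.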
We own the speedtest server that was used during testing, and I also confirmed there was no congestion along the path from customer to server. Speed tests from a 1 Gbps wired client connection were 940/940 Mbps.
So, on the same network, a wired client gets 940/940 but wireless 150/300?
Correct
Did you have test results from a 160 MHz channel, or are you only using 40 MHz? Sorry for all the questions; just trying to brainstorm why there would be such a differential. I've been skimming these resources for possible solutions.
https://meraki.cisco.com/lib/pdf/meraki_datasheet_MR53.pdf
We used 40 MHz exclusively, no 80s. I'm not sure you can even configure 160 MHz in the dashboard right now (and I understand why you'd want to prevent customers from doing that).
Also, I strongly disagree with that article, and Meraki should consider pulling and rewriting it. I have a feeling the support department has a hard time explaining to customers why they can't achieve application throughput equal to even half the transmission rates listed in that article.
It's titled "Wireless Throughput Calculations and Limitations," yet it does not actually speak to throughput; it specifically discusses transmission rates, which are not comparable to throughput at all. "Theoretical throughput" isn't an accurate term either, as it does not account for known MAC-layer overhead, interframe spacing, etc.
The "throughput" listings are not throughput, they are raw transmission rates based on MCS and # of spatial streams.
My apologies if the wording above sounds a little harsh; it's not my intention to be rude. I'd just like to see more vendor communications aimed at non-experts that explain the difference between WiFi transmission rates and what a user would consider actual throughput.