- Windows VMs running on Cisco UCS B200 M4 blades
- Storage is connected via multipath 8 Gb/s Fibre Channel to Pure1 storage LUNs
- Networking is connected through the Cisco UCS switches to the MS350 switch stack (8 members) via dual 10G fiber links; vSphere is set up to use multiple uplinks per virtual switch (see the uplink check below)
- The MS350 stack handles layer 3, so the only reason traffic would hit our active/standby MX100s is to reach one of our other sites or the internet. No traffic shaping is in place that would intentionally limit per-client bandwidth.
- Our Pure1 array is barely being hit. IOPS are in the 10k-20k range on average, latency under 2 ms, bandwidth usually under 100 MB/s. It's capable of 3 GB/s.
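For what it's worth, here's the kind of check I've been using on the hosts to confirm the uplinks actually negotiated at 10G; a quick sketch, assuming SSH access to each ESXi host and standard vSwitches:

    # Each vmnic backing the vSwitch should report 10000 Mbps, Full
    esxcli network nic list

    # Confirm which uplinks are active on the vSwitch, and its MTU
    esxcli network vswitch standard list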
I've been trying to figure out why file transfers between the servers top out around 150 MB/s. I have tried tweaking advanced network settings like TCP/IP offload, made sure there are no mismatched MTU sizes anywhere, but I would think a 10G network backed by flash storage would be quicker than this. Just trying to figure out where the bottleneck is and where I should look next. Thanks!
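One test still on my list is a pure memory-to-memory run that takes storage out of the picture entirely, e.g. with iperf3 (a sketch, assuming iperf3.exe can be dropped on both VMs; the IP below is a placeholder for the file server):

    # On the destination server: listen for test traffic
    .\iperf3.exe -s

    # On the source server: a 30-second single-stream test, then the
    # same with 4 parallel streams to see if one flow is the limit
    .\iperf3.exe -c 10.0.0.10 -t 30
    .\iperf3.exe -c 10.0.0.10 -t 30 -P 4

If a single stream tops out near 150 MB/s (roughly 1.2 Gb/s) but parallel streams fill the pipe, that would point at a per-flow limit (TCP window, RSS, single-threaded copy) rather than the network itself.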
Is file copy on the server (from one drive to another) fast?
Are the file servers you are copying to/from on the same subnet? If so, then the L3 hop is not relevant.
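If you want a cleaner number than the Explorer copy dialog, a synthetic test such as Microsoft's diskspd would show the raw read throughput the guest can pull from the datastore; a rough sketch, assuming you can spare a 10 GB test file (the path is a placeholder):

    # 30s sequential read: 1 MB blocks, 4 threads, 8 outstanding IOs
    # per thread, 0% writes, with software/hardware caching disabled
    .\diskspd.exe -c10G -d30 -b1M -o8 -t4 -w0 -Sh D:\iotest.dat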
Just tried a file transfer, same result; it stays steady between 150-155 MB/s. The server I was pulling from is our main file server, which has all of the tweaks applied to its virtual Ethernet connection in the advanced settings, and I have also tried pulling files via a brand new, untouched vmxnet3 adapter with no difference in performance.
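For reference, these are the things I've been checking on the vmxnet3 adapters from inside the guest (the adapter name is a placeholder for whatever Get-NetAdapter reports):

    # Confirm RSS is enabled so receive traffic isn't pinned to one core
    Get-NetAdapterRss -Name "Ethernet0"

    # Dump all advanced properties (offloads, buffers, etc.) in one shot
    Get-NetAdapterAdvancedProperty -Name "Ethernet0"

    # While a copy is running, see whether SMB multichannel is in play
    Get-SmbMultichannelConnection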
So it looks like a NIC (virtual or physical) issue, as no other hardware in the path differs from the transfers inside the VM.
I'll check our transfer speeds tomorrow, as we also have Windows VMs with dual 10GbE NICs and 16Gb FC-connected Pure1 flash arrays.
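When I do, I'll probably compare a single-stream copy against a multi-threaded one, since a single SMB stream is often the ceiling; roughly something like this (paths are placeholders, and note /MT parallelizes across files, so the test folder needs several large files):

    # Default single-threaded copy, then the same data with 16 threads
    robocopy \\fileserver\share\testdata D:\test1
    robocopy \\fileserver\share\testdata D:\test2 /MT:16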