I've got my MX65 connected via 1Gbps Ethernet to a 4-port SFP+ 10Gbps switch, which in turn is connected to a few other 10Gbps switches via OC4 around my property. Should I change the MTU settings on the switches? I've had the experience before with my Synology NAS, which connects at 10Gbps to my Sonnet 10Gbps adapter, that changing the MTU to a more suitable size significantly improves my transfer rates (by 100MB/sec or so) over 10Gbps networking. My concern is that a larger MTU will somehow be too big for the MX65 and some of the other smaller PoE+ switches that are daisy-chained off the backbone. I honestly don't know whether this concern is unfounded. I guess the clients would also need MTU increases? WLAN won't get anywhere close, obviously, but does this create a conflict? Thoughts?
>Should I change the MTU settings on the switches?
This could start a religious discussion. The benefit of a larger MTU depends heavily on latency. If the latency is low (such as on a pure LAN), then going from 1500-byte to 9000-byte frames makes little difference.
If the latency is high then a larger MTU can make quite a difference.
The only special case is if you are using iSCSI. If you are (say) using iSCSI with 4K IO blocks, then you might consider using a larger MTU so that each disk block can be transmitted in a single packet - but make sure that everything on the path can support that MTU.
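To put rough numbers on the 4K-block case: with common-case IPv4 + TCP headers (40 bytes per packet, more with options - these header sizes are assumptions, not measured values), a 4096-byte block needs three 1500-byte frames but fits in a single jumbo frame. A quick back-of-the-envelope sketch:

```python
# Rough estimate: packets needed to carry one disk block over TCP/IPv4.
IP_TCP_HEADERS = 20 + 20  # common-case header bytes per packet (no options)

def packets_for_block(block_bytes, mtu):
    """Packets needed to carry block_bytes of data at a given MTU."""
    payload_per_packet = mtu - IP_TCP_HEADERS
    return -(-block_bytes // payload_per_packet)  # ceiling division

for mtu in (1500, 9000):
    pkts = packets_for_block(4096, mtu)
    # Wire bytes = data + per-packet headers; ignores Ethernet framing.
    efficiency = 4096 / (4096 + pkts * IP_TCP_HEADERS)
    print(f"MTU {mtu}: {pkts} packet(s) per 4K block, "
          f"~{efficiency:.1%} of wire bytes are data")
```

The per-block header overhead shrinks with jumbo frames, but the bigger practical win is fewer packets per block, so fewer interrupts and less per-packet processing on both ends.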
Typically, unless you are doing something transactional (such as an SQL server doing lots of small transactions), you will have trouble measuring the performance difference.
Personally, I avoid using iSCSI in preference to NFS or SMB, because the setup is so much simpler, and generally, if you touch none of the settings (and it is a LAN), you'll just get good results.
@PhilipDAth we're on SMB 3.02, per running
smbutil statshares -a
when the share is mounted.
I am using one of these for the switch:
And I have to say, so far the experience has been simple and straightforward except for this MTU issue. I am getting a 10Gbit link per the software on both devices. When the two are directly connected without the switch, I pull 1 gigabyte per second read from the NAS array. When I introduce the switch, the highest I might see is 400 megabytes per second. So something is limiting the speed, and in my experience it's usually a few small things. One of those, I'm guessing, is the MTU. I can configure each SFP+ port individually, but my setting isn't sticking. I've emailed MikroTik to see what they think.
In terms of topology, my internal fiber network on the property uses these switches, which have a gigabit Ethernet PoE+ switch connected to them for the Meraki APs. Then the MX65 sits at the WAN above all of that.
I'm going to guess your issue is actually micro-bursting.
I suspect you'll find that 10GbE switch doesn't have sufficient buffering to handle a NAS actually operating at line rate and is simply dropping lots of the packets. If it has a management interface, you might be able to get stats on the number of dropped packets.
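Micro-bursting is easy to illustrate: if the NAS sends a sustained burst at 10Gbps but the switch can only drain at some lower effective rate, a shallow packet buffer fills and the excess gets dropped. A toy simulation of that dynamic (every number here - drain rate, buffer size, burst length - is made up for illustration, not taken from this switch's datasheet):

```python
# Toy micro-burst model: ingress at 10 Gb/s, drain at an assumed 5 Gb/s.
INGRESS_GBPS = 10.0
DRAIN_GBPS = 5.0            # assumed effective forwarding rate, not a spec value
BUFFER_BYTES = 512 * 1024   # assumed shallow per-port buffer
PACKET = 9000               # jumbo frame
BURST_PACKETS = 200

# Bytes the switch can drain in the time one packet takes to arrive.
drain_per_arrival = PACKET * (DRAIN_GBPS / INGRESS_GBPS)

buffered = 0.0
dropped = 0
for _ in range(BURST_PACKETS):
    buffered = max(0.0, buffered - drain_per_arrival)
    if buffered + PACKET > BUFFER_BYTES:
        dropped += 1        # tail drop: no room for this frame
    else:
        buffered += PACKET

print(f"{dropped} of {BURST_PACKETS} packets dropped during the burst")
```

The point of the sketch: nothing drops until the buffer fills, then a steady fraction of the burst is lost, which TCP experiences as retransmits and a collapsed congestion window - hence throughput well below line rate.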
You'll probably find this switch is intended for desktops, which only operate at 10GbE at peak for short periods (like opening a file).
Actually, I see they publish test results. If you look at the non-blocking layer 2 throughput, you can see the capacity is 80Gbps (each port is 20Gbps full duplex, 4 x 20Gbps = 80Gbps), but the throughput is only listed as 40Gbps. So the switch backplane only has enough capacity to run the ports at half speed.
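That non-blocking arithmetic in code form (the 40Gbps figure is the published throughput number as quoted above):

```python
# Non-blocking capacity vs. published throughput for a 4-port 10GbE switch.
ports = 4
line_rate_gbps = 10                 # per direction
full_duplex_gbps = 2 * line_rate_gbps
nonblocking_capacity = ports * full_duplex_gbps  # what "non-blocking" requires
published_throughput = 40                        # Gbps, from the vendor's tests

ratio = published_throughput / nonblocking_capacity
print(f"non-blocking needs {nonblocking_capacity} Gbps; "
      f"published is {published_throughput} Gbps ({ratio:.0%} of line rate)")
```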
ps. You'll probably find that deliberately reducing the MTU on the NAS to 1500 may improve performance, as it will reduce the buffer demands on the switch and result in fewer dropped packets.
@PhilipDAth OK, I'll give that a try re MTU @ 1500.
Let's find out...
@PhilipDAth Wouldn’t there be enough room to run just two of the ports at full speed if the others are unused or is that not how it works?
It's hard to say without knowing the architecture of the switch - but my guess is that it would not help. I think the fundamental issue is that the switch won't have enough "punch" to keep up with the NAS.
Is it an option to get more 10GbE NICs and just point-to-point connect everything (you would need a subnet per point-to-point link)?
OK, great news. This switch has two modes (and two OSes): one is called RouterOS and one is SwOS (Switch OS). I had been running mine in RouterOS with bridging enabled between all the ports, but didn't realize that SwOS is literally just a dumb switch, which is really what I need. RouterOS mode at jumbo frame size yielded the lower transfer rates.

So I threw them into SwOS mode, not thinking this would matter that much between two identical ports, and changed the MTU to 1500. That yielded roughly similar transfer rates. Then I put them back to jumbo 9k frames, and we are back in the 1 gigabyte per second transfer range again. I played with it - switching the NAS and the Mac back and forth from 1500 to 9000 - and it was completely reproducible. 1500 yielded 500-600MB/sec, and 9000 yielded 900MB-1GB/sec. The switch's backend GUI confirms an 8.83Gbit/sec transfer rate, which is the highest I've seen.
...except that while the transfer speed is high, playback was weirdly out of sync when reading files over the network. I'll try rebooting both devices and trying a variety of files.
Discovered this method to calculate MTU.

8192 is the winner. Gotta remember that 28-byte overhead!
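For reference on where that 28 bytes comes from: the usual ping-based path-MTU test sends an ICMP echo with the don't-fragment bit set, and the payload size you pass to ping excludes the 20-byte IPv4 header and 8-byte ICMP header. The arithmetic as a quick sketch (interpreting "8192" as the largest payload that got through, which would imply a path MTU of 8220 - if it was instead the MTU itself, the numbers shift by 28):

```python
# The 28-byte overhead: IPv4 header + ICMP header on top of the ping payload.
IPV4_HEADER = 20
ICMP_HEADER = 8
OVERHEAD = IPV4_HEADER + ICMP_HEADER  # the 28 bytes mentioned above

def max_ping_payload(mtu):
    """Largest ping payload size that fits in one unfragmented packet."""
    return mtu - OVERHEAD

for mtu in (1500, 9000):
    print(f"MTU {mtu}: largest ping payload = {max_ping_payload(mtu)}")

# Going the other way: if 8192 is the largest payload that passes,
# the implied path MTU is 8192 + 28 = 8220.
print(f"payload 8192 implies path MTU {8192 + OVERHEAD}")
```

In practice you probe by increasing the payload until pings stop getting through with fragmentation disallowed; the last size that works, plus 28, is the path MTU.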