Going crazy here, desperately looking for some help. We have 5 terminal servers, with users on thin clients RDPing into them. With our Cisco 3560G switches we never had a problem. We switched to Meraki switches, and now brief disconnects occur at random times. Users are booted from their sessions, and when they log back in, the session continues right where it left off. We noticed our physical hosts (where the terminal servers reside) had the MTU set to 1500, so we changed the Meraki MTU setting to match, at 1500. That stopped probably 80% of the disconnects, but we can't seem to make them go away completely. I see some TCP fragmentation coming from the terminal server to the client, with a packet size of 1518. Perhaps I need to raise the MTU on the Meraki side to 1520 to account for the header?
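For reference, the 1518 number may just be the normal on-the-wire frame size rather than an oversized packet. A quick sketch of the arithmetic (assuming an untagged Ethernet II frame; an 802.1Q VLAN tag would add 4 more bytes):

```python
# Frame-size arithmetic for an untagged Ethernet II frame.
# Assumption: no 802.1Q VLAN tag (that would add 4 bytes more).
MTU = 1500        # the IP packet size the interface is configured for
ETH_HEADER = 14   # destination MAC + source MAC + EtherType
FCS = 4           # frame check sequence (CRC) trailer

frame_on_wire = MTU + ETH_HEADER + FCS
print(frame_on_wire)  # 1518
# The MTU setting counts only the IP payload, not the Ethernet header
# and trailer, so a 1518-byte frame is expected with a 1500-byte MTU.
```

If that's what the capture is showing, the frames aren't actually exceeding the MTU, and raising the switch setting to 1520 wouldn't be addressing a real overflow.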
No CRC errors that we can see. We changed uplink cables, SFP connectors, and everything else we can think of.
Any ideas? Thanks all
You should not need to mess with MTU settings for RDP servers in a local LAN environment.
It's not beyond the realm of possibility as an issue: the 3560s used a 1500-byte MTU, while all Meraki switches support jumbo frames (MTUs of around 9000 bytes).
The effective MTU is negotiated between the client and the server (via the TCP MSS), and the smaller of the two is selected. It is highly unlikely your clients have an MTU of anything other than 1500, so configuring a higher MTU on the server is likely to have zero impact. Configuring a lower MTU will result in a smaller MTU overall, but there is not a high probability of this changing your problem.
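If you want to sanity-check what MTU a host's stack believes applies toward the server, here is a rough Linux-only sketch that reads the kernel's path-MTU estimate for a connected socket (the option values are the Linux ones from `<linux/in.h>`; port 3389 and the loopback address are just placeholders, so point it at your terminal server's IP on a real client):

```python
import socket

# Linux socket-option values from <linux/in.h>; the Python socket module
# does not always expose these by name, so they are defined explicitly.
IP_MTU_DISCOVER = 10
IP_PMTUDISC_DO = 2
IP_MTU = 14

def path_mtu(host: str, port: int = 3389) -> int:
    """Return the kernel's current path-MTU estimate toward host (Linux only)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # Ask the kernel to do path-MTU discovery (sets the DF bit).
        s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
        # Connecting a UDP socket sends no packets; it just pins a route.
        s.connect((host, port))
        return s.getsockopt(socket.IPPROTO_IP, IP_MTU)
    finally:
        s.close()

# Loopback will report a large MTU; a plain Ethernet LAN path should say 1500.
print(path_mtu("127.0.0.1"))
```

If both ends report 1500, MTU is almost certainly not your problem.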
The only recent change you mention was the switches. Have you double-checked that the port connections to the server are operating at full duplex, and that there are no Ethernet errors to speak of? Ideally, configure everything for auto/auto.
Have you double-checked the connections to the clients, to make sure they are auto/auto and negotiating a full-duplex connection, and that there are no Ethernet errors reported on the MS switch port?
Anything in the Meraki event log for switches? Any spanning tree issues reported, port problems, or anything else interesting? Duplicate IP address warnings?
If you test it off one of your thin client devices and a Windows computer - are both clients equally affected?
Are there any firewalls in between the users and the servers? If so, is it reporting anything interesting?
If all else fails, you are going to need to take some packet captures at the server and client ends to understand what they are seeing.
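As a starting point, one way to do that (the interface name is an assumption, and 3389 is the default RDP port; adjust both to your environment):

```shell
# Capture RDP traffic (TCP 3389) on the server side; adjust the interface.
tcpdump -i eth0 -s 0 -w rdp_server.pcap 'tcp port 3389'

# Then, in Wireshark, a display filter that highlights the usual suspects
# around a disconnect (retransmissions, resets, zero-window stalls):
#   tcp.port == 3389 && (tcp.analysis.retransmission ||
#       tcp.flags.reset == 1 || tcp.analysis.zero_window)
```

Comparing simultaneous captures from both ends around a disconnect timestamp will usually show which side stopped talking first.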
I'm running a full Catalyst switching stack, with Meraki MXs interconnecting each location using AutoVPN. Ever since implementing the Merakis with AutoVPN, my RDP connections have been having connection issues. I performed Wireshark captures, and the packets in the captures look to be 1240 to 1400 bytes in size. I'm tempted to manually reduce the MTU on my test devices to see if it makes a difference, but at least per Wireshark, MTU shouldn't be a problem. I'll update soon.
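For anyone wanting to try the same test, this is roughly how I plan to lower the MTU on the test devices (interface/adapter names here are placeholders, substitute your own):

```shell
# Windows (run as Administrator); "Ethernet" is the adapter name:
netsh interface ipv4 set subinterface "Ethernet" mtu=1400 store=persistent

# Linux equivalent:
ip link set dev eth0 mtu 1400

# Verify the change on Windows:
netsh interface ipv4 show subinterfaces
```

Since AutoVPN tunnels add encapsulation overhead, a value around 1400 leaves headroom below the usual 1500-byte path.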
Hey @Kevin19 , were you ever able to resolve this issue? We're experiencing the exact same issue, but with our virtual machine environment between our Thin Clients (plugged into MS350 switches in a stack) and our VMware virtual machines running on a blade server within our remote data center. We get a similar issue, where random Thin Clients will freeze up for about 3 minutes and, most of the time, get disconnected from the session when it happens. The logs don't show any errors, and the ports the Thin Clients are plugged into don't display any issues and remain up. This only happens when users connect to their VMs from the Thin Clients within the office; if they connect from anywhere else (home, for example), it does not occur. I haven't been able to catch the issue at the time it occurs in a packet capture yet (especially since it's super random, but getting more frequent), but I'm not so sure I'll find anything of interest. Anyone else experiencing random disconnects on MS350 switch stacks running Layer 3?
We spent weeks doing packet captures and all sorts of troubleshooting. We finally were able to trace the disconnects to a bad fiber line between a set of switches. RDP thin clients just happened to be more "sensitive" to the network stability issues we were having compared to thick clients.
Ahhhh...not the response I was hoping for haha lol. J/k, but I am glad that you were able to resolve it. Definitely agree that the Thin Clients are more sensitive to network stability issues. Were all of your RDP sessions being disconnected at the same time? Or was it on a port-by-port basis, if you can recall? Because our issue unfortunately is on a port-by-port basis and, so far, it seems random.
Thank you for your prompt response and, once again, I'm happy that you were able to resolve your issue.
FYI, the issue I was facing turned out not to be Meraki, or MTU or routing at all. I resolved my issue with this:
Take action: Out-of-band update to address a Windows Server issue
In my case it was port by port, but the disconnects would last ~20 seconds, then reconnect quickly. Were there any firewall changes made, perhaps? Or new security measures/services in the LAN policy, since it doesn't seem to be happening when coming from home > LAN?
Ok, yeah, the symptoms you're describing are similar, but not quite the same as what we're experiencing. What we're seeing is that the sessions freeze (can't move the mouse within the VM window) for a good 30 seconds to 1 minute, and then most of the time the session disconnects and the users have to remote back in. No changes were made at all. This all started when I upgraded to MS 11 (can't remember the exact version) back in late 2019; we didn't have this issue on MS 10. And I thought MS 12.14 had resolved it (upgraded in the middle of the pandemic), but I guess there weren't enough users in the office to tell, up until now.
Oh, and I think it's important to note that the issue persists after a switch reboot, and it did go away when we reverted to MS 10, which is why I had believed it to be some weird firmware bug. Now we're on 12.14 and I'm really not sure where to go from here.