Site to Site VPN Throughput Issues but Client VPN is fine (very fast)
I'm having odd VPN throughput issues. I have a hub location with an MX84 and a remote site with an MX65, using Meraki Auto VPN (within the same organization). The MX84 is also set up with Client VPN access. Neither the site-to-site VPN nor the Client VPN configuration permits split tunneling; both default-route all traffic to the MX84 at the hub location.
Here's the odd behavior: a Client VPN connection from the same remote location to the MX84 (with the site-to-site VPN turned off) gets 20-30 Mbit/s of throughput, but the site-to-site VPN tunnel only gets 1.5 to 2 Mbit/s. Fragmentation and retransmissions do not appear to be an issue.
The MX84 has a residential AT&T Fiber connection, 1 Gbit/s bi-directional. The MX84 supports 250 Mbit/s of combined encrypted throughput.
The MX65 has a university connection with about 50-60 Mbit/s down and 30-40 Mbit/s up. The MX65 supports 100 Mbit/s of combined encrypted throughput.
I've started to wonder if AT&T is rate limiting the site-to-site (IPsec) packets but not the Client VPN traffic, which uses L2TP over IPsec. The MX65 has been set up with all security features turned off so that they are performed centrally by the MX84, but that didn't improve performance. I've also tried changing the MTU on a workstation to 1350, without any improvement.
Ideas?
To a provider, L2TP over IPsec looks exactly the same as plain IPsec. It is all encrypted.
I'm going to guess an MTU squeeze. Try this experiment: use this command on one machine at one of the sites to display all your interfaces:
netsh interface ipv4 show interfaces
Note the number of the interface configured with the IP address used for communication. Let's pretend it is interface 10.
Then issue this command (change 10 to your interface number):
netsh interface ipv4 set subinterface "10" mtu=1400 store=persistent
To undo this, repeat the command with an MTU of 1500.
If this solves it, run the command on the servers that everyone connects to (rather than having to run it on every workstation).
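For intuition on why dropping the endpoint MTU can help: each layer of tunnel encapsulation eats into the 1500-byte path MTU. Here is a rough back-of-envelope sketch; the exact ESP overhead depends on the cipher, IV size, and padding, so these byte counts are typical values (assuming AES-CBC with NAT traversal), not guaranteed ones:

```python
# Rough per-packet overhead for IPsec ESP with NAT traversal (UDP encapsulation).
# Exact values vary with cipher and padding; these are typical for AES-CBC.
OUTER_IP = 20      # outer IPv4 header
NAT_T_UDP = 8      # UDP header added for NAT traversal
ESP_HEADER = 8     # SPI + sequence number
ESP_IV = 16        # AES-CBC initialization vector
ESP_TRAILER = 18   # worst-case padding + pad-length + next-header bytes
ESP_ICV = 12       # integrity check value (e.g. HMAC-SHA1-96)

overhead = OUTER_IP + NAT_T_UDP + ESP_HEADER + ESP_IV + ESP_TRAILER + ESP_ICV
largest_inner_packet = 1500 - overhead

print(f"Tunnel overhead: ~{overhead} bytes")
print(f"Largest inner packet without fragmentation: ~{largest_inner_packet} bytes")
```

Under these assumptions the overhead is roughly 82 bytes, so an endpoint MTU of 1400 (or the 1350 already tried) keeps tunneled packets under 1500 bytes and avoids fragmentation.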
I thought L2TP used UDP port 1701, while IPsec uses UDP ports 500 and 4500. The L2TP header would be unencrypted, with an encrypted IPsec payload. The ISP / AT&T could be rate limiting UDP ports 500 and 4500 but not L2TP.
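One nuance worth noting (a simplified layering sketch, not a packet capture): when NAT traversal kicks in, L2TP/IPsec is also UDP-encapsulated on port 4500, and the L2TP header (UDP 1701) rides *inside* the encrypted ESP payload. So the outer headers a provider can classify on look the same in both cases:

```python
# Simplified on-the-wire layering (NAT-traversal case). Everything inside
# ESP is encrypted, so a provider only sees the outer IP + UDP headers.
plain_ipsec = ("IP", "UDP dst 4500", "ESP", "encrypted inner IP packet")
l2tp_ipsec  = ("IP", "UDP dst 4500", "ESP", "encrypted L2TP (UDP 1701) + PPP + inner IP")

# Outer layers a provider can classify on -- identical in both cases:
print(plain_ipsec[:3] == l2tp_ipsec[:3])  # prints True
```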
Let's see if you have an issue with asymmetric timing of the TCP connection (usually caused by circuits with different send and receive speeds).
Could you please try enabling TCP timestamps with the command below and then repeating your test:
netsh int tcp set global timestamps=enabled
You can undo the setting by setting it back to disabled.
https://rolande.wordpress.com/2010/12/30/performance-tuning-the-network-stack-on-mac-osx-10-6/
I enabled timestamps and then tested with timestamps and the other features enabled, without a significant improvement. It ranged from 0.5 Mbit/s down to 1.8 Mbit/s down across multiple back-to-back tests. Client VPN achieved a download speed of 20-22 Mbit/s during the same period. Upload performance showed similar variation. This makes me think the MX65 has a hardware issue.
I finally had time to figure out the issue. Meraki AutoVPN uses a non-standard implementation of IPsec. AutoVPN uses UDP port 9350 as a control channel to register with the cloud, then dynamically picks a transport port. I had to manually specify the transport port under the NAT traversal option and then work with my local IT staff.
The traffic was being rate limited because it was mis-categorized. Once they used my source/destination IPs and the static port information to categorize the traffic, performance went from 1.2 Mbit/s down / 0.9 Mbit/s up to 30 Mbit/s down / 34 Mbit/s up.
I would start by checking the pings between your sites. What is the round-trip time in ms?
After that I would check Group Policy and Traffic Shaping to make sure everything is set up correctly.
Also check the site-to-site outbound firewall and site-to-site inbound firewall rules.
The latency is about 254 ms between the sites.
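A 254 ms RTT alone puts a hard ceiling on a single TCP stream: throughput is capped at window size ÷ round-trip time. As a sanity check (assuming a classic 64 KB receive window with no window scaling; modern stacks usually scale beyond this), the math lands right around the 1.5-2 Mbit/s originally observed:

```python
# Bandwidth-delay product sanity check: a single TCP stream cannot exceed
# window_size / round_trip_time, regardless of link capacity.
rtt_s = 0.254            # measured RTT between the sites (254 ms)
window_bytes = 65535     # classic 64 KB TCP window, no window scaling

max_throughput_bps = (window_bytes * 8) / rtt_s
print(f"Max single-stream throughput: {max_throughput_bps / 1e6:.2f} Mbit/s")

# Window needed to actually fill a 30 Mbit/s path at this RTT:
target_bps = 30e6
needed_window = target_bps / 8 * rtt_s
print(f"Window needed for 30 Mbit/s: {needed_window / 1024:.0f} KiB")
```

That works out to roughly 2 Mbit/s per stream, and filling a 30 Mbit/s path at this RTT needs a window of roughly 930 KiB, so window scaling (or multiple parallel streams) matters a lot on this link.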
