I have 10 MXs that all talk to the same remote device. Many mornings, I will lose some of my site-to-site VPN connections to the non-Meraki peer.
What can I change or extend so that the MXs will keep trying longer to maintain the connection, even if it is the remote device dropping it? Usually, a power cycle on either end reestablishes the connection.
Hi @MarcW ,
Sounds like your non-Meraki peer is tearing the tunnels down due to inactivity.
Meraki devices send DPD (Dead Peer Detection) packets to keep the tunnels alive when there's no traffic. However, not all VPN peers reply to DPD.
Therefore, the best way to keep site-to-site tunnels up is to generate interesting traffic - e.g., a continuous ping from a host inside a local subnet to a destination on the other side.
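If it helps, here's a minimal sketch of that keepalive idea, run from any always-on host behind the MX. The target address and interval below are placeholders for your environment, not Meraki defaults; the only requirement is that the interval is shorter than whatever idle timeout the non-Meraki peer uses to tear the tunnel down.

```python
#!/usr/bin/env python3
"""Keepalive sketch: ping a host on the far side of the tunnel at a fixed
interval so the peer always sees "interesting traffic"."""

import platform
import subprocess
import time

REMOTE_HOST = "10.20.30.40"   # placeholder: a host in the remote subnet across the tunnel
INTERVAL_SECONDS = 60         # placeholder: keep this below the peer's idle timeout

def ping_once(host: str) -> bool:
    """Send a single ICMP echo request; return True if the host replied."""
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.run(
        ["ping", count_flag, "1", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

if __name__ == "__main__":
    while True:
        if not ping_once(REMOTE_HOST):
            print(f"No reply from {REMOTE_HOST} - tunnel may be down or renegotiating")
        time.sleep(INTERVAL_SECONDS)
```

A scheduled task or cron job running something like this on one host per site is usually enough to keep the tunnel from going idle.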
Hi,
This is the 3rd time I'm seeing this: MarcW, myself, and https://community.meraki.com/t5/Security-SD-WAN/Traffic-not-getting-initiated-from-IKEv1-and-IKEv2-f...
Are you running MX 18.2 by any chance?
4 of the 5 are due for the update. Coordinating that change with my local admin as we speak.
What we saw were UPLINK notices almost daily for a mix of MXs. When I started looking for events, there were no failures. When I called the local admin, he said he'd been using the reboot as a workaround for the issue. I was relieved that the MXs weren't "failing" several times a week - lol - but I then had to change my approach after identifying the real issue.
After I update the firmware, we'll see if there are any settings to implement that accommodate latency/packet drops.
How about putting an MX at the other end? 😉
🙂 if only they were our direct client.
Firmware updated - no joy
Continuous ping - no joy
At this point, with the same behavior across 8+ site-to-site connections to a single destination, all dropping the same way (and possibly the same thing happening to other businesses), and needing reboots every other day or so, I am going to conclude it is them, not us. Thanks!!
What version were you running prior to the upgrade?
18.211
Great. So everyone on MX 18.107 or 18.211 seems to be having the same issues. Somehow support doesn't know that it is affecting multiple clients. Curious.