Hello Community,
We were forced to swap out our vMX100 for a vMX-M, and now we cannot reach the servers in AWS. It looks like the AWS server instances no longer have a return route to the vMX-M.
1) YES. AutoVPN is up - able to ping the LAN interface of the redeployed vMX-M over AutoVPN.
* this shows the main office can reach the vMX-M directly on its AWS subnet
2) YES. Local connectivity is good within AWS - able to ping the servers from the vMX-M.
* this shows the vMX-M is in the correct subnet - local connectivity works
3) YES. The static AWS route for all of 10.0.0.0/8 (private) was changed to point to the new vMX-M instance.
* this should have been the fix. Meraki Support also thought this would be enough.
4) NO. Cannot ping the servers behind the vMX-M from the main office over AutoVPN.
* this is the issue. The servers do NOT know how to route back to the main office (over AutoVPN).
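In case it helps anyone reproduce step 3: a quick way to confirm the route change actually took effect is to dump the route table and look at the state of the 10.0.0.0/8 route. When the old vMX100 instance is terminated, AWS marks any route still pointing at it as "blackhole", so the route may need to be re-pointed at the new instance's ENI. (A sketch only - the route-table and ENI IDs below are placeholders, and it assumes the AWS CLI is configured for the account.)

```shell
# List route tables carrying a 10.0.0.0/8 route and show each route's
# target and state. A "blackhole" state means the target (e.g. the old
# vMX100 instance/ENI) no longer exists and the route must be re-pointed.
aws ec2 describe-route-tables \
  --filters "Name=route.destination-cidr-block,Values=10.0.0.0/8" \
  --query "RouteTables[].{Id:RouteTableId,Routes:Routes[].{Dest:DestinationCidrBlock,Target:NetworkInterfaceId,State:State}}" \
  --output table

# Re-point the route at the new vMX-M's network interface
# (rtb-/eni- IDs here are placeholders, not real values):
aws ec2 replace-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 10.0.0.0/8 \
  --network-interface-id eni-0123456789abcdef0
```

It is also worth confirming that the table you edited is the one actually associated with the servers' subnet - a subnet with its own route table ignores the VPC's main table.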
Note-1: Meraki Support verified what I am seeing via packet captures.
Note-2: I have console access to one of the AWS servers, and it shows the same result. YES - the server can ping every device in the AWS server subnet, including the vMX-M firewall. NO - it canNOT ping or traceroute back over AutoVPN to the main office. Employees cannot connect to the AWS servers.
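One more thing I am trying to rule out (my assumption, not something Support confirmed): a freshly deployed EC2 instance has the source/destination check enabled by default, and an instance that forwards traffic for other prefixes - as the vMX does for AutoVPN - silently drops that traffic until the check is disabled. Hedged commands to inspect and disable it (the instance ID is a placeholder for the new vMX-M's instance ID):

```shell
# Show whether the source/destination check is enabled on the new vMX-M
# (i-0123456789abcdef0 is a placeholder, not a real instance ID):
aws ec2 describe-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --attribute sourceDestCheck

# Disable the check so the instance can forward traffic it did not originate:
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --no-source-dest-check
```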
Very simple setup. The Meraki documentation does not cover a forced redeployment to a vMX-M. What is the issue? How do I get the AWS servers to recognize the new vMX firewall?
Thanks in advance to anyone who can help solve this. I thought this was going to be a 15min upgrade. 😰