Hi.
I have just set up a Meraki VMX100 in Azure. We are using it to create a site-to-site VPN with our on-premises MX64 (soon to be upgraded to an MX84). The documentation I obtained from Cisco was a little outdated; the biggest issue is that you can't bind a route table to the VMX100's NIC, because it's a managed application and there is a system-configured deny assignment, even if you are the owner. I did create a route table and bound it to the vnet the VMX100 is associated with. Even so, the VMX is not routing correctly, although the routes I set up have been shared with the on-premises MX64.
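For context, the route-table setup I'm describing looks roughly like this in Azure CLI. All of the names here are placeholders for my environment, and the prefixes/IPs (my on-premises range and the VMX's internal address 10.0.9.4) are assumptions based on the traces below; one thing I learned along the way is that Azure associates route tables with subnets, not whole vnets or NICs:

```shell
# Placeholders: my-rg, vmx-vnet, vm-subnet, and the address prefixes
# are examples from my environment -- adjust to yours.

# Create a route table
az network route-table create \
  --resource-group my-rg \
  --name vmx-routes

# User-defined route: send on-premises-bound traffic (assumed 10.0.0.0/24)
# to the VMX's internal IP as a "virtual appliance" next hop
az network route-table route create \
  --resource-group my-rg \
  --route-table-name vmx-routes \
  --name to-onprem \
  --address-prefix 10.0.0.0/24 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.9.4

# Route tables attach at the subnet level, not the vnet or NIC level
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name vmx-vnet \
  --name vm-subnet \
  --route-table vmx-routes
```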
The site-to-site tunnel is up, running, and passing traffic. From a physical machine on premises, I can tracert to the VMX using its internal, non-public IP address. This is exactly what I expected and what should be happening.
From on-premises physical server to Azure VMX100:

>tracert 10.0.9.4

Tracing route to 10.0.9.4 over a maximum of 30 hops

  1    <1 ms    <1 ms    <1 ms   10.0.0.1   (on-premises Meraki)
  2    20 ms    21 ms    20 ms   10.0.9.4   (VMX100 internal IP)

Trace complete.
I have two destination networks with virtual machines that the VMX is supposed to be routing to. One is a subnet of the vnet the VMX is on, 10.0.9.32/28, and the other is in a separate vnet, 10.0.8.32/28.
Unfortunately, when I do a tracert to a VM on the 10.0.8.32/28 subnet, I get the output below. The first hop goes to my on-premises Meraki, as it should, but then some sort of weirdness happens: the second hop is 6.12.83.237, an address I did not configure and had to research just to find out where it was coming from. The VMX is controlled by an Azure managed application that prevents users from accessing it directly, so I can't even see where this is coming from.
From on-premises physical server to Azure VM:

>tracert 10.0.8.37

Tracing route to 10.0.8.37 over a maximum of 30 hops

  1    <1 ms    <1 ms    <1 ms   10.0.0.1      (on-premises Meraki)
  2    22 ms    21 ms    20 ms   6.12.83.237   (??? some Azure public IP)
 30     *        *        *      Request timed out.

Trace complete.
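One check that might reveal where that mystery hop comes from: Azure can show the routes actually in effect on a VM's NIC, which includes the default system routes layered under any user-defined routes. A sketch, assuming placeholder resource-group and NIC names (I can only run this against a NIC I own, such as a test VM's, since the managed app blocks access to the VMX's own resources):

```shell
# List the effective routes on a NIC -- i.e., what Azure will actually
# use as the next hop for each prefix, after combining system routes,
# UDRs, and any BGP/VPN-learned routes.
# "my-rg" and "test-vm-nic" are placeholders.
az network nic show-effective-route-table \
  --resource-group my-rg \
  --name test-vm-nic \
  --output table
```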
For troubleshooting purposes, I created a test VM in the same vnet as the VMX100 but on a different subnet, 10.0.9.32/28. I created a new route table and bound it to the NIC of the test VM, then ran another trace. This time it bombed out completely, for whatever reason, never making it past my on-premises MX64.
From on-premises physical server to Azure test VM:

>tracert 10.0.9.36

Tracing route to 10.0.9.36 over a maximum of 30 hops

  1    <1 ms    <1 ms    <1 ms   10.0.0.1   (on-premises Meraki)
 30     *        *        *      Request timed out.
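One more thing I've been trying to verify: a virtual appliance in Azure can only forward traffic destined for other addresses if IP forwarding is enabled on its NIC, and the symptom (traffic reaches the VMX but goes no further) is consistent with that being off. The managed-app deployment should set this itself, and its deny assignment may block changes, but the check looks roughly like this (NIC and group names are placeholders, and the exact property name can vary by CLI version):

```shell
# Check whether IP forwarding is enabled on the appliance's NIC.
# "my-rg" and "vmx-nic" are placeholders.
az network nic show \
  --resource-group my-rg \
  --name vmx-nic \
  --query enableIpForwarding

# If it comes back false (and the managed app's deny assignment
# doesn't block the update):
az network nic update \
  --resource-group my-rg \
  --name vmx-nic \
  --ip-forwarding true
```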
What started all of this was my need to RDP with the Windows RDP client, as opposed to the Azure RDP client, which only supports copy and paste, not file transfers. I get different results depending on which RDP client I'm using: the Azure RDP client tells me another computer has disconnected me from the session with the VM I'm trying to connect to, while the Microsoft RDP client just tells me the VM is unreachable (probably because the routing isn't correct).
So, has anyone here set up a VMX100 in Azure, created a site-to-site VPN, and tried to route traffic to different vnets once the packets come through the tunnel? At this point, the VMX isn't even routing traffic to machines in its own vnet.
ANY insight would be welcomed and appreciated. I have been struggling with this for two weeks now, including weekends, and I am at a loss as to what else to try.
Thanks in advance!
Sharyn