IPsec with eBGP to Azure Virtual WAN

InfraSE2020
Here to help


Hi, 

 

We have set up an IPsec tunnel from our MX to Azure vWAN (VPN gateway), which works fine when using static routing. We are now trying to make use of eBGP over IPsec, but we're not having much luck.

 

The VPN gateway has the following configuration; the public IPs have been changed for obvious reasons.

"bgpSettings": {
"asn": 65515,
"peerWeight": 0,
"bgpPeeringAddresses": [
{
"ipconfigurationId": "Instance0",
"defaultBgpIpAddresses": [
"10.104.0.13"
],
"customBgpIpAddresses": [],
"tunnelIpAddresses": [
"4.251.52.228",
"10.104.0.4"
]
},
{
"ipconfigurationId": "Instance1",
"defaultBgpIpAddresses": [
"10.104.0.12"
],
"customBgpIpAddresses": [],
"tunnelIpAddresses": [
"132.123.66.99",
"10.104.0.5"
]
}
]
},

 

What's confusing is what to set on the Meraki end. We logged a ticket with support and they weren't much help, but they advised the following:

 

"Looking at the backend logs on the MX and what's on the dashboard, the peering stage is stuck at Connect. This indicates TCP 179 handshake is failing.

Even though it is outside Meraki Support's scope to look at other vendors' configurations, and I personally don't know much about Azure, looking at the below my guess is you might have to use the subnet from 10.104.0.13. It would be a /30 shared subnet and both the Z3 and the Azure peer should be in it. According to an online subnet calculator, if Azure is using 10.104.0.13, then the Z3 should be 10.104.0.14."

 

IPSec Subnet: 10.104.0.12/30

BGP Source IP: 10.104.0.14

BGP Neighbor IP: 10.104.0.13

Remote AS: 65515 
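
For reference, a quick check of the /30 maths with Python's ipaddress module (purely illustrative):

import ipaddress

# The /30 inside-tunnel subnet shared between Azure and the MX/Z3.
subnet = ipaddress.ip_network("10.104.0.12/30")

# A /30 has exactly two usable host addresses.
hosts = list(subnet.hosts())
print(hosts)  # [IPv4Address('10.104.0.13'), IPv4Address('10.104.0.14')]

# If Azure takes the first usable address (.13), the MX takes the second (.14).
azure_peer, mx_source = hosts
print(f"BGP neighbor (Azure): {azure_peer}, BGP source (MX): {mx_source}")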

 

Unfortunately we cannot get any meaningful logs out of the MX to see what might be causing the issue. Has anyone else set this up who can point me in the right direction?

 

TIA. 

7 Replies
PhilipDAth
Kind of a big deal

I was unable to get this working with Amazon AWS either.

 

I believe this feature was designed primarily for use with Cisco Catalyst SD-WAN integration.

MilesMeraki
Head in the Cloud

This is a shame. So no eBGP working with an AWS Virtual Private Gateway either?

 

The MX's non-Meraki VPNs don't play nicely with the AWS VPG's dual tunnels, and in a static scenario you have to bring down a tunnel manually or use an API call to do it (see the sketch below).
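
To illustrate the kind of API call I mean, here's a rough sketch assuming the Dashboard API v1 thirdPartyVPNPeers endpoint; the org ID, API key and peer name are hypothetical placeholders, so check the current API docs before relying on anything like this:

import requests

API_KEY = "your-dashboard-api-key"  # hypothetical placeholder
ORG_ID = "123456"                   # hypothetical placeholder
URL = f"https://api.meraki.com/api/v1/organizations/{ORG_ID}/appliance/vpn/thirdPartyVPNPeers"
HEADERS = {"X-Cisco-Meraki-API-Key": API_KEY, "Content-Type": "application/json"}

# Fetch the current non-Meraki VPN peer list, drop the secondary AWS tunnel,
# and write the list back -- effectively taking that tunnel out of service.
peers = requests.get(URL, headers=HEADERS).json()["peers"]
peers = [p for p in peers if p["name"] != "aws-vpg-tunnel-2"]  # hypothetical peer name
requests.put(URL, headers=HEADERS, json={"peers": peers})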

 

I was hoping the eBGP feature here would resolve that.

 

Eliot F | Simplifying IT with Cloud Solutions
Tony-Sydney-AU
Meraki Employee All-Star

Hi @InfraSE2020 ,

 

Thanks for bringing this topic here. Feel free to share the support case number with me over private message here in the community portal.

 

In the meantime, be sure to check this document and make sure your MX is running the most recent firmware version, 19.2.2, which supports multihop BGP sessions.

 

In addition, your idea makes total sense. I checked this Microsoft document and it seems to me that the first IP within the /30 subnet would belong to Azure, but maybe it's the opposite.

 

Did you test configuring the MX BGP interface as 10.104.0.13?

InfraSE2020
Here to help

Hi @Tony-Sydney-AU 

I have followed the guide you provided but unfortunately it hasn't helped. 

I have tried setting my source IP to 10.104.0.14 and the neighbour to 10.104.0.13, but it still doesn't work.

 

[screenshot attached]

 



I've PM'ed you the ticket number to see if you can provide any guidance. 

Thanks
Ashley

Tony-Sydney-AU
Meraki Employee All-Star

Hi @InfraSE2020 ,

 

Yes, so it doesn't seem to be related to the inside tunnel IPs being inverted.

 

Thanks for sharing the support case info over PM. I was able to look at your non-Meraki VPN peer and found the remote network seems to be incorrect.

 

As per the document I mentioned before, "Non-Meraki peer must support route-based VPN" is required, along with the other prerequisites.

 

So in route-based VPNs the remote network is always 0.0.0.0/0. However, you have Azure's defined IPsec subnet, 10.104.0.12/30, in your config.

 

I believe this is creating an issue with the way the inside tunnel virtual interface communicates; hence, the BGP state is never established.

 

In addition, don't worry about the remote subnet 0.0.0.0/0, as it doesn't create a default route in this case. It is there just to allow the inside tunnel (a.k.a. VTI), and all routing is controlled by the routing engine together with BGP.
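
As a rough illustration only (hypothetical names and values; field names follow the Dashboard API v1 thirdPartyVPNPeers schema, which is worth verifying), a route-based peer entry would look something like this:

# Sketch of a single non-Meraki (Azure vWAN) peer with a route-based
# remote network of 0.0.0.0/0 -- all values here are hypothetical.
azure_vwan_peer = {
    "name": "azure-vwan-instance0",
    "publicIp": "4.251.52.228",
    "privateSubnets": ["0.0.0.0/0"],  # route-based: match everything, let BGP decide the routes
    "secret": "pre-shared-key-here",
    "ikeVersion": "2",
}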

 

By the way, this topic reminds me of another requisite: "When establishing an eBGP peering on the WAN in routed mode, NAT must be disabled on that uplink (see NAT Exceptions with Manual Inbound Firewall) otherwise the subnets advertised by the MX will be behind NAT and unreachable via eBGP subnets upstream." (document source here)

 

So here's another config that you need to check: be sure to open the menu Organisation -> Early Access and then opt in to (enable) "NAT Exceptions with Manual Inbound Firewall".

 

I also shared my findings with my Support colleague working on your ticket. You may get a reply from him first.

 

In summary, change the remote networks to 0.0.0.0/0 in your VPN peer settings and enable NAT exceptions. Doing this should fix your BGP.

InfraSE2020
Here to help

Hi @Tony-Sydney-AU - thanks for the update. 

I have opted in to disable NAT on the uplink and ticked the option for uplink 1; however, I am having an issue with the peer settings.

 

I am unable to remove or change the 10.104.0.12/30 address, as it's a requirement to enter a /30 address in the IPsec subnet field in the VPN peer settings.

Any ideas on how I can achieve this?

 

 

Tony-Sydney-AU
Meraki Employee All-Star

Hi @InfraSE2020 ,

 

Sorry about the delay. I checked and found it was my confusion; my apologies.

 

Your configs were correct. I noticed you changed the inside tunnel subnet from 10.104.0.12/30 to a 169.254.X.Y/30 and the behaviour was the same.

 

I did a lab at home and was able to reproduce the issue. I have an MX67 and saw exactly the same behaviour you're having.

 

So I had to check further with internal teams and found there is some ongoing work in firmware 19.2.2 regarding the BGP engine and the inside tunnel interface.

 

And then today I saw that you changed your VPN with AWS to static routing, with the remote network set to a summary/supernet that contains all your VPC subnets.

 

It seems to be working fine and that's a good workaround at the moment.
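
If anyone needs to work out such a summary, here's a quick sketch with Python's ipaddress module (the VPC subnets are hypothetical):

import ipaddress

# Hypothetical VPC subnets to be reached over the static-route VPN.
vpc_subnets = [
    ipaddress.ip_network("10.20.0.0/24"),
    ipaddress.ip_network("10.20.1.0/24"),
    ipaddress.ip_network("10.20.2.0/24"),
]

# Widen the first subnet's prefix until it covers every VPC subnet.
summary = vpc_subnets[0]
while not all(net.subnet_of(summary) for net in vpc_subnets):
    summary = summary.supernet()

print(summary)  # 10.20.0.0/22 -- use this as the single remote network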

 

Let's keep an eye on the firmware release feed and check if the next MX firmware gets BGP flowing through the inside tunnel interface in a better way.
