MX Routing w/ 2 Hubs

johnrussiv
New here

Hello

 

We have two data centers that are connected to each other with dark fiber. We also have about 50 remote sites that connect into the DCs using Meraki MXs. There are dual MX250s at each data center acting as hubs, and everything from MX64s up to MX105s at the edge sites.

 

The majority of our production happens in DC1; it is the hub preferred by the edge MXs, and 95% of traffic goes there. We have a few VLANs in DC2 with prod servers, and some sites reach DC2 through that hub rather than through the DC1 hub. I need to control this and stop it. We use OSPF to distribute the route info of the remote sites connected to the hubs.

 

I would like to have both hubs set on the remote MXs, with the preference set to DC1 and DC2 left as failover. But some sites with that setting will send the DC2 local-network traffic directly there and cause asymmetric routing.

 

I am looking to find out what the best practice is:

a. Do I even need to have local networks programmed on the hub MXs, since we run OSPF?

b. If local networks are needed, should they be on the primary hub only?

2 Replies
MartinLL
Getting noticed

If possible, I would consider switching to BGP.

If that is not possible, you would need a way to advertise your DC subnets into the AutoVPN. This can be done using static routes or local networks. OSPF on the MX only advertises networks to OSPF peers; it does not learn routes from OSPF peers.
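
For context, here is a minimal sketch of what the core-switch side of that OSPF peering might look like (IOS-style syntax; the transit subnet 10.10.255.0/24 and the process ID are hypothetical placeholders, not anything from your network):

! Core-side adjacency with the MX hub on a hypothetical transit VLAN.
! The hub advertises the remote-site (AutoVPN) routes to the core here;
! the MX will not install routes the core advertises back.
router ospf 1
 network 10.10.255.0 0.0.0.255 area 0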

 

In your case, I would opt for local networks. Add the DC1 and DC2 networks to both of your hub pairs, and make sure that there is routing for all networks across your dark fiber.
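
For example, on the DC1 core you could point the DC2 networks across the dark fiber with something like the following (a rough IOS-style sketch; the 10.20.0.0/16 DC2 supernet and the 10.255.0.2 dark-fiber next hop are made-up values), with the mirror-image route on the DC2 core:

! DC1 core: send traffic for DC2's subnets over the dark-fiber link.
ip route 10.20.0.0 255.255.0.0 10.255.0.2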

 

This way, whichever hub MX is highest on the priority list under Site-to-site VPN will always be selected; in your case, that is DC1. If DC1 fails, all traffic flows to DC2 across your AutoVPN.

Seconding BGP between the concentrators and your core(s)! We have a very similar setup: two DCs, each with two hubs (each hub on a different ISP), the DCs connected via dark fiber, and ~200 spoke sites participating in the AutoVPN. All the spokes are full-tunnel AutoVPN.

 

We have BGP stood up between the concentrators and the core L3 switches at both DCs, with the core L3 switches advertising their local supernet into the AutoVPN and receiving routes to the spokes from it. The BGP routes are redistributed into EIGRP, and the core L3 switches at both DCs share routes with each other via EIGRP. We control/massage load by adjusting the hub priority on a per-site basis to spread the love between both data centers (and between the concentrators/ISPs at those data centers).
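
Roughly, the core-side config looks something like this (an IOS-style sketch only; the ASNs, neighbor addresses, supernet, and EIGRP AS number are all hypothetical placeholders, not our real values):

! eBGP from the DC core to the two local MX concentrators,
! advertising the local DC supernet into the AutoVPN.
router bgp 65001
 neighbor 10.10.255.1 remote-as 64512
 neighbor 10.10.255.2 remote-as 64512
 network 10.10.0.0 mask 255.255.0.0
!
! Redistribute the spoke routes learned via BGP into EIGRP so the
! other DC's core learns them over the dark fiber.
router eigrp 100
 redistribute bgp 65001 metric 1000000 10 255 1 1500

The hub MXs then advertise the spoke routes to the cores over those eBGP sessions, and each core hands its learned spoke routes to the other DC via EIGRP.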

 

Prior to standing up BGP between the concentrators and the core switches, we were using OSPF and 'local networks' to advertise the DC supernets into the AutoVPN. However, in that setup, if we lost one or both of the concentrators at one DC, the routes through that VPN concentrator never dropped out of AutoVPN, which caused a black-hole situation. That may have been a configuration issue on our side; not sure.

 

One gotcha I remember having to chase up was getting Meraki support to stop the hubs from routing between each other over the AutoVPN (instead of dumping traffic to the L3 core and letting it route across the dark fiber).
