We are full-stack Meraki - MX, MS, MR, MV, etc.
We have an HQ site with a production virtual server infrastructure (Dell VxRail).
We have an identical DR site in a data centre 30 miles away.
We have a leased line and a backup line at each site, and a warm spare HA pair of MXs.
We've got Auto VPN connecting all sites.
We tried NAT translation for DR, re-IPing, etc., and never really got it working for the few VMs that HAD to retain their IP addresses.
So we bit the bullet and got a cheap Layer 2 point-to-point QinQ link between HQ and our DC.
The question now is how Meraki handles the Layer 2: do we join all the Meraki hardware in the DC to the HQ network in the dashboard, do we trunk/route certain VLANs across the link, or do we stretch the VLANs?
Basically we want server A (10.10.10.1) to move from HQ to DR and stay on 10.10.10.1 (forget the gateway for now).
At present the 10.10.10.x subnet at HQ is VLAN 10, the 'server VLAN'.
At our DR site we also have a server VLAN, but it's VLAN 20 on 10.10.20.0/24.
I know that the Layer 2 link essentially makes the DR switch no different from the other distribution/edge switches at HQ that hang off our core switches. E.g. our finance switch is connected with fibre to the core switch at HQ and is 100m away... in theory the DR switch and infrastructure are now just the same, only 30 miles away...?
Normally all inter-site traffic (i.e. backups, replication, etc.) ran over the Auto VPN.
What's the best way to get that traffic running over the Layer 2 p2p link now instead of over the Auto VPN?
As you said, your new design is Layer 2, so the provider hands it off to you as Layer 2.
So you could perfectly well add the switch in the DC to your HQ network.
I don't know where your gateways live (on your core MS or on your MX), so make sure the interfaces for both VLAN 10 and VLAN 20 live on the same device.
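If it helps, here's a quick way to check that from the dashboard API - a minimal sketch with the Meraki Python SDK, where the API key, switch serial and network ID are placeholders you'd swap for your own:

```python
# Minimal check with the Meraki Python SDK (pip install meraki).
# The serial and network ID below are placeholders - swap in your own.
import meraki

dashboard = meraki.DashboardAPI('YOUR_API_KEY')

HQ_CORE_SERIAL = 'Q2XX-XXXX-XXXX'   # HQ core MS (placeholder)
HQ_NETWORK_ID = 'N_123456789'       # HQ dashboard network (placeholder)

# L3 interfaces (SVIs) defined on the core switch - if VLAN 10/20 show up
# here, the gateway lives on the MS.
for svi in dashboard.switch.getDeviceSwitchRoutingInterfaces(HQ_CORE_SERIAL):
    print('MS SVI:', svi.get('vlanId'), svi.get('subnet'), svi.get('interfaceIp'))

# VLANs defined on the MX - if VLAN 10/20 show up here instead, the gateway
# lives on the appliance.
for vlan in dashboard.appliance.getNetworkApplianceVlans(HQ_NETWORK_ID):
    print('MX VLAN:', vlan.get('id'), vlan.get('subnet'), vlan.get('applianceIp'))
```

Whichever device returns VLAN 10 and 20 is where those gateways live today.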
Also, for your original design without the Layer 2 link: if both your local DC and your DR site had been behind concentrator-mode MXs, you could have had identical subnets at both sites.
Thanks - our gateway for clients and servers is on the core switches at each site, with a 0.0.0.0/0 route to the MXs for internet-bound traffic.
Logically I can see how and why we could move the solitary switch in our DC into the HQ network, but doing that scares me.
Never used concentrator mode and not sure I fully understand it. We did use Meraki client VPN, but now we use Cisco AnyConnect to an FTD NATted behind the MXs (which isn't the best setup).
>The question now is how Meraki handles the Layer 2: do we join all the Meraki hardware in the DC to the HQ network in the dashboard, do we trunk/route certain VLANs across the link, or do we stretch the VLANs?
I would stretch all the VLANs.
I would normally make the MX at the DR site a warm spare for the MX at the primary site - HOWEVER when I do this, I always use a pair of P2P circuits (and usually use LACP to join them together). This is because if that single P2P circuit fails, the warm spare MX will think that the primary MX has failed, and they will both go live - most likely causing an outage.
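If you do add that second circuit later, the aggregate itself is quick to build - a rough sketch with the Meraki Python SDK, assuming the circuits land on ports 47/48 of the HQ core (all IDs are placeholders, and you'd do the same on the DR side in its own network):

```python
# Rough sketch: bundle the two P2P circuits into one LACP aggregate on the
# HQ core. Network ID, serial and port IDs are placeholders.
import meraki

dashboard = meraki.DashboardAPI('YOUR_API_KEY')

HQ_NETWORK_ID = 'N_123456789'       # network containing the HQ core (placeholder)
HQ_CORE_SERIAL = 'Q2XX-XXXX-XXXX'   # HQ core MS (placeholder)

# Ports 47 and 48 are assumed to face the two P2P circuits.
dashboard.switch.createNetworkSwitchLinkAggregation(
    HQ_NETWORK_ID,
    switchPorts=[
        {'serial': HQ_CORE_SERIAL, 'portId': '47'},
        {'serial': HQ_CORE_SERIAL, 'portId': '48'},
    ],
)
```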
I think I would put the switches at the DR site in a separate dashboard network, mainly so that automatic firmware upgrades can be scheduled on different days. Otherwise, when a switch firmware upgrade is applied automatically, it will take both production and DR offline at the same time.
Another option - if you have an L3 switch in both DR and production, you could configure a stub network between the MX in DR and that L3 switch, and the same in production. When you use a stub like this, AutoVPN allows two different sites to advertise a static route to the same subnet. So you could use both the primary and DR MXs in hot/hot mode.
You will need to configure routing and smarts between the primary and standby L3 switches to ensure they direct traffic towards the live MX.
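As a rough illustration of the hot/hot idea - this is only a sketch with the Meraki Python SDK, and the stub subnets, next-hop IPs and network IDs are all assumptions - each MX gets a static route towards its local L3 switch for the shared subnet and then advertises it into AutoVPN:

```python
# Sketch: both MXs advertise the same server subnet into AutoVPN via a
# static route towards their local L3 switch. All IDs/IPs are placeholders.
import meraki

dashboard = meraki.DashboardAPI('YOUR_API_KEY')

SHARED_SUBNET = '10.10.10.0/24'
SITES = {
    'N_HQ_NETWORK_ID': '10.10.99.2',   # next hop = HQ L3 switch on the stub (assumed)
    'N_DR_NETWORK_ID': '10.10.98.2',   # next hop = DR L3 switch on the stub (assumed)
}

for network_id, next_hop in SITES.items():
    # Static route on the MX pointing at the L3 switch that owns the subnet
    dashboard.appliance.createNetworkApplianceStaticRoute(
        network_id, name='Server VLAN via L3 switch',
        subnet=SHARED_SUBNET, gatewayIp=next_hop,
    )
    # Include that route in AutoVPN so both sites advertise it.
    # Note: this PUT replaces the subnet list, so include any other local
    # subnets you still want in the VPN.
    dashboard.appliance.updateNetworkApplianceVpnSiteToSiteVpn(
        network_id,
        mode='hub',
        subnets=[{'localSubnet': SHARED_SUBNET, 'useVpn': True}],
    )
```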
Thanks Phillip,
That's an interesting point about the dual circuits - we had the option but declined it because of cost and because we hadn't fully thought it through.
What I'm getting a little confused about is that we have things like Cisco ISE, Duo proxies and AnyConnect FTDs at both sites in a failover capacity, so if we stretch VLANs and trunks across both sites, are we going to end up with routing issues?
Setting aside the cross-site HA pairs and all that clever stuff for a moment, the real reason for the link was to take backup and replication traffic out of the Auto VPN and just shunt it down the LAN extension.
It's typical though - now that we have it, it has opened up a whole can of worms as to what we could do with it.
All our switches are MS350s, so they're fully routing (L3) switches, and the LAN extension terminates on a port on each of the routing/gateway/core switches.
I'm almost tempted to say I'm not worried about the MXs for now - I just want to be able to move a VM from site A to site B and keep the same IP address.
I'd then like to move Veeam backup/replication traffic over the LAN extension as well.
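For my own notes, this is roughly what I'm picturing on the two cores via the dashboard API - just a sketch, and the port number, transit VLAN 999 and transit IPs are assumptions I'd need to firm up (it also assumes the transit SVIs already exist on each core):

```python
# Sketch: trunk the LAN extension at both ends (stretched server VLAN 10 plus
# a routed transit VLAN 999), then steer backup/replication traffic over the
# p2p link with a static route. Serials, port and IPs are placeholders.
import meraki

dashboard = meraki.DashboardAPI('YOUR_API_KEY')

HQ_CORE = 'Q2AA-AAAA-AAAA'   # HQ core MS350 (placeholder)
DR_CORE = 'Q2BB-BBBB-BBBB'   # DR core MS350 (placeholder)
LAN_EXT_PORT = '48'          # port the LAN extension lands on (placeholder)

for serial in (HQ_CORE, DR_CORE):
    dashboard.switch.updateDeviceSwitchPort(
        serial, LAN_EXT_PORT,
        name='LAN extension',
        type='trunk',
        vlan=999,                 # native = transit VLAN (assumed 10.10.254.0/30 SVIs)
        allowedVlans='10,999',    # stretch server VLAN 10, carry transit VLAN 999
    )

# HQ core reaches the DR server VLAN over the transit VLAN instead of its
# default route to the MX (and therefore instead of AutoVPN); mirror this on
# the DR core for whichever HQ subnet the Veeam traffic comes from.
dashboard.switch.createDeviceSwitchRoutingStaticRoute(
    HQ_CORE, subnet='10.10.20.0/24', nextHopIp='10.10.254.2',
    name='DR server VLAN via LAN extn',
)
```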