Can someone validate my design idea for new Switch Stacks and implementing OSPF?

Packet7hrower
Here to help


I've inherited a network and have been tasked with deploying new Core and Access Switches.


Below is my plan:

[Diagram: Blank-diagram-Page-1-4.png]

 

The current "core" switches are MS220's that will all need to be replaced soon due to EoL. Currently all inter-VLAN Routing is handled on the single MX over a lovely sole 1Gbit uplink.

 

Currently, Building B connects directly back to Building A via a direct Fiber Run. This link is currently Layer 2.

 

Building C connects directly back to Building A via another direct Fiber Run. This site is a bit different, where Building C's Core Switch Stack (MS250's) currently handles all inter-VLAN Routing. All non-local traffic is sent across the Fiber back to Building A.

 

All WAN Circuits are currently at Building A.

 

They will be running a third Direct Fiber path from Building C to Building B. The Fiber was cut last year, and they obviously want to mitigate that. This Fiber path will run opposite to the current path to Building A, and will also enter/exit each location from a different side and conduit.

 

My plan is to re-IP Building B onto its own Subnet so I can implement OSPF.
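As a rough sketch of what I have in mind for the Building B stack (hypothetical addressing, shown in IOS-XE style syntax purely for illustration - in the dashboard this would just be the stack's L3 interfaces plus its OSPF settings):

    ! Hypothetical /30 transit link to the Collapsed Core at Building A
    interface TenGigabitEthernet1/1/1
     no switchport
     ip address 10.0.12.2 255.255.255.252
     ip ospf network point-to-point
    !
    ! Example local user SVI at Building B after the re-IP
    interface Vlan20
     ip address 10.2.20.1 255.255.255.0
    !
    router ospf 1
     router-id 10.2.0.2
     passive-interface default
     no passive-interface TenGigabitEthernet1/1/1
     network 10.0.12.0 0.0.0.3 area 0
     network 10.2.0.0 0.0.255.255 area 0

All of the addressing above is made up; the point is just a routed point-to-point uplink to Building A with OSPF speaking only on that transit link.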

Looking at the diagram, I'll try to preempt some questions you may have, below:

  • At Building A, there are two Fiber WAN Circuits coming in.

    • WAN1 - 1Gbit/1Gbit Fiber

    • WAN2 - 500/50 Cable

  • At Building C, there are plans to have the County ISP provide a third Circuit. This is the only building where the service is available. My plan is to backhaul this WAN Circuit over another direct 10Gbit Fiber to the MX at Building A.

  • Building A details regarding Switching Choice:

    • The 4x HCI Server Nodes only have 10Gbit Ethernet. The Top of Rack Switch connects back to the Collapsed Core via 2xCAT6A in LACP. I'm not worried about saturating this link. The current TOR Switch is in a 2Gbit LACP and I'm only seeing 60% peak interface traffic over the last 30 Days. This is why I've decided on the C9300L-24-XUG for the TOR Switch, and the Collapsed Core. I'll need 10Gbit Ethernet to uplink the TOR to the Collapsed Core.

    • I need 3x C9300L-24XUG-4X-M Switches at the Collapsed Core due to the above-mentioned 10GbE requirement, and also the 12x SFP+ ports (4 per switch). Below are the details:

      • SW1 will have an Uplink to Building B's Core (OSPF), a DAC going to MX1, and will have one leg of an LACP to Access SW1, and a DAC going to MX2.

      • SW2 will have an Uplink to another not-shown Access Switch in Building A, the second leg of the LACP to Access SW1, a DAC going to MX1, and the first leg of the LACP to Access SW2.

      • SW3 will have an Uplink to Building C's Core (OSPF), the second leg of the LACP to Access SW2, and the other DAC going to MX2. (A rough port-channel sketch of these cross-stack LACP bundles follows this list.)

      • While this will leave me with only one free SFP+ port, I'll have several 10GbE interfaces I could use to collect any other potential Access Switches that may arise (though this is a VERY low possibility).

  • Building B & Building C's Switch Stacks will handle all of their inter-VLAN Routing, and route everything else to the MX at Building A via OSPF.

  • I'll have dual PSUs in all of the C9300s, with dual Eaton 9PX UPS Appliances, split evenly of course. The same goes for each MX at Building A.
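For the cross-stack LACP bundles mentioned above, here's a rough sketch of one of them (hypothetical port numbers, IOS-XE style syntax for illustration - in the dashboard this is just a link aggregation group that spans stack members):

    ! Collapsed Core: bundle to Access SW1, with one leg on stack member 1 (SW1)
    ! and the other leg on stack member 2 (SW2)
    interface Port-channel11
     switchport mode trunk
    !
    interface range TenGigabitEthernet1/1/3, TenGigabitEthernet2/1/3
     switchport mode trunk
     channel-group 11 mode active

The same pattern repeats for the bundle to Access SW2, split across SW2 and SW3.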

I think that about covers it. If I leave anything obvious out, I'll drop an edit in the post.

 

What am I missing?

16 Replies
alemabrahao
Kind of a big deal

You know that configuring core switches in a stack gives a false sense of redundancy, right?
 
What I mean is, if one day you need to update the switch firmware, the whole stack will reboot after the update.
 
I personally prefer working with HSRP and vPC.
 
But back to the topic: yes, this topology should work.
 

Correct, and I would prefer HSRP, but at this time the Cat9k switches in Meraki OS do not support HSRP.
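For reference, if I ever did fail back to IOS-XE firmware and ran the core as two standalone switches instead of a stack, a minimal HSRP sketch for one SVI (hypothetical addressing) would look something like:

    ! Core switch 1 - intended HSRP active for VLAN 10
    interface Vlan10
     ip address 10.1.10.2 255.255.255.0
     standby 10 ip 10.1.10.1
     standby 10 priority 110
     standby 10 preempt
    !
    ! Core switch 2 - HSRP standby for VLAN 10
    interface Vlan10
     ip address 10.1.10.3 255.255.255.0
     standby 10 ip 10.1.10.1
     standby 10 preempt

Hosts keep 10.1.10.1 as their gateway either way, so one switch can reboot without taking the gateway down.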

dcatiller
Getting noticed

I run a pair of Cat93K-Y switches in our data center core and decided not to put them into the Meraki dashboard. 

I'm still not clear on what I'd lose. 

The design looks good to me.

I have a similar situation, as the MX250 pair we have was doing all routing for the site. 

I was going to use OSPF, but elected not to because there were limitations on the MX.

I don't know if these limitations are still there (it's been a year since I checked), but I found that the MX would advertise OSPF routes, but not learn them.

I would recommend combing through the documentation to verify the MX's capabilities for your plan.

This is the doc that stopped me from using OSPF: Using OSPF to Advertise Remote VPN Subnets - Cisco Meraki Documentation
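In practical terms, that one-way behaviour means the routed stacks can learn what the MX advertises, but the MX still needs static routes (configured in the dashboard) back to the buildings' subnets, and the stacks need a default toward the MX. A rough sketch for one building core (hypothetical addressing, IOS-XE style syntax for illustration):

    ! Advertise the building's local subnets to the other routed stacks via OSPF
    router ospf 1
     network 10.3.0.0 0.0.255.255 area 0
    !
    ! Default route toward Building A / the MX, since the MX won't learn these subnets via OSPF
    ip route 0.0.0.0 0.0.0.0 10.0.13.1
    ! 10.0.13.1 = hypothetical next hop on the transit link to the Collapsed Core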

 

Correct - still, to my knowledge, MXs won't learn the routes.

In my case, OSPF would have been used only to redistribute into BGP for my Cat93K pair and an ExpressRoute circuit (which requires BGP).

That's why I bought Cat93K-Y switches. I needed a network core that could participate in BGP.

A Meraki engineer who helped me verify my configuration said native BGP was coming to the MX eventually.

I'm sticking with Cat93K at core and edge for now, so I have access to the protocols, etc., that I need.

I love Meraki, but I'm running MS switches as access layer devices.
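For context, the piece I needed looks roughly like this on the Cat93K side (hypothetical local ASN and peer address, not my production config; 12076 is the ASN Microsoft uses for ExpressRoute peering):

    router bgp 65010
     bgp log-neighbor-changes
     neighbor 10.255.255.1 remote-as 12076
     !
     address-family ipv4
      redistribute ospf 1 match internal external 1 external 2
      neighbor 10.255.255.1 activate
     exit-address-family

That redistribution into BGP is the part I needed a BGP-capable core for.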

cmr
Kind of a big deal

Are you moving from MS to Catalyst due to a feature requirement? Why not use MS switches, or at least 9300-Ms if you need Catalyst hardware?

All the 9Ks will be adopted straight into the Meraki Dashboard.

 

I've heard that Meraki will be deprecating nearly all of the MS line apart from the MS1xx Series in the next 2-3 years, so I'd rather not go down the route of something like an MS355. Plus, this always gives me the option to fail back to IOS firmware if I need to.

cmr
Kind of a big deal

@Packet7hrower I do believe that is the direction that will be taken, and we are trialling a pair of C9300-48UXM-Ms at the moment.  We are working with TAC and technical pre-sales to see if we can get them to where we need them. They are fine as L2 switches as long as they are in a physically secure location, but we are working on getting the L3 and DHCP features to an acceptable state.

cmr
Kind of a big deal

The other thing to think about is that the 9300s have been around since at least 2019, so although Catalyst hardware is the direction of travel, will it be 9300s...?

cmr
Kind of a big deal

Just checked and it was early 2017.  The 3850 predecessor was sold for 9 years, so that would give us two more years of 9300.  This is of course not founded on any actual knowledge of an end date.  The MS355 was launched late 2018, so that hardware could last longer on sale... 

That has always been in the back of my mind. Who knows. But if I get them now, I'll still be guaranteed support for 5-7 years.

PhilipDAth
Kind of a big deal

What is your expectation for uptime?  16 hours per day?  8 hours per day?  24 hours per day?  For example, how easy is it to schedule downtime?

 

Is the system able to be shut down for 30 minutes once or twice per year?

 

Is there any IP storage involved that is critical to remain up (such as iSCSI that virtual machines run on)?

Expectation for uptime is 24x7 in Site A.

Sites B & C are 6AM - 6PM.

 

Site A can easily be shut down for a 5-30 minute maintenance window at any time, ideally with 24hr notice.

 

All IP Storage is separated out on dedicated Switches that aren't part of the Data Network.

PhilipDAth
Kind of a big deal

Also, are the data flows to the HCI nodes very big (between the VLANs)?  For example, do you expect there to be heavy 10GbE flows (such as IP backup traffic) between VLANs, or are they light things like file sharing?

The HCI Nodes have 4x 10GbE NICs each. NICs 1 and 3 are on separate switches for iSCSI traffic that is not connected to the LAN. NICs 2 and 4 are connected to the TOR Switch. We have 4 Nodes.

 

All large flows, backup traffic specifically, do route across another VLAN. Flow would go:

HCI > TOR Stack > Core Stack > TOR Stack > Backup Appliance

 

That actually is a good point - it may be worth setting the HCI Nodes' and the Backup Appliance's default gateway to the TOR Stack, and creating a local route on the TOR Stack to reduce hops.
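Roughly what I'm picturing, with hypothetical VLAN IDs and addressing (IOS-XE style syntax for illustration - in the dashboard it would just be two L3 interfaces on the TOR stack plus a default route):

    ! SVIs on the TOR stack so HCI-to-backup traffic routes locally
    interface Vlan50
     description HCI data (hypothetical)
     ip address 10.1.50.1 255.255.255.0
    !
    interface Vlan60
     description Backup (hypothetical)
     ip address 10.1.60.1 255.255.255.0
    !
    ! Everything else still heads to the Collapsed Core
    ip route 0.0.0.0 0.0.0.0 10.1.0.1

With the HCI nodes and the backup appliance pointed at 10.1.50.1 / 10.1.60.1, the flow becomes HCI > TOR Stack > Backup Appliance without hair-pinning through the Core Stack.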

K2_Josh
Building a reputation

Have you found documentation confirming OSPF support on the CS? Or are you planning on running them with IOS-XE?

If the former, which I would assume based on the '-M' models, I would clarify with Meraki Support whether the OSPF functionality is the same as what is described in the MS OSPF documentation.

The docs note limitations with warm spare (VRRP).

This may be a non-issue, but I couldn't find OSPF docs related to Catalyst switches in Meraki mode. I might also be misunderstanding the design.
