I've inherited a network and have been tasked with deploying new Core and Access Switches.
Below is my plan:
The current "core" switches are MS220s that will all need to be replaced soon due to EoL. All inter-VLAN Routing is currently handled on the single MX over a lovely sole 1Gbit uplink.
Building B currently connects directly back to Building A via a direct Fiber Run. This link is Layer 2 only.
Building C connects directly back to Building A via another direct Fiber Run. This site is a bit different: Building C's Core Switch Stack (MS250s) currently handles all inter-VLAN Routing, and all non-local traffic is sent across the Fiber back to Building A.
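For reference, the current Building C hand-off is roughly equivalent to the following (an IOS-style sketch with made-up transit VLAN and addressing; the MS250 stack is actually configured through the Meraki dashboard):

```
! Transit SVI toward Building A (VLAN ID and addressing are hypothetical)
interface Vlan900
 description Transit to Building A over direct fiber
 ip address 10.255.0.2 255.255.255.252
!
! All non-local traffic follows a static default back to the MX at Building A
ip route 0.0.0.0 0.0.0.0 10.255.0.1
```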
All WAN Circuits are currently at Building A.
They will be running a third direct Fiber path from Building C to Building B. The existing Fiber was cut last year, and they obviously want to mitigate that risk. This new path will run opposite the current path to Building A, and will also enter/exit each building from a different side and conduit.
My plan is to re-IP Building B onto its own Subnet so I can implement OSPF.
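A minimal sketch of what that adjacency could look like on the collapsed-core side (IOS-XE syntax; the interface, subnets, and router-id are placeholders, and the Building B side would be the dashboard equivalent on Meraki-managed gear):

```
! Routed point-to-point toward Building B's Core (all values hypothetical)
interface TenGigabitEthernet1/1/1
 description P2P to Building B Core
 no switchport
 ip address 10.255.1.1 255.255.255.252
 ip ospf network point-to-point
!
router ospf 1
 router-id 10.255.255.1
 passive-interface default
 no passive-interface TenGigabitEthernet1/1/1
 network 10.255.1.0 0.0.0.3 area 0
```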
Looking at the diagram, I'll try to preempt some questions you may have, below:
At Building A, there are two Fiber WAN Circuits coming in.
WAN1 - 1Gbit/1Gbit Fiber
WAN2 - 500Mbit/50Mbit Cable
At Building C, there are plans to have the County ISP provide a third Circuit; this is the only building where that service is available. My plan is to backhaul this WAN Circuit over another direct 10Gbit Fiber to the MX at Building A.
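If that backhaul ends up riding the switches rather than a straight fiber patch, one way to do it is a dedicated transport VLAN that stays isolated end to end and lands untagged on the MX WAN port. A sketch, assuming VLAN 999 and placeholder interface numbers:

```
! Building C side: take the county ISP handoff into an isolated transport VLAN
vlan 999
 name COUNTY-WAN-BACKHAUL
!
interface TenGigabitEthernet1/1/2
 description County ISP handoff
 switchport mode access
 switchport access vlan 999
!
interface TenGigabitEthernet1/1/3
 description Dedicated 10Gbit fiber to Building A
 switchport mode trunk
 switchport trunk allowed vlan 999
!
! Building A side mirrors this: VLAN 999 allowed on the fiber trunk, then an
! access port in VLAN 999 cabled to the MX WAN interface. No SVI exists in
! VLAN 999 anywhere, so the circuit stays pure transport.
```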
Building A details regarding Switching Choice:
The 4x HCI Server Nodes only have 10Gbit Ethernet. The Top of Rack Switch connects back to the Collapsed Core via 2x CAT6A in LACP. I'm not worried about saturating this link: the current TOR Switch is in a 2Gbit LACP, and I've only seen 60% peak interface utilization over the last 30 days. This is why I've decided on the C9300L-24UXG for both the TOR Switch and the Collapsed Core; I'll need 10Gbit Ethernet to uplink the TOR to the Collapsed Core.
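That TOR-to-core bundle would be a standard LACP port-channel, something like this (IOS-XE sketch; the port numbers are placeholders for two of the mGig copper ports):

```
! TOR side: 2x 10Gbit copper LACP bundle up to the Collapsed Core
interface Port-channel10
 description Uplink to Collapsed Core
 switchport mode trunk
!
interface range TenGigabitEthernet1/0/23 - 24
 description Po10 member to Collapsed Core
 switchport mode trunk
 channel-group 10 mode active
```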
I need 3x C9300L-24UXG-4X-M Switches at the Collapsed Core due to the above-mentioned 10GbE requirement, and also for the 12x SFP+ Ports. Below are the details (with a cross-stack LACP sketch after the list):
SW1 will have an Uplink to Building B's Core (OSPF), a DAC to MX1, a DAC to MX2, and one leg of the LACP to Access SW1.
SW2 will have an Uplink to another (not shown) Access Switch in Building A, the second leg of the LACP to Access SW1, a DAC to MX1, and the first leg of the LACP to Access SW2.
SW3 will have an Uplink to Building C's Core (OSPF), the second leg of the LACP to Access SW2, and the other DAC to MX2.
While this will leave me with only one free SFP+ Slot, I'll have several 10GbE Interfaces I could use to connect any other Access Switches that may arise (though this is a VERY low possibility).
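Since each access-switch LACP has one leg on two different stack members, those bundles are just cross-stack EtherChannels, e.g. for Access SW1 (member and port numbers are placeholders):

```
! Collapsed core: cross-stack LACP to Access SW1
interface Port-channel11
 description Access SW1 (legs on stack members 1 and 2)
 switchport mode trunk
!
interface TenGigabitEthernet1/1/2
 description Access SW1 leg via stack member 1 (SW1)
 switchport mode trunk
 channel-group 11 mode active
!
interface TenGigabitEthernet2/1/2
 description Access SW1 leg via stack member 2 (SW2)
 switchport mode trunk
 channel-group 11 mode active
```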
Building B's and Building C's Switch Stacks will handle all of their inter-VLAN Routing, and route everything else to the MX at Building A via OSPF.
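On the collapsed core, that just means originating the default into OSPF toward B and C, pointed at the MX. A sketch with a placeholder next hop:

```
! Static default toward the MX inside interface (address is hypothetical)
ip route 0.0.0.0 0.0.0.0 10.0.0.1
!
router ospf 1
 ! Advertise the default to Building B and C while it exists in the table
 default-information originate
```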
I'll have dual PSUs in all of the C9300s, with dual Eaton 9PX UPS Appliances, split evenly of course. The same goes for each MX at Building A.
I think that about covers it. If I leave anything obvious out, I'll drop an edit in the post.
What am I missing?