This is a bit of a rabbit trail from the OP, hopefully not too far off topic. @Ryan_Miles, I've been following the slide decks you provided above (thanks for sharing those, by the way) for a new MX HA cluster I'm building out with two ISP connections. I appreciate you simplifying a somewhat complex setup.

I'm using two 8-port MS120s as the WAN breakout switches going to two MX85s, and behind those are two 24-port MS120s. I bought all of this thinking I already knew how to get it going, not realizing how much detailed configuration was involved. It's been a good learning experience. I'm writing because I've got a loop of some kind that I haven't resolved yet. I figured out the WAN breakout config, and that and the firewalls seem to be working fine.

Where I'm getting stuck is on the switch side, setting up both MS120s behind the MXs. I'm using the recommended layout from the MX Warm Spare guide: a link from each MX to each switch (four total) and one link between the two switches. In your WAN and LAN failure scenarios slide deck you say all the inter-switch and switch-to-MX connections should be "trunks, allow all VLANs, and a native VLAN." Here are the things that are confusing me about this:

1. What should the spanning tree settings (by which I mean RSTP, STP guard, and UDLD) be for these trunk ports, and are the ports facing the MXs configured any differently from the inter-switch link? (I've put a rough sketch of my current understanding of the port settings at the end of this post.)

2. You also said that on the MX end of these uplink ports there are no settings to change, that it's all handled by the L2 of the switch. But I'm reading this post (https://community.meraki.com/t5/Switching/STP-guard-setup-best-practices/m-p/31165) about the spanning tree settings and turning off Drop Untagged Traffic on the MX to avoid loops, and it seems like I do need some of those settings on the MX side after all. I can't tell if I'm just not grasping the details here or if I'm conflating two unrelated things.

Thanks again, Ryan, for the slide decks, and I'd be happy to hear from anyone on how these links should be configured to avoid loops. Thanks all (in advance).
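In case it helps to see what I mean spelled out, here's roughly how I'd express the switch-side port settings from the slide deck using the Dashboard API and the meraki Python library. The serials and port IDs are just placeholders for my two MS120s, and the RSTP / STP guard / UDLD values are my best guesses, which is exactly the part I'm unsure about:

```python
# Rough sketch of how I currently understand the uplink / inter-switch port config.
# Serials and port IDs below are placeholders; the rstpEnabled / stpGuard / udld
# values are guesses -- that's what I'm asking about.
import meraki

dashboard = meraki.DashboardAPI(api_key="YOUR_API_KEY")

# On each MS120: two ports up to the MXs plus the inter-switch link
uplink_ports = {
    "QXXX-XXXX-XXX1": ["1", "2", "3"],  # switch 1: MX1, MX2, inter-switch
    "QXXX-XXXX-XXX2": ["1", "2", "3"],  # switch 2: MX1, MX2, inter-switch
}

for serial, ports in uplink_ports.items():
    for port_id in ports:
        dashboard.switch.updateDeviceSwitchPort(
            serial,
            port_id,
            type="trunk",          # trunk, per the slide deck
            allowedVlans="all",    # allow all VLANs
            vlan=1,                # native VLAN
            rstpEnabled=True,      # guess: leave RSTP on so a loop gets blocked
            stpGuard="disabled",   # guess: no root/BPDU/loop guard on these ports?
            udld="Alert only",     # guess: leave UDLD at the default mode
        )
```

If the MX-facing ports should get different STP guard or UDLD values than the inter-switch link, that's the detail I'm hoping someone can point out.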