I plan on stacking 5 x 48-port MS350s across 2 cabinets side by side.
There will be 3 VLANs on all switches.
Firstly, is stacking 5 switches like this recommended?
Secondly, does traffic within a single VLAN (no layer 3 configured on the switches) stay within the stack when crossing several switches, or does it go back to the layer 3 device?
Is it OK to use 3m stack cables between the cabinets to make 1 stack of 5 switches, or should I go with 2 stacks, one of 2 and one of 3, and run a pair of fibres from each stack back to the main comms room?
You can stack up to 8 MS350s, so that's not an issue. Having them split between cabinets also has its benefits, so no issue there either.
Take a read through the document below if you haven't already.
https://documentation.meraki.com/MS/Stacking/Switch_Stacks
I am not aware of any formal recommendation, and where they exist they are typically for specific scenarios. But I would also consider a different solution, depending on whether it matches your needs:
You probably want to run your VLAN routing on this stack. My own preference is to have a pair of switches as the "core", with the servers connected and the routing configured there, and all the other devices on an "access" stack connected with n x 10 Gig. Your ability to manage ports wherever they are isn't limited here, as you still have "virtual stacking" available across the complete Meraki Dashboard network.
Thanks, that's what we have:
2 MS425 aggregation switches with 10 Gb fibre to the stack. No routing in the stack.
I will connect one 10 Gb link from switch one in the stack to the first MS425, and another 10 Gb link from the last switch in the stack to the second MS425, to cover most failures.
The question is: if I transfer 4 TB from a port on stack switch 1 to a port on stack switch 2, does it go back to the MS425 to be routed, or does it stay between the switches in the stack?
If it's in the same VLAN, it stays on the stack. If it's from one VLAN to another, it must go through the MS425 core, as that's where it gets routed.
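A minimal sketch of that forwarding decision (hypothetical VLAN numbers, just to illustrate the rule above):

```python
def path_for(src_vlan: int, dst_vlan: int) -> str:
    # Same VLAN: layer 2 switched across the stack ring, never leaves the stack.
    # Different VLANs: layer 3 routed, so traffic goes up to the MS425 core and
    # back down, even if both hosts sit on adjacent stack members.
    if src_vlan == dst_vlan:
        return "switched within the stack"
    return "routed via the MS425 core"

print(path_for(10, 10))  # switched within the stack
print(path_for(10, 20))  # routed via the MS425 core
```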
If you want to be truly nitpicky, you should connect the uplinks to the first and third or fourth switch. That way traffic crosses fewer switches in the stack ring to reach an uplink.
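A rough illustration of the hop count (my own sketch of a 5-member ring, positions 1-5, counting stack-ring hops from each member to its nearest uplink-bearing switch):

```python
def hops_to_nearest_uplink(members: int, uplinks: set[int]) -> list[int]:
    # Stack ring distance between two members is the shorter way around the ring.
    def ring_dist(a: int, b: int) -> int:
        d = abs(a - b)
        return min(d, members - d)

    return [min(ring_dist(sw, up) for up in uplinks)
            for sw in range(1, members + 1)]

# Uplinks on the first and last switch: the middle switch is 2 hops out.
print(hops_to_nearest_uplink(5, {1, 5}))  # [0, 1, 2, 1, 0]
# Uplinks on the first and third: no switch is more than 1 hop from an uplink.
print(hops_to_nearest_uplink(5, {1, 3}))  # [0, 1, 0, 1, 1]
```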
Thanks, will consider that.
Probably not an issue though, as the stack has 160 Gb/s of bandwidth.
True, that’s why it’s truly nitpicky ;p
😂👍
I personally don't like to stack more than 4. No great reason. And you say you have MS425 aggregation switches.
So I would create a stack in one rack and a separate stack in the other rack, so that each rack is its own stack. It will help reduce the blast radius if something goes wrong.
If the distances to your aggregation switch are less than 3m you could use 3m TwinAx cables to link things together.
The MS425s are in another room, about 100m away, over a 10 Gb fibre link.
Would you still run a link from the stack in each cabinet back to each MS425?
Fibre capacity isn't a problem.
Yes, I would. It's a nice clean design.
The suggestion I have from the person that spec'd the system is:
5 switches stacked, with a 10 Gb fibre from each switch (MS350) to each of the aggregation switches (MS425).
Is that good or bad design, or overkill?
I'm not sold on it.
I would say it’s overkill.
I agree with @PhilipDAth about minimising the stack size, so a stack of three and a stack of two is the way I’d go if uplink fibres aren’t an issue.
Regarding the number of uplinks: unless you need 30 Gbps of bandwidth, or the slightly higher availability that three uplinks gives you, I wouldn’t do it. I’m pretty sure that giving each switch a link to the core doesn’t reduce the ‘hops’ to the core, because the uplink used in the LACP bundle is determined by a hash of the source and destination addresses, so traffic may hop to another switch to reach the required uplink in the bundle anyway. It just seems like it will cost you a few more SFP+ modules.
LACP also doesn't balance well with a number of uplinks other than 2, 4, or 8.
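A quick way to see why (assuming, as many switch ASICs do, a fixed power-of-two number of hash buckets, 8 here, mapped round-robin onto the member links; the exact scheme varies by platform):

```python
from collections import Counter

def bucket_spread(num_links: int, hash_buckets: int = 8) -> list[int]:
    # Each flow hashes into one of `hash_buckets` buckets; buckets are then
    # mapped onto member links. Count how many buckets each link serves.
    counts = Counter(bucket % num_links for bucket in range(hash_buckets))
    return [counts[link] for link in range(num_links)]

print(bucket_spread(2))  # [4, 4]        -> even split
print(bucket_spread(3))  # [3, 3, 2]     -> third link serves a third fewer buckets
print(bucket_spread(4))  # [2, 2, 2, 2]  -> even split
```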
So basically you need to figure out, based on the application's needs, what the oversubscription ratio should be for each stack.
In Cisco design documentation the usual approach was an oversubscription of 20:1 from access-layer switching to distribution (or, in your case, the collapsed core). That means for every 20 x 1 Gb access ports you need at least 1 Gb of uplink.
So if you don't expect much traffic on your network, you can round down: for a 48-port switch, account for 40 ports, and you would need 2 x 1 Gb of uplink. Stacking changes that, however, so in a stack of two switches you should go for at least 4 x 1 Gb, and beyond that start accounting for 10 Gb uplinks.
So if you can afford two 10 Gb uplinks per stack, you should be able to stack up to the full 8 switches per stack. I'm not sure how well the CPU on the lower-end switches handles a large stack; I remember in the Catalyst world that a stack of more than 5 2960-X switches meant a slower-responding CLI.
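The 20:1 arithmetic above can be sketched as follows (my own helper, not from any Cisco or Meraki tool):

```python
def required_uplink_gbps(active_ports: int, port_speed_gbps: float = 1,
                         ratio: float = 20) -> float:
    # 20:1 oversubscription: every 20 Gb/s of access capacity needs 1 Gb/s of uplink.
    return active_ports * port_speed_gbps / ratio

print(required_uplink_gbps(40))      # 2.0  -> single 48-port switch (rounded down to 40 active ports): 2x 1 Gb
print(required_uplink_gbps(2 * 40))  # 4.0  -> stack of two: 4x 1 Gb
print(required_uplink_gbps(5 * 40))  # 10.0 -> stack of five: one 10 Gb uplink (two for redundancy)
```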
And as Philip and Bruce alluded to: for access switch stacks, keep each stack inside one closet.