Can someone share some insights into how you handle firmware updates on switches in your datacenters?
With the release of the new staged updates feature, we are no longer able to upgrade switch stacks switch by switch (I know that was never a supported method either).
Because stacks now need to be upgraded all at once, this means bringing down complete server rooms. Maybe somebody has a good idea to overcome this.
hi @JohanPlukon - one way i've seen this achieved is to have various switches in the Data Centre in different Networks in the dashboard. This way you can schedule the upgrades in different outage windows.
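If you do split the rooms into separate dashboard networks, the outage windows can also be set programmatically. Below is a minimal sketch against the Meraki Dashboard API v1 firmware upgrade settings (`PUT /networks/{networkId}/firmwareUpgrades`); the payload shape, the day/hour values, and the network IDs are assumptions to verify against the current API docs.

```python
import json
import urllib.request

BASE = "https://api.meraki.com/api/v1"

def upgrade_window_payload(day_of_week: str, hour_of_day: str) -> dict:
    """Build the maintenance-window body for
    PUT /networks/{networkId}/firmwareUpgrades.
    (Shape assumed from the Dashboard API v1 docs -- verify.)"""
    return {"upgradeWindow": {"dayOfWeek": day_of_week, "hourOfDay": hour_of_day}}

def set_window(api_key: str, network_id: str, day: str, hour: str):
    """Apply a per-network upgrade window, so e.g. server room A
    and server room B (as separate networks) upgrade in
    different outage windows."""
    req = urllib.request.Request(
        f"{BASE}/networks/{network_id}/firmwareUpgrades",
        data=json.dumps(upgrade_window_payload(day, hour)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",  # Meraki API key
            "Content-Type": "application/json",
        },
        method="PUT",
    )
    return urllib.request.urlopen(req)

# Hypothetical network IDs for the two rooms:
# set_window(key, "N_roomA", "sunday", "2:00")
# set_window(key, "N_roomB", "sunday", "4:00")
```

One call per dashboard network is all it takes, which is why the separate-networks approach maps cleanly onto separate outage windows.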
Nothing that this doesn't apply to individual switches within a stack. They must all be in the same network and will all upgrade simultaneously.
hi @Brash - assume you mean "Noting" above ^. Correct. Any switches in a Stack will need to be in the same Network. But if they've got multiple Stacks then the above should work.
Yes, we have multiple stacks in the different datacenter rooms, so putting them in separate networks would be a possibility. That was my first option too, but I hoped there was a different one 🤣
@JohanPlukon you could also get rid of the stacks, or split them into smaller ones, and then update the switches/smaller stacks individually.
If the switches in use have no dedicated stacking ports (like the MS425), you could change the QSFP ports to classic Ethernet mode and still have the 40-Gig connection between the switches.
But then (and also if you split them into 2 different networks) you obviously lose some cross-stack LACP benefits and would need to rely on STP...
This very much depends on how you connect the end devices to the switches 😉
@JacekJ thanks for your response. I'm not sure whether losing that functionality would be acceptable, but it's certainly something to take into consideration.
@JohanPlukon Been scratching my head about this one recently as well 😉
I have a setup with 2 server rooms which are meant to be redundant (so 2 internet lines, 2 servers connected in HA and so on) with 4xMS425 stacked together (2 in each server room).
I was thinking about splitting them so I would have one stack per server room - avoiding the issues that come with Meraki stacks of 3 or more members, and also staging the core stack upgrade (which handles our L3 and internet), therefore avoiding downtime.
But as you said, there are a lot of things to consider.
😀no worries, and trust me i'm not the Spelling or Grammar police.
If it is a site that cannot handle the whole network being taken down at once, I create "A" and "B" fabrics.
Each fabric is a separate network. You only upgrade one "fabric" at a time.
Obviously, a switch stack can only belong to a single fabric. This may require you to buy an additional switch to be able to maintain the 24x7 operation.
You don't need to create separate networks, as different stacks in the same network can be upgraded at different times. We have sites with three or four stacks, and when we do a staged upgrade we create a group for each stack so the upgrades apply to one stack, then another, etc. The default is to upgrade one group (a stack, in our case) each day, but it can be amended to be quicker or slower.
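The one-group-per-stack approach described above can also be scripted. This is a rough sketch assuming the staged-upgrade endpoints of the Meraki Dashboard API v1 (`POST /networks/{networkId}/firmwareUpgrades/staged/groups`); the payload shape, group names, and stack IDs are illustrative assumptions to check against the current docs.

```python
import json
import urllib.request

BASE = "https://api.meraki.com/api/v1"

def staged_group_payload(name: str, stack_ids: list[str]) -> dict:
    """One staged-upgrade group per switch stack, so each stack
    upgrades in its own stage. (Payload shape assumed from the
    Dashboard API v1 docs -- verify before use.)"""
    return {
        "name": name,
        "isDefault": False,
        "assignedDevices": {
            "devices": [],
            "switchStacks": [{"id": sid} for sid in stack_ids],
        },
    }

def create_groups(api_key: str, network_id: str, stacks: dict[str, list[str]]):
    """stacks maps a group name to the stack IDs it contains, e.g.
    {"Stage 1 - Room A": ["1234"], "Stage 2 - Room B": ["5678"]}
    (hypothetical IDs). Creates one staged-upgrade group per entry."""
    for name, ids in stacks.items():
        req = urllib.request.Request(
            f"{BASE}/networks/{network_id}/firmwareUpgrades/staged/groups",
            data=json.dumps(staged_group_payload(name, ids)).encode(),
            headers={
                "Authorization": f"Bearer {api_key}",
                "Content-Type": "application/json",
            },
            method="POST",
        )
        urllib.request.urlopen(req)
```

With groups laid out this way, the dashboard's staged upgrade runs one stack's stage at a time instead of rebooting every stack in the network at once.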