We were considering the MS-390 in a design for a customer, but after looking at the release notes (even for the beta software) we're not really convinced that this switch is ready for action yet.
This is the list from the release notes, and I'd say it's quite long:
Is there an ETA for when these items will be implemented/fixed?
It's also a bit confusing; we designed a redundant core solution with a warm spare. The datasheet says the MS-390 supports warm spare, but the release notes say VRRP is not supported yet. Does that mean VRRP with other brands (warm spare is based on VRRP with Meraki switches), or VRRP in general?
With kind regards,
Hi, I would say if you don’t feel comfortable with the number of known bugs/issues then hold off or propose another device.
I've read the MS-390 datasheets and they specify VRRP is supported, whilst the warm spare release notes don't mention the MS-390s. I would go with what's in the MS-390 datasheets; the warm spare release notes clearly haven't been updated.
It depends. If the customer is targeting Q3/4 for implementation and everything is fixed by then there's no problem.
Given the current state of the world and the active disruptions we're all experiencing, I wouldn't place bets on firmware being fixed by Q3/Q4. I am not criticizing Meraki's speed here; I am recognizing that we are not in a usual time or place.
I don't think I would deploy it yet.
But - I'm struggling even for a use case (apart from someone wanting SGT support - which it currently doesn't support ...). Why do you prefer this switch over (for example) an MS355?
The customer needs around 20 SFP/SFP+ ports and 20-40 copper ports in each MER. A stack of 3 MS-390s would be enough; otherwise we'd have to make a design with several stacks of MS3XX and MS4XX switches.
With that port density in mind, the MS390 with the 8-port 10GbE module is a nice fit.
I think I would go with a pair of MS425-16s (to get 32 SFP+ ports) and a pair of MS225-24s (to give you 48 copper ports), which gives you two of everything so you have complete redundancy. You can use 10GbE TwinAx to link the MS225s to the 10GbE MS425 core. Cheap interconnects.
Plan B, involving just 1 extra switch, would be to go with 5 x MS355-24X. That will give you 20 x SFP+ ports and 120 copper ports. Some customers might prefer to have the single stack. Easy to manage. One "core" switch.
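The port arithmetic in the two options above can be sanity-checked with a quick sketch. Per-model port counts are taken from this thread, not verified against datasheets, and the per-MER requirement uses the worst case quoted earlier:

```python
# Sanity-check the port math for the two designs discussed above.
# Port counts per model come from this thread, not from verified datasheets.
requirement = {"sfp_plus": 20, "copper": 40}  # worst case per MER

designs = {
    "2x MS425-16 + 2x MS225-24": {"sfp_plus": 2 * 16, "copper": 2 * 24},
    "5x MS355-24X (single stack)": {"sfp_plus": 5 * 4, "copper": 5 * 24},
}

for name, ports in designs.items():
    ok = all(ports[k] >= requirement[k] for k in requirement)
    print(f"{name}: {ports} -> {'fits' if ok else 'short'}")
```

Both options clear the stated requirement; the single-stack option trades SFP+ headroom for more copper ports and simpler management.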
But if it was my money, I'd want the MS425 10GbE core. It just feels "right".
I've used this exact design (MS425 core and MS225 switch blocks) before with customers and it has worked great. I have had an MS225 server switch block (for those server things still with Gigabit ports) and then MS225 access layer blocks for general connectivity.
Both of these switches have been out for a while and have mature firmware.
Good to hear about the stability. For access switches we went for the MS250 because it supports redundant power supplies.
We're going back to the drawing board and I'll consider what you're using, thank you.
Hi all, I am just installing 10 MS390 switches and I am quite frustrated with them.
First, they take forever to boot up! 10 minutes minimum!!!
Second, I am using an MS425 Meraki switch as the core. First big problem: Meraki switches have all ports as trunk with native VLAN 1 and allowed VLANs: all, but the MS390 cannot handle allowed VLANs: all; it only supports allowed VLANs: 1-1000, so you will get a VLAN error on every port between a true Meraki switch and an MS390.
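One way around that mismatch is to rewrite the core's trunk allowed-VLAN list so it fits inside the MS390's 1-1000 window. The helper below is my own illustration, not a Meraki-provided tool:

```python
# Clamp an allowed-VLAN spec ("all", "1,10,2000-3000", ...) to the 1-1000
# range the MS390 supports. Illustration only; verify against your VLAN plan.

def clamp_allowed_vlans(spec: str, lo: int = 1, hi: int = 1000) -> str:
    """Rewrite an allowed-VLAN spec so every VLAN falls within lo-hi."""
    if spec.strip().lower() == "all":
        return f"{lo}-{hi}"
    kept = []
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            a, b = (int(x) for x in part.split("-"))
        else:
            a = b = int(part)
        a, b = max(a, lo), min(b, hi)  # clip the range to the supported window
        if a <= b:
            kept.append(f"{a}-{b}" if a != b else f"{a}")
    return ",".join(kept)

print(clamp_allowed_vlans("all"))            # -> "1-1000"
print(clamp_allowed_vlans("1,10,900-1200"))  # -> "1,10,900-1000"
```

With the Python `meraki` library the result could then be pushed to the core's uplink ports via `dashboard.switch.updateDeviceSwitchPort(serial, port_id, type="trunk", allowedVlans=...)`, where the serial and port IDs are site-specific values you'd supply.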
I started this morning at 08:00, connecting the MS390s together two at a time as stacks; this evening at 20:00 I finally got them all green in the Dashboard 🙂
@MichelRueger the reason they take so long to boot is that I believe they run the standard IOS-XE OS natively and then load the Meraki management in a container on top.
We use 3850 stacks as our core at 24/7 sites and a stack usually takes 15-20 minutes to reboot when in install mode, otherwise they take up to an hour!
The 390 is effectively a 9300 which is, itself, effectively a 3850+...
Cat 9300/9200 and Cat3800/3600 boot very slowly, and when stacking is enabled it's even slower. 2960X are slow beasts too when it comes to booting. It seems boot time is no priority at Cisco... (when things are down you get the urge to beat the switch into booting faster 😁).
I am doing a trial of the MS390. They do take a long time to boot compared to the 355s. Code upgrade also took about an hour! I ran into the 1000 VLAN limit too. That seems like a bizarre limitation and surprising that it's not fixed yet. I am stacking 3 together and that behavior is also different. They all seem to share the same management IP instead of 3 separate management IPs. Although it lets you configure 3, only 1 will show on the Dashboard for each switch. None of this is documented anywhere that I can find. The MS390 initial startup guide makes it sound just like an MS355. I also am trying to figure out why they keep losing their connection to dashboard every 20 minutes or so.
So basically someone left the VTPv1/2 mode at client limiting you to normal range VLANs? 😛
Looks like a bug to me.
Catalyst switches do take some time to boot:
- Test the ASICs
- Decompress and load the software
- Check that the software is authentic
- Test the ASICs again
- Then the prompt arrives.
I hope they never run into the issue that sometimes happens where the software fails to load on a stack member after an update.
@DanZ as mentioned before, they are 9300s with the Meraki software running on top. This is a good result; we've had a reboot take up to an hour if you don't have them in the right mode: it needs to be install mode on the underlying IOS-XE.
However the power and data stacking is exceptional 😎
Now we have gone live with 12 MS390s, but it is just a nightmare. The switches are not shown correctly in the network overview, and the power supply info from the switches simply isn't there.
These switches have been sold for a long time now, and still these problems?!
Just got news on this.
If you upgrade to the MS 12.22 beta software, one thing is fixed: the stack module is now showing correctly.
After the MS 12.22 update, power information is still not there, but development is working hard to get it fixed 🙂
I recently sold these to a client and the implementation was a nightmare.
I so regret choosing this model.
Come on Meraki, this is not your standard.
I don't like that the entire stack reboots if one member does it. I have a vSphere cluster aggregated to it.
That defeats the purpose of a stack. You would expect redundancy (to a certain level) when you connect servers with dual connections to a stack. Splitting the stack will also disable any MLAG functionality.
Meraki should not have released this switch before a) the firmware was stable and b) it had at least the feature level of an MS355/MS425.
This thread is already long, and the beta firmware (release notes) still does not show much progress since I opened it.
We're talking about a hardware platform that has already been in production for about 2-3 years, and it has a quite similar predecessor (Cat3600/3800). You'd think they have had more than enough time to play around with prototypes. I don't know what is keeping them from 'getting there'.
In the meantime we've made several designs based on the 'normal' MS400/MS300 switches where a properly functioning MS390 certainly would have made a difference.
I really love the idea of having a Cat9K switch with Meraki software don't get me wrong.
I hear you, but why should I have to compromise?
This is no good; I'm seeking a different replacement model from Meraki, as recommended by support after we spent 4 hours on a call trying to troubleshoot them.
In the end the switches never completed their boot sequence, as they were stuck in the initializing phase.
Needless to say, the project had to be aborted at 3:30am and we reverted to the old switches.
>That defeats the purpose of a stack. You would expect redundancy (to a certain level) when you connect servers with dual connections to a stack. Splitting the stack will also disable any MLAG functionality.
No Meraki stack from any model MS family allows an individual switch to be rebooted. The same restriction applies with most of the Cisco Enterprise stacks as well. Nexus is one exception, but it's not a stack like the others.
I typically copy the Fibre Channel model and create two "fabrics". This is especially important if you are using IP storage. Fabrics are independent of each other. You can do this by building two fully separate stacks and dual-homing each server, with one link into each fabric.
It's just a matter of being aware of the design rules around the platforms you want to use.
>No Meraki stack from any model MS family allows an individual switch to be rebooted. The same restriction applies with most of the Cisco Enterprise stacks as well. Nexus is one exception, but it's not a stack like the others.
Catalyst 3K and 9K have an extra reload option to reload a slot (aka a stack member). And you can always use the power switch to shut down a member (not elegant, I know).
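For reference, the per-member reload on Catalyst IOS-XE looks roughly like this (the member number 2 is just an example):

```
Switch# show switch            ! identify the stack member numbers
Switch# reload slot 2          ! reload only stack member 2
```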
IMHO the purpose of a stack is redundancy in case of a failure. Having MLAG support is a benefit.
vPC on NX-OS has one big benefit: if one switch fails at the software level, it won't pull the other switch down the drain. VSS and StackWise Virtual still have that problem, just like normal stacks. One drawback is that you have 2 separate switches with separate configs that you need to keep synced manually.
Putting the redundancy at the host level is also an option. It’s a matter of preference I guess.
Meraki user here for close to 3 years now.
Just yesterday we started the staging process for a stack of new MS-390-48UX switches. I thought I had seen most of the Meraki quirks given all the deployments I've done but nope.
First issue was the long (10 min) reboot process. Being remote, I had to ask the poor local office manager to interpret the colors on the cycling front panel LED. More than once I thought the switch had died and did a power cycle, which only added to the wait.
Second, I like my switches to have a static IP for management. I boot the new switch with DHCP and then, once the switch has stabilized, I change it to a static IP. After my change, it took over 20 minutes for the change to show in the GUI. I'm used to the lead/lag effect, but this is insane. While waiting for the GUI to update, my mind was going over all the things that could be wrong.
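As an aside, the DHCP-to-static change can also be made through the Meraki Dashboard API rather than the GUI, which makes it scriptable across a batch of switches. The sketch below only shows the payload shape; the addresses, VLAN, and serial are made-up illustrations:

```python
# Hedged sketch: payload for setting a switch management interface to a
# static IP via the Meraki Dashboard API. All values below are examples.
wan1 = {
    "usingStaticIp": True,
    "staticIp": "10.10.10.21",
    "staticSubnetMask": "255.255.255.0",
    "staticGatewayIp": "10.10.10.1",
    "staticDns": ["10.10.10.5"],
    "vlan": 10,  # management VLAN
}

# With the python 'meraki' library this would be applied with (not run here):
#   import meraki
#   dashboard = meraki.DashboardAPI("api-key")
#   dashboard.devices.updateDeviceManagementInterface("QXXX-XXXX-XXXX", wan1=wan1)
print(wan1["staticIp"])
```

Note from later in this thread: once MS390s are stacked, they appear to share a single management IP anyway, so per-member statics may not survive stacking.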
The restriction of VLANs 1-1000 on a trunk was another quirk I solved thanks to this thread. I did NOT like seeing that warning message on my core Meraki MS-416, but I didn't realize the trunk port default for VLANs is "all" on the core.
So now I have to go on to the next stage, which is to create a stack of these things. Let's see how that goes...
If I understand correctly from reading this thread, the stack cables for the MS390 are different than the cables for the MS-425/MS-250 series?
@PeteMoy the stacking cables for the 390s are the same as those for the Cisco IOS-XE 9300s as they are the same hardware, just the MS390 has Meraki software running in a container on it. When you have a stack the reboot process is longer as all members have to come up, but I thought in the latest code they had improved the reboot on upgrade time, so you may be okay.
OK, a quick update as I try to remotely configure the stack of MS-390's. Just as a point of reference, I've remotely configured at least two dozen stacks using MS-250 switches over the last two years so I get the quirks.
- All 4 switches had unique static IPs on the MGMT VLAN before I started. Everything was stable and updated to version MS 11.31. I had my local contact hook up the stack cables. The stack was recognized as a "potential" stack and was provisioned.
- A few minutes after doing this, I lost all contact with the stack. I assume a reboot process was happening. About 30 minutes later, the stack appeared in my GUI. I noticed that all 4 switches had the same DHCP IP from another VLAN on my network. Again, I had unique statics assigned to each switch before this process started.
- At this point, I disabled all the individual uplinks to the switches from the core switch. These uplinks were only needed for the staging process; I only need one fiber uplink (or 2 for redundancy) from the stack itself once it's been provisioned. Switch-to-switch comms will happen over the stack cables. Once I did this, the entire stack disappeared from the GUI again. My local tech had already left the office at that point, so there was nothing I could do.
- About 2.5 hours later, the stack magically reappeared in the GUI. This time they all had the same local IP as one of the original switches. But at least this was a static IP on the correct VLAN.
I did a reboot test of a switch that did not have a direct uplink to the core and confirmed that the switch with the direct uplink also rebooted, by watching the link on the core's port go down. So a reboot of one switch in the stack will reset the entire stack.
Individual switches take about 10 minutes to reboot. The 4-switch stack took about 20 minutes to come back to the GUI.
I can just imagine how much stress it would be if this was a live network and a reset was needed on an individual switch. How about reboots during IOS updates?
Anyway, it's a bit disappointing to see how the MS-390's so far seem to be a step backwards in terms of configuration and installation.
Not sure why my last reply was wiped, but here goes again.
Some more things learned while configuring a new stack of four MS-390s:
- I had static IPs defined for each switch in the stack during staging. All the switches in the new stack take on the IP of one of the switches once the stacking is complete. Can you recycle the IPs that were originally assigned to each switch? Who knows, but I wouldn't.
- My stack was offline for over 2 hours after enabling stacking before it showed up again in the GUI.
- Rebooting one switch in the stack reboots the entire stack. Single switch reboot: 10 minutes. Four-switch stack reboot: 20 minutes. Keep this in mind for downtime windows during code updates.
Having one IP address is standard with Catalyst StackWise because the switches share the same control plane. I guess they did not change that.
The boot times are really bad. It's a bit embarrassing that they pat themselves on the back for having one hell of an ASIC inside while the switch itself takes ages to boot (resulting in unnecessarily long downtime). The Catalyst version is also slow as hell (but that was already mentioned in this thread).
Hello MS390 guys,
I have the next very strange thing about this switch. As you can see in the pictures, if you stack two Meraki switches (true Meraki switches 🙂) together, they each keep their own IP (see the MS450 core stack). But if you stack MS390s, they lose their preconfigured static IP addresses and all take on the IP of one stack member. I haven't figured out which IP the stack selects, but all switches in the stack end up with the same IP. So there is no consistency in how Meraki manages stack IP addresses. Or better said: if you don't use the MS390 there is consistency, and if you use the MS390 you lose it 😞
There is also something absolutely strange: if you use a stack of MS390s, the network topology changes every 10 minutes.
So stay tuned for updates.
@MichelRueger thanks for the updates, it is good to find out how they behave in the real world! As stated above, the MS390 is effectively a Cisco 9300 with the Meraki software running in a container on top. This is why you get the single IP, 10 minute+ reboot etc. and also, to me, explains why there are so many issues with it, as the Meraki software has to go through the underlying IOS to get to the hardware below.
As for the topology issue, it doesn't surprise me as our Cisco 3850 (9300 predecessor) stacks show up in multiple places on the topology diagrams for each of our sites, with the topology logic seemingly unable to understand that they are one device... I'm guessing this issue is causing what you see.
Keep up the good work 😀
So I am about to deploy 3 x MS390 switches (and MR46 APs) and am wondering, based on the feedback here, whether to leave stacking until it is more reliable but who knows when that will be.
I can uplink each switch individually to the core to avoid stacking.
I assume I can still use StackPower without data stacking?
So from those who have stacked MS390 switches, am I better off not using stacking at this time based on your experience?
@Darren8 StackPower is independent of data stacking on 3850s and 9300s, so I'd be pretty confident that it is independent on the MS390s too.
I plan on moving forward with the stacking due to the lack of fiber uplinks between my stack and the core. Plus our model uses an aggregated uplink connection (combining two trunks), and we can only do that across physical switches using a stack.
The stack behavior is annoying but as long as you understand what to expect, you can be ready for it.
I suggest you leave plenty of lead time to build and test the stack before putting it into production.
@PeteMoy, so what software version would you recommend based on your experience to date?
I'm thinking I should go to MS12.22 before stacking and cutover.
So I am using the latest beta software and it is stable. The good thing with this beta is that you can see the module interface status.
Currently running 11.31, but that's because I have a core and other switches that are stable on this version. I only stay with the recommended Meraki versions and try not to jump ahead unless there is a specific bug.