Hi all,
I'm from a residential technology integrator and we use Crestron DM-NVX for IP video distribution. I'd like to start using MS switches, but Crestron currently states:
"Compatibility issues with Meraki and DM NVX have been reported. Crestron is working with Cisco to identify and address the issues. We are sorry for any inconveniences this may have caused."
Has anyone overcome these issues or know of progress on firmware fixes for this?
Thank you
Are you able to say what those issues are?
Hi Philip, thanks for the quick response,
I haven't seen any issues first hand, but I want to get ahead of the curve so I can have confidence in specifying MS switches for upcoming projects.
From what I can see on the Crestron forums, people are having slow switch times and picture drop-outs, which they seem to think is down to the switches' IGMP support.
I'd love to have a bit more info to give you, but that's all I've found in my research so far.
Thank you
Is this being routed across VLANs, or IGMP within a VLAN?
In our case 99% of the time the IP video distribution will be on a single VLAN but potentially several switches in a physical stack.
There will always be an MX in place, so I can put in any inter-VLAN firewall rules we need.
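For reference, if traffic does end up crossing VLANs, here's a rough sketch of pushing an inter-VLAN rule to the MX with the Meraki Dashboard API Python SDK. The API key, network ID, and VLAN subnets below are placeholders, not anything from a real deployment:

```python
import meraki

API_KEY = "YOUR_DASHBOARD_API_KEY"   # placeholder
NETWORK_ID = "N_1234567890"          # placeholder network ID

dashboard = meraki.DashboardAPI(API_KEY, suppress_logging=True)

# Note: this call replaces the whole L3 rule list, so include any existing
# rules you want to keep (the default allow rule is appended automatically).
dashboard.appliance.updateNetworkApplianceFirewallL3FirewallRules(
    NETWORK_ID,
    rules=[
        {
            "comment": "Keep AV/NVX VLAN isolated from corporate VLAN",
            "policy": "deny",
            "protocol": "any",
            "srcCidr": "10.10.40.0/24",   # example AV/NVX VLAN subnet
            "srcPort": "Any",
            "destCidr": "10.10.10.0/24",  # example corporate VLAN subnet
            "destPort": "Any",
            "syslogEnabled": False,
        }
    ],
)
```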
I'm in a similar situation where the client requested Meraki to be managed by their IT department, and we have NVX specified. Trying to verify compatibility is a challenge. Crestron informed me that there's a compatibility issue with Meraki regarding multicast IGMP Fast Leave. Meraki said Fast Leave is on by default. Crestron recommended going through the Cisco configuration guide and comparing it with Meraki. I did that and all the settings seem to match up. I don't want to order a bunch of Meraki switches if they won't work, yet I can't seem to get an answer on this.
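If it helps anyone doing the same comparison, here's a sketch of pulling the network-wide multicast settings from the Dashboard API so you can check IGMP snooping and flood-unknown-multicast against the Cisco guide. The key and network ID are placeholders, and the exact response field names may vary by API version, so treat this as a starting point:

```python
import meraki

API_KEY = "YOUR_DASHBOARD_API_KEY"   # placeholder
NETWORK_ID = "N_1234567890"          # placeholder

dashboard = meraki.DashboardAPI(API_KEY, suppress_logging=True)

# Network-wide defaults plus any per-switch/stack overrides.
mcast = dashboard.switch.getNetworkSwitchRoutingMulticast(NETWORK_ID)

defaults = mcast.get("defaultSettings", {})
print("IGMP snooping enabled:   ", defaults.get("igmpSnoopingEnabled"))
print("Flood unknown multicast: ", defaults.get("floodUnknownMulticastTrafficEnabled"))

for override in mcast.get("overrides", []):
    print("Override:", override)
```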
We are in a similar situation. MS switches providing a Layer2 transport for NVX Transmitters and Receivers. Anything more than 1 Transmitter and we lose all video and cannot ping controllers. Any update please from Cisco?
I've always been a huge supporter of Meraki, but I haven't been able to get a DM NVX system working reliably. I have IGMP snooping turned on and flood control on; however, once I patch in a decoder and the multicast traffic starts hitting my network, it takes it completely down (not just in its VLAN, it ramps the CPU of my MX up to 100% and packet loss hits 100%). I did get one encoder and decoder working perfectly for about a week, then out of the blue it started ramping up the CPU again, and I haven't gotten it to work since (no network config or firmware, and no DM NVX config or firmware, had been changed).
I took to Reddit to see if there was any solution for Meraki and DM NVX and was basically just told that it's a known issue and you can't use DM NVX on a Meraki network... which, since I'm running a full Meraki stack, isn't really an option for me.
Reddit Link: https://www.reddit.com/r/crestron/comments/jcl2p8/dm_nvx_with_meraki_switches/
Disable the flood unknown multicast traffic setting.
It already is and the issue still occurs.
Then make sure all uplinks are set to trunk and all switch ports are set to access. Exclude the video VLAN from any switch trunk ports which don’t need it.
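In case it saves someone some clicking, a quick sketch of enforcing that layout via the Dashboard API Python SDK: access ports for the NVX endpoints and trunks with the video VLAN pruned where it isn't needed. The serial, port IDs, and VLAN numbers are made-up examples:

```python
import meraki

API_KEY = "YOUR_DASHBOARD_API_KEY"   # placeholder
SWITCH_SERIAL = "Q2XX-XXXX-XXXX"     # placeholder switch serial
VIDEO_VLAN = 40                      # example NVX/video VLAN

dashboard = meraki.DashboardAPI(API_KEY, suppress_logging=True)

# NVX encoder/decoder ports: plain access ports in the video VLAN.
for port_id in ["1", "2", "3"]:
    dashboard.switch.updateDeviceSwitchPort(
        SWITCH_SERIAL, port_id,
        type="access",
        vlan=VIDEO_VLAN,
    )

# Uplink: trunk, carrying only the VLANs that actually need to cross it.
# Leave VLAN 40 out of allowedVlans on trunks that don't serve NVX endpoints.
dashboard.switch.updateDeviceSwitchPort(
    SWITCH_SERIAL, "49",
    type="trunk",
    vlan=1,                    # native VLAN on the trunk
    allowedVlans="1,10,40",
)
```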
This post looks like you might have overcome the challenges??
We've got a system going right now. We'll see if it changes. I know results have been mixed for everyone.
Just adding a few comments to this thread that may help. Feel free to make comments if you feel that I’ve got something wrong.
I can’t guarantee that this will get the Crestron NVX solution to work every time, but it’s going to give you the best chance of getting it to work.
Thanks Bruce, appreciate the comments.
Nice Bruce, you seem to be the only one so far who mentions the important fact that the VLAN needs an IGMP querier for IGMP snooping to work properly (quick way to check for one below).
This is another case for keeping user LANs behind a core switch instead of terminating them directly on the MX.
I'm curious whether the Meraki switches behave correctly in this regard if you let your management VLAN pass at layer 2 through the core switch onto the MX, assuming multicast packets are not flooded across the L3 interfaces into other VLANs.
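If you want to confirm a querier is actually active on the video VLAN, a quick-and-dirty sketch (assuming Python with scapy on a test machine plugged into that VLAN, run with root/admin rights) is to sniff for the periodic IGMP general queries, which the elected querier sends to 224.0.0.1 every couple of minutes:

```python
# Sniff for IGMP membership queries to confirm a querier exists on this VLAN.
# If nothing shows up within ~5 minutes, there is no querier and multicast
# forwarding with IGMP snooping will eventually break down.
from scapy.all import sniff

def show_igmp(pkt):
    # General queries are addressed to 224.0.0.1; the source IP is the querier.
    print(pkt.summary())

sniff(filter="igmp", prn=show_igmp, timeout=300)
```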
Are all of the Meraki switches non-blocking, or only certain ones?
According to the datasheets, all switches have a backplane switching capacity equal to the sum of all maximum port speeds x 2, so they should be non-blocking.
The only scenario I can think of is when you have several stack members and you oversubscribe the inter-switch links.
The stack is limited to the link's speed. If you have an MS225 your stack is 10GB. That's 10x 1GB ports. That switch has 48x 1GB ports, and 4x 10GB ports for a total of 176 GB throughput, and sure enough that's the switching capacity. Your stack won't exceed the switching capacity because your stack is already accounted for as one of those 10GB ports.
You have to take this into account when designing an NVX system. A design with this switch stacked will limit you to 10 transmitters regardless of the number of ports you have. Unless you are switching two different systems in two different switches and only need some transmitters available in both switches.
@StevenKippel the stacking ports on MS225s are 40Gbps each and there is a ring formed, so theoretically 80Gbps of bandwidth between a stacked pair. As each switch has 88Gbps of bandwidth with all ports at full speed, the stack is approximately equal to that.
Meraki spec sheet has "dual 10GB stacking port" so you can theoretically do 20GB if you are using both with only two switches. My point there was that the network needs to be designed for the application. A lot of people think you can just stack switches and you get a full backplane.
@StevenKippel I'm not sure why the spec sheet would have that, we have stacks of MS225s and they definitely have two 40Gb stacking ports.
>Meraki spec sheet has "dual 10GB stacking port" so you can theoretically
Can you post a link to that? (I can't find it.) I'll ask to get that information corrected. The stacking ports on an MS225 are 40Gb/s each (a lot of documentation refers to it being 160Gb/s, which is 40Gb/s in both directions on both ports, i.e. 4 x 40).
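To put the arithmetic in one place, here's a back-of-the-envelope sketch for the 48-port MS225 discussed above, using only the figures quoted in this thread (illustrative, not a datasheet):

```python
# Rough capacity numbers for a 48-port MS225, per the figures in this thread.
access_ports = 48 * 1        # 48 x 1 Gb/s access ports
uplinks = 4 * 10             # 4 x 10 Gb/s SFP+ uplinks
per_direction = access_ports + uplinks          # 88 Gb/s one way
switching_capacity = per_direction * 2          # 176 Gb/s full duplex

stack_ports = 2 * 40                            # two 40 Gb/s stacking ports
stack_capacity = stack_ports * 2                # 160 Gb/s, the "4 x 40" figure

print(f"Line-rate traffic per direction: {per_direction} Gb/s")
print(f"Datasheet switching capacity:    {switching_capacity} Gb/s")
print(f"Stack ring, both directions:     {stack_capacity} Gb/s")
```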
I've requested that info be corrected.