Crestron DM-NVX Video Distribution

Johnfitz999
Here to help

Hi all,

 

I'm from a residential technology integrator and we use Crestron DM-NVX for IP video distribution. I'd like to start using MS switches, but Crestron currently states:

 

"Compatibility issues with Meraki and DM NVX have been reported. Crestron is working with Cisco to identify and address the issues. We are sorry for any inconveniences this may have caused."

 

Has anyone overcome these issues, or does anyone know of progress on firmware fixes for this?

 

Thank you

PhilipDAth
Kind of a big deal

Are you able to say what those issues are?

Hi Philip, thanks for the quick response.

 

I haven't seen any issues firsthand, but I want to get ahead of the curve so I can specify MS switches for upcoming projects with confidence.

 

From what I can see on the Crestron forums, people are experiencing slow switching times and picture drop-outs, which they seem to think is down to the switches' IGMP support.

 

I'd love to have more info to give you, but that's all I've found in my research so far.

 

Thank you

Is this being routed across VLANs, or IGMP within a VLAN?

In our case, 99% of the time the IP video distribution will be on a single VLAN, but potentially across several switches in a physical stack.

 

There will always be an MX in place, so I can add any inter-VLAN firewall rules we need.

I'm in a similar situation where the client requested Meraki to be managed by their IT department, and we have NVX specified. Trying to verify compatibility is a challenge. Crestron informed me that there's a compatibility issue with Meraki with regard to multicast IGMP Fast Leave. Meraki said Fast Leave is on by default. Crestron recommended going through the Cisco configuration guide and comparing it with Meraki. I did that, and all the settings seem to match up. I don't want to order a bunch of Meraki switches if they won't work, yet I can't seem to get an answer on this.
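If it helps anyone doing the same comparison, the relevant Meraki switch settings can be read back through the Dashboard API rather than eyeballed in the GUI. A minimal sketch using the official `meraki` Python library; the API key and network ID are placeholders:

```python
import meraki

API_KEY = "your-api-key"     # placeholder
NETWORK_ID = "N_1234567890"  # placeholder network ID

dashboard = meraki.DashboardAPI(API_KEY, suppress_logging=True)

# Network-wide switch multicast settings: IGMP snooping and
# unknown-multicast flooding behaviour.
settings = dashboard.switch.getNetworkSwitchRoutingMulticast(NETWORK_ID)
defaults = settings["defaultSettings"]

print("IGMP snooping enabled: ", defaults["igmpSnoopingEnabled"])
print("Flood unknown multicast:", defaults["floodUnknownMulticastTrafficEnabled"])

# Any per-switch or per-stack overrides that have been configured.
for override in settings.get("overrides", []):
    print("Override:", override)
```

Fast Leave itself doesn't appear to be exposed as a separate field here; this only confirms the snooping and flooding behaviour the thread keeps coming back to.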

We are in a similar situation: MS switches providing a Layer 2 transport for NVX transmitters and receivers. With anything more than one transmitter, we lose all video and cannot ping the controllers. Any update from Cisco, please?

JHenwood
Conversationalist

I've always been a huge supporter of Meraki, but I haven't been able to get a DM NVX system working reliably. I have IGMP snooping turned on and flood control on, yet once I patch in a decoder and the multicast traffic starts hitting my network, it takes it completely down (not just within its VLAN; it ramps the CPU of my MX up to 100% and packet loss hits 100%). I did get one encoder and decoder working perfectly for about a week, then out of the blue it started ramping up the CPU again, and I haven't gotten it to work since (no network config, firmware, DM NVX firmware, or DM NVX config had been changed).

 

I took to Reddit to see if there was any solution for Meraki and DM NVX and was basically met with "it's a known issue and you can't use DM NVX on a Meraki network"... which, since I'm running a full Meraki stack, isn't really an option for me.

 

Reddit Link: https://www.reddit.com/r/crestron/comments/jcl2p8/dm_nvx_with_meraki_switches/

Disable the "flood unknown multicast traffic" setting.
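For reference, that setting can also be flipped through the Dashboard API. A hedged sketch with the `meraki` Python SDK (placeholder key and network ID); it leaves IGMP snooping on and only disables flooding of unknown multicast:

```python
import meraki

API_KEY = "your-api-key"     # placeholder
NETWORK_ID = "N_1234567890"  # placeholder network ID

dashboard = meraki.DashboardAPI(API_KEY, suppress_logging=True)

# Keep IGMP snooping on, but stop flooding multicast streams that
# have no known receivers out of every port in the VLAN.
dashboard.switch.updateNetworkSwitchRoutingMulticast(
    NETWORK_ID,
    defaultSettings={
        "igmpSnoopingEnabled": True,
        "floodUnknownMulticastTrafficEnabled": False,
    },
)
```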

It already is disabled, and the issue still occurs.

Then make sure all uplinks are set to trunk and all switch ports are set to access. Exclude the video VLAN from any switch trunk ports which don’t need it.
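A sketch of what that might look like via the API - the switch serial, port numbers, and VLAN IDs below are purely illustrative:

```python
import meraki

API_KEY = "your-api-key"          # placeholder
SWITCH_SERIAL = "QXXX-XXXX-XXXX"  # placeholder switch serial
VIDEO_VLAN = 100                  # hypothetical NVX video VLAN

dashboard = meraki.DashboardAPI(API_KEY, suppress_logging=True)

# NVX endpoint ports: plain access ports in the video VLAN.
for port_id in ["1", "2", "3"]:
    dashboard.switch.updateDeviceSwitchPort(
        SWITCH_SERIAL, port_id,
        type="access",
        vlan=VIDEO_VLAN,
    )

# Uplink to a closet with no NVX gear: a trunk with the video VLAN
# pruned from the allowed list.
dashboard.switch.updateDeviceSwitchPort(
    SWITCH_SERIAL, "52",
    type="trunk",
    vlan=1,                          # native VLAN
    allowedVlans="1,10-99,101-999",  # everything except VLAN 100
)
```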

This post looks like you might have overcome the challenges?

We've got a system going right now. We'll see if it changes. I know results have been mixed for everyone.

Hi @StevenKippel , did you manage to get the system stable and working?

Bruce
Kind of a big deal

Just adding a few comments to this thread that may help. Feel free to make comments if you feel that I’ve got something wrong.

 

  • You need to follow Crestron’s recommendations and requirements for the network - not only because they won’t support you otherwise, but because NVX is terribly bandwidth-hungry (and that’s a whole other conversation)
  • For every encoder on the network you need to ensure there is at least 1Gbps of uplink bandwidth to wherever that traffic needs to go - especially towards your RP (rendezvous point) if you’re using PIM and multicast routing. So if you have 20 encoders downstream of a core then you’ll need at least 20Gbps of uplink from there to the core (see the uplink-sizing sketch after this list). Yes, for a large NVX deployment it can end up being crazy numbers, but that’s what you need.
  • When you consider the above, avoid putting the stream through any MX. The MX devices just replicate multicast traffic to all other ports in the same broadcast domain (there are no IGMP smarts) - my guess, although one of the Meraki team will need to confirm, is that this is done in software. With this replication, and the bandwidth of the streams that NVX encoders produce, you’ll likely overwhelm almost any MX to the point of complete meltdown.
  • From what I’ve seen, the encoders produce a continual multicast stream whether or not anything is listening to it. So the minute any encoder boots, expect to see a huge amount of multicast data on the network. Admittedly this may just be poor configuration of the encoders I’ve seen, but be aware. (And obviously make sure broadcast/storm controls are set appropriately.)
  • Never flood multicast - even with only a few NVX encoders on your network, flooding their multicast traffic will melt your network down. The amount of traffic produced by a few of these encoders will easily overwhelm a 1Gbps port if not controlled with IGMP. So you will need IGMP snooping and an IGMP querier on the Layer 3 interface, and PIM-SM/multicast routing if you expect to go across VLANs (i.e., an MX with a Layer 3 interface isn’t going to cut the mustard).
  • As others have said, if the NVX traffic doesn’t need to go somewhere make sure that you prune those VLANs from any trunks that you have. In this regard it’s critical that you understand your STP topology and where that multicast traffic is going.
  • Ironically the MS390 switches appear to do a better job with multicast and IGMP (especially at this scale) than the traditional Meraki switches - it’s just a shame that there are currently so many other limitations with them.
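To put numbers on the uplink-sizing rule in the second bullet, a quick back-of-the-envelope sketch; the ~1Gbps-per-encoder figure comes from the bullet itself, and the link speeds are illustrative:

```python
import math

# Rule of thumb from above: budget ~1 Gbps of uplink per encoder whose
# stream has to cross that uplink.
PER_ENCODER_GBPS = 1.0

def required_uplink_gbps(encoders_downstream: int) -> float:
    """Minimum uplink bandwidth for a closet with this many encoders
    sending towards the core."""
    return encoders_downstream * PER_ENCODER_GBPS

def uplinks_needed(encoders_downstream: int, link_gbps: float = 10.0) -> int:
    """How many uplinks of a given speed that works out to, rounded up."""
    return math.ceil(required_uplink_gbps(encoders_downstream) / link_gbps)

# The example from the post: 20 encoders downstream of the core.
print(required_uplink_gbps(20))          # 20.0 -> at least 20 Gbps of uplink
print(uplinks_needed(20, link_gbps=10))  # 2 -> at least two 10 Gbps links
```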

I can’t guarantee that this will get the Crestron NVX solution to work every time, but it’s going to give you the best chance of getting it to work.

Spectre
Meraki Employee

Thanks Bruce, appreciate the comments.

GIdenJoe
Kind of a big deal

Nice, Bruce - you seem to be the only one so far who mentions the important fact that the VLAN needs an IGMP querier for IGMP snooping to work properly.
This is another case for keeping user LANs behind a core switch instead of terminating them directly on the MX.
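On MS switches the querier role can be assigned per Layer 3 interface, including through the Dashboard API. A hedged sketch with the `meraki` SDK - the serial and video VLAN ID are placeholders:

```python
import meraki

API_KEY = "your-api-key"        # placeholder
CORE_SERIAL = "QXXX-XXXX-XXXX"  # placeholder core-switch serial
VIDEO_VLAN = 100                # hypothetical NVX video VLAN

dashboard = meraki.DashboardAPI(API_KEY, suppress_logging=True)

# Find the L3 interface for the video VLAN and make it the IGMP
# snooping querier for that VLAN.
for interface in dashboard.switch.getDeviceSwitchRoutingInterfaces(CORE_SERIAL):
    if interface.get("vlanId") == VIDEO_VLAN:
        dashboard.switch.updateDeviceSwitchRoutingInterface(
            CORE_SERIAL,
            interface["interfaceId"],
            multicastRouting="IGMP snooping querier",
        )
```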

 

I'm curious whether the Meraki switches behave correctly in this regard if you carry your management VLAN at Layer 2 through the core switch onto the MX, provided multicast packets are not flooded across the L3 interfaces into other VLANs.

Aaron_Wilson
A model citizen

Are all of the Meraki switches non-blocking, or only certain ones?

According to the datasheets, all switches have a backplane switching capacity equal to the sum of all maximum port speeds x 2, so that should be non-blocking.

The only scenario I can think of is when you have several stack members and you oversubscribe the inter-switch links.

The stack is limited to the link's speed. If you have an MS225, your stack link is 10Gb/s - that's ten 1Gb/s ports' worth. That switch has 48 1Gb/s ports and four 10Gb/s ports, for a total of 176Gb/s of throughput (88Gb/s doubled for full duplex), and sure enough that's the quoted switching capacity. Your stack won't exceed the switching capacity because the stack link is already accounted for as one of those 10Gb/s ports.

You have to take this into account when designing an NVX system. A design with this switch stacked will limit you to 10 transmitters regardless of the number of ports you have - unless you are splitting two separate systems across two different switches and only need some transmitters available in both.
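To illustrate that constraint, a trivial sketch - note the 10Gb/s stack figure is this poster's reading of the spec sheet, and it's disputed a couple of replies down (cmr quotes 40Gb/s per stacking port):

```python
import math

def max_transmitters_across_stack(stack_link_gbps: float,
                                  stream_gbps: float = 1.0) -> int:
    """How many ~1 Gbps NVX streams can cross a stack link before it
    saturates (the real limit depends on which figure is right)."""
    return math.floor(stack_link_gbps / stream_gbps)

print(max_transmitters_across_stack(10.0))  # 10, per the 10 Gb/s reading
print(max_transmitters_across_stack(80.0))  # 80, if the ring gives 80 Gb/s
```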

cmr
Kind of a big deal

@StevenKippel the stacking ports on MS225s are 40Gbps each, and a ring is formed, so theoretically 80Gbps of bandwidth between a stacked pair. As each switch has 88Gbps of bandwidth with all ports at full speed, the stack is approximately equal to that.

The Meraki spec sheet says "dual 10GB stacking port", so you can theoretically do 20Gb/s if you are using both with only two switches. My point there was that the network needs to be designed for the application. A lot of people think you can just stack switches and get a full backplane.

cmr
Kind of a big deal

@StevenKippel I'm not sure why the spec sheet would have that, we have stacks of MS225s and they definitely have two 40Gb stacking ports.

>Meraki spec sheet has "dual 10GB stacking port" so you can theoretically

 

Can you post a link to that? (I can't find it.) I'll ask to get that information corrected. The stacking ports on an MS225 are 40Gb/s each (a lot of documentation refers to 160Gb/s, which is 40Gb/s in both directions on both ports - 4 x 40).

I've requested that info be corrected.
