Multicast Basics

Solved
Ben83
Here to help

Multicast Basics

Hi,

I have recently installed a new MX Appliance and MS Core switch.

The MS switch is handling all L3 routing between VLANs, and we have 3 existing C2960 access switches.

Our previous network consultant enabled IGMP snooping on each VLAN of all of the C2960 switches; however, the switches are not detecting a multicast router on the network.

On the new MS core switch, I have not enabled Multicast routing on any of the VLANs.

 

I don't currently have any complaints from our users and am not seeing any issues; however, I am interested to know what the default recommended setup would be.

1. What happens to multicast traffic in this scenario, where there is no multicast router?

2. Should I enable multicast routing on each of the VLAN interfaces on the MS Core switch?

3. Or should I simply enable the IGMP snooping querier, or leave it disabled if there are no issues?

 

Thanks for any advice

 


26 Replies
Adam
Kind of a big deal

I believe the default behavior will limit multicast traffic to its VLAN. If you need it to go between VLANs, you'll probably have to start enabling some of those other features.

Adam R MS | CISSP, CISM, VCP, MCITP, CCNP, ITILv3, CMNO
Ben83
Here to help

Cool, I guess I'll leave it disabled for now.

I do plan to create a VLAN for AirPlay devices, printers, Apple TVs, etc.

Is multicast routing required to have these accessible from multiple VLANs? Or does Meraki have a separate solution for this?

PhilipDAth
Kind of a big deal

Without IGMP snooping, multicast traffic is flooded out all switch ports in the VLAN.

 

"IGMP Snooping" is basically used within a VLAN to control which ports get which multicast streams.

"IGMP Snooping Querier" or "IGMP Querier" allows for multicast traffic to be routed between layer 3 VLANs.

 

https://documentation.meraki.com/zGeneral_Administration/Other_Topics/Multicast_support

Ben83
Here to help

OK, makes sense. IGMP snooping is enabled by default on the Meraki core switch.

Multicast routing and IGMP querier are optional, and are required for routing multicast traffic between VLANs.

 

m_Andrew
Meraki Employee

One clarification to share about enabling an IGMP Querier:

 

Enabling an IGMP querier on an L3 interface by itself does not permit a multicast stream to route between VLANs. For that you need to enable multicast routing on the involved interfaces, which enables the PIM Sparse Mode protocol.

 

Also, when you do enable multicast routing on an interface, the interface will run an IGMP querier in addition to PIM -- so you don't need to do anything fancy like specifically enabling queriers when using multicast routing.
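
For anyone scripting this rather than clicking through the Dashboard UI, here is a minimal sketch, assuming the Meraki Dashboard API v1 Python SDK and its switch L3 interface endpoint; the serial and interface ID below are placeholders.

# Minimal sketch, assuming the Meraki Dashboard API v1 Python SDK
# (pip install meraki); the serial and interface ID are placeholders.
import meraki

dashboard = meraki.DashboardAPI()  # API key read from MERAKI_DASHBOARD_API_KEY

SERIAL = "Q2XX-XXXX-XXXX"   # hypothetical switch serial
INTERFACE_ID = "1234"       # hypothetical L3 interface ID

# "IGMP snooping querier" runs only a querier on the interface;
# "enabled" turns on multicast routing (PIM-SM), which also runs a querier.
dashboard.switch.updateDeviceSwitchRoutingInterface(
    SERIAL, INTERFACE_ID, multicastRouting="IGMP snooping querier"
)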

 

So an open question may remain:

What is the purpose of enabling only an IGMP querier on an interface, but NOT enabling multicast routing?

 

The answer is that you still need to introduce an IGMP querier into the network for IGMP snooping to work correctly across multiple switches. The reason this is needed is the following:

 

When endpoint devices wish to receive a multicast stream, they send an IGMP report packet that indicates the IP of the group they want to join. When a switch has IGMP snooping enabled, these IGMP report packets will only get forwarded out the port where an IGMP querier has been learned (the port where IGMP queries come in). If there is no IGMP querier, and thus, no IGMP query packets, the IGMP report packets from endpoints will be dropped.
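
To make the exchange concrete, here is a rough Scapy sketch of the two packet types involved; this assumes Scapy and its IGMP contrib module are available, and the interface name is a placeholder.

# Rough illustration, assuming Scapy with its IGMP contrib module
# (pip install scapy); sending requires root and a real interface.
from scapy.all import IP, send
from scapy.contrib.igmp import IGMP

# IGMPv2 membership report from a receiver: "I want to join 239.1.1.1".
report = IP(dst="239.1.1.1", ttl=1) / IGMP(type=0x16, gaddr="239.1.1.1")

# IGMPv2 general query, as a querier sends to all-hosts (224.0.0.1).
query = IP(dst="224.0.0.1", ttl=1) / IGMP(type=0x11, gaddr="0.0.0.0")

send(report, iface="eth0")  # snooping switches learn receiver ports from these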

 

The problem this causes is easiest to understand just by considering a very simple scenario with two switches interconnected by a single link.

 

[_1ST_SWITCH_]<-------------->[_2ND_SWITCH_]

 

 

(1) Imagine there is NO IGMP querier present in the network.

 

(2) Also imagine that the 1st switch has a device connected that is sending a multicast stream to the 239.1.1.1 group -- we can pretend it's a PA system and the group traffic is the audio broadcast.

 

(3) Lastly, imagine both the 1st and 2nd switch have endpoints connected that want to receive this group, so they send IGMP reports to join 239.1.1.1. These devices may be the PA loudspeakers that play the audio.

 

You may already see what the problem is. The 1st switch is receiving the incoming audio stream, and it has a loudspeaker connected directly that has expressed interest in receiving the audio via an IGMP report. As a result, the 1st switch will only send the audio out the port with the connected loudspeaker.

 

Since the 2nd switch did not forward the IGMP report from its own receiver toward the 1st switch (no querier is present), the 1st switch has no idea that there is ALSO a receiver on the port leading to the 2nd switch. So that loudspeaker won't get the audio.

 

If we introduce an IGMP querier on the 1st switch, the problem is solved. When the 2nd switch begins to receive IGMP queries from the 1st switch, it will now forward the IGMP reports from its own receiver out the port where the queries come in. So the IGMP reports from the 2nd switch will now reach the 1st switch, allowing it to properly send the audio stream out both of the necessary ports.

 

A final question may be raised:
What happens if the IGMP querier is moved to the 2nd switch, but the 1st switch still has the audio sender connected? It turns out we are still OK here.

 

In this case, the 1st switch will again stop receiving IGMP reports from the 2nd switch, because the 2nd switch does not see any incoming IGMP queries from the 1st switch. Although the 1st switch no longer receives IGMP reports from the 2nd switch, it DOES receive IGMP queries from the 2nd switch. With IGMP snooping:

Multicast data streams will also get sent out of the ports where IGMP queries come in.

 

This is true even if that port never has any incoming IGMP reports to join the group. Basically all multicast streams flow towards the querier.
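
These rules are simple enough to capture in a toy model. The following Python sketch is purely illustrative (it is not how any real switch implements snooping); it just replays the two-switch scenario above.

# Toy model of the snooping rules described above (illustrative only).
class SnoopingSwitch:
    def __init__(self, name):
        self.name = name
        self.members = {}           # group -> ports with known receivers
        self.querier_ports = set()  # ports where IGMP queries were seen

    def on_query(self, port):
        self.querier_ports.add(port)

    def on_report(self, group, port):
        self.members.setdefault(group, set()).add(port)
        # Reports are relayed only toward a learned querier; with no
        # querier port, they go nowhere.
        return sorted(self.querier_ports - {port})

    def forward_ports(self, group, in_port):
        # Data goes to known receivers AND toward the querier.
        return (self.members.get(group, set()) | self.querier_ports) - {in_port}

sw1, sw2 = SnoopingSwitch("sw1"), SnoopingSwitch("sw2")

# No querier anywhere: sw2's report is dropped, so sw1 never learns of it.
sw1.on_report("239.1.1.1", "speaker1")
print(sw2.on_report("239.1.1.1", "speaker2"))   # [] -- report is dropped
print(sw1.forward_ports("239.1.1.1", "pa"))     # {'speaker1'} only

# Querier on sw1: sw2 learns it via the trunk and relays its report.
sw2.on_query("trunk")
print(sw2.on_report("239.1.1.1", "speaker2"))   # ['trunk'] -- reaches sw1
sw1.on_report("239.1.1.1", "trunk")             # sw1 learns the remote receiver
print(sw1.forward_ports("239.1.1.1", "pa"))     # {'speaker1', 'trunk'}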

 

This also means that, to have an optimally running network, it is good practice to place your querier as close to the source of your multicast traffic as possible. Since multicast traffic always flows toward the querier, if the source of the traffic and the querier are at opposite ends of the network, all the multicast traffic will traverse the whole network 100% of the time, irrespective of whether receivers exist.

PhilipDAth
Kind of a big deal

@m_Andrew what an excellent answer!  It deserves more kudos.

AlexObeso
Conversationalist

Agree 100%


@PhilipDAth wrote:

@m_Andrew what an excellent answer!  It deserves more kudos.


 

tmartin
New here

So I'm in a similar scenario to what you explained, except with a lighting system.

 

We have 4 switches (MS120 and MS350) that carry a lighting network (Streaming ACN/E1.31) across them (VLAN 111). The infrastructure is laid out like this:

 

{Lighting Controller/mcast tx}<-->[Switch 1/IGMP Querier]<--->[Switch 2]<--->[Switch 3]<-Fibre->[Switch 4]<--->{Lighting Device/mcast rx}

 

We created an IGMP querier on the switch closest to the multicast source, and we can see the multicast traffic on Switch 1 and 2, but not Switch 3 or 4. We do not have multicast routing enabled for the VLAN, just the IGMP querier. All the switches have IGMP snooping enabled and Unknown Multicast Flood turned off. We only have VLAN 111 enabled on the switches that are part of the signal path. Do we need to enable multicast routing for the VLAN across the whole network, or just on the 4 involved switches? We don't need to route the multicast traffic across VLANs.

PhilipDAth
Kind of a big deal

You should not need to enable multicast routing.

 

Is the lighting system connected to the MS350, by chance? If so, you should just need to create a layer 3 interface on that switch, with just the IGMP querier turned on.

 

I don't think the MS120s will support creating a layer 3 interface.

tmartin
New here

It is, and we've created the L3 IGMP querier on the switch. We were a bit confused about the IP settings required; the lighting network is all static IPs in a 10.101 scheme, so we created the interface IP at 10.101.1.1 with a subnet of 10.101.0.0/16.

 

The lighting network is VLAN 111, and all the ports on Switch 1 are set to access VLAN 111. Switch 2 has the uplink to Switch 1, and Switch 2 is uplinked to Switch 3, which has a fibre connection to Switch 4.

PhilipDAth
Kind of a big deal

That subnet is fine.

 

Have you configured the link between the switches as trunk ports?

tmartin
New here

Yes. All switch links are trunks, with access to VLAN 111.

m_Andrew
Meraki Employee

That would appear to be a fairly straightforward configuration. If the multicast sender and multicast receiver are both in VLAN 111, and you have also enabled an IGMP querier in VLAN 111, you should be good to go, provided the receiver properly sends an IGMP report expressing the desire to join the group. Since the sender and receiver are both in the same L2 domain (VLAN 111), there is no need to use multicast routing.

 

The way this would normally work is that as soon as Switch 2 gets the IGMP report packet on the port leading to Switch 3, Switch 2 will begin passing the stream out that port (and likewise for Switch 3 to pass it over to Switch 4).
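
If you want to verify where queries and reports are actually appearing, a capture on a port in the VLAN (or on a mirror of the trunk) can help. A small sketch, assuming Scapy, root privileges, and a placeholder interface name:

# Capture sketch (assumes Scapy and root; "eth0" is a placeholder for
# the interface attached to the segment you want to check).
from scapy.all import sniff

# IGMP types: 0x11 = membership query, 0x16 = IGMPv2 report, 0x17 = leave.
sniff(filter="igmp", prn=lambda pkt: print(pkt.summary()),
      timeout=300, iface="eth0")

# If no membership queries show up within a couple of query intervals
# (the default interval is typically 125 seconds), that segment is not
# hearing the querier.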

 

If you've not done so already, I would recommend upgrading to the current release candidate firmware (10.40 as of this post). Particularly if you're on any MS 9.x or older 10.x firmware, there are a variety of bug fixes related to IGMP snooping.

 

If this does not work, you may want to open a case with Support to have it looked at more closely.

Ahmed83
Here to help

Hello Gents,

 

I have a setup with 2 core switches (MS425-32) stacked and operating at L3, and 2 server farm switches (MS350-48) stacked and operating at L2. The IPTV system components, including the Cisco DCM and receivers, are all physically connected to the server farm switches, which are in turn connected to the 2 core switches. All L3 interfaces are created on the core stack. We have 1 subnet/VLAN for the DCM input and receiver output interfaces, 1 subnet/VLAN for the DCM output, and 2 different subnets/VLANs for set-top boxes and TVs. My question is how multicast should be configured on Meraki according to best practice, including:

- Which interfaces should be configured with multicast routing enabled, and which should be configured as queriers?

- Is it okay to have the interfaces created on the core while the multicast sources are all physically connected to the server farm stack? (The core is all fiber interfaces and can't host the UTP connections.)

- How should the rendezvous point interface be configured? Should it be a separate interface, or is it better to choose an existing one, for example the STB subnet interface, as the RP?

- What are the best practices for having multicast flow smoothly through the network? I have an issue where the channel stream becomes unavailable after a while (around 1 hour); if we change the channel, the stream comes back normally, then after a while it goes again, and so on. What could be the probable reason for that?

 

@m_Andrew @PhilipDAth 

PhilipDAth
Kind of a big deal

Because all layer 3 interfaces are located in the MS425 core stack, I think you should enable multicast routing on each of those layer 3 interfaces.

 

The downstream switches should be able to do IGMP snooping and use the core layer 3 interfaces as the source of queries.

 

Generally you want the source of the multicast streams to be as close to the core as possible.  I think your design should be fine.

m_Andrew
Meraki Employee

You will want to make sure there is an IGMP querier active in each VLAN that multicast flows will pass through. When an L3 interface is set to enable "Multicast Routing", the interface will also run an IGMP querier, so for VLANs where multicast-routing-enabled L3 interfaces exist, there is no need to additionally configure a separate querier interface.

 

It should be fine if the L3 interfaces are on the L3-enabled core stack but the multicast stream sources are attached to the downstream L2 access stack -- as long as there is L2 visibility.

 

If you are only doing local inter-VLAN routing for the multicast streams (between VLANs on the core stack), the configuration of the rendezvous point is less important, but as a good practice, it is most efficient to assign the RP to the L3 interface in the same VLAN as the sources.

 

When a switch running multicast routing begins to receive a new multicast stream, it will bundle the packets of the stream into an encapsulated unicast packet stream, transmitted directly to the configured rendezvous point. This is to ensure the rendezvous point always knows about the different source streams that exist in the network, no matter where they are. If the native streams are directly received in the VLAN with the rendezvous point from the get go, this unicast-encapsulated transmission to a remote rendezvous point is avoided.

 

Additionally, it is highly recommended to have IGMP snooping enabled and unknown multicast flooding disabled in your network. Both can be set from the Switch --> Switch settings page in Dashboard.

 

If you've never touched this, the default settings have IGMP Snooping enabled, but unknown multicast flooding is not disabled by default, so it would be a good step to address this.
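
For completeness, the same two settings can also be changed programmatically; a minimal sketch, assuming the Meraki Dashboard API v1 Python SDK and its switch multicast settings endpoint (the network ID is a placeholder).

# Minimal sketch, assuming the Meraki Dashboard API v1 Python SDK
# (pip install meraki); the network ID is a placeholder.
import meraki

dashboard = meraki.DashboardAPI()  # API key read from MERAKI_DASHBOARD_API_KEY

dashboard.switch.updateNetworkSwitchRoutingMulticast(
    "N_123456789012345678",  # hypothetical network ID
    defaultSettings={
        "igmpSnoopingEnabled": True,
        "floodUnknownMulticastTrafficEnabled": False,
    },
)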

 

Finally, it would also be good to use the current MS 11.x firmware if you are not already, as there are several improvements and issues that have been addressed with respect to multicast routing.

Ahmed83
Here to help

Hi Andrew,

 

Thanks for all the valuable info you have provided. Generally my setup fits everything you have mentioned except for the RP, which I have changed as per your recommendation.

 

Two points here. First, my MS switches' firmware is MS 10.45 and is shown as up to date on the Dashboard, so if 11.x has been released, shouldn't it give me an "update available" status?

 

Secondly, I think multicast is acting oddly on my network, as connectivity drops happen whenever any changes are made to multicast settings, even on other VLANs/subnets which don't have multicast routing enabled. Also, when I go to core switch > Tools > Multicast routing and run that tool, which I think should show me something like the multicast routing table, it gives me nothing!

@m_Andrew 

GreenMan
Meraki Employee

11.x will become available if you enable Try beta firmware = yes, under Network-wide > General > Firmware upgrades. Have a look under Organization > Monitor > Firmware upgrades for release notes. I'd try not to be too worried by the 'Beta' label in this instance; I'm pretty sure Support would agree with the recommendation to use 11.x, given your requirement. Support also has access to far more info on multicast status if you need to troubleshoot, so don't be afraid of raising a case with them.
IsaacG
Conversationalist

Thanks so much for your detailed explanation. It helped me understand how to design our network for a QSYS audio/video install. One problem I ran into that I couldn't figure out is that the IGMP querier did not make it past an aggregated link between Merakis. Traffic passed to the other trunked aggregate port of the switch but could not make it to the devices needing to discover or receive. I had to split the LACP port aggregation to get it to work. I had been banging my head against this for hours. I will be reporting this to Meraki support. We really need port aggregation to work, so it's an issue that we cannot discover devices with it enabled.

 

-Isaac

stefantautscher
Comes here often

Hi,

 

It seems that we are running into the same issue.

Have you opened a TAC case?

What was the outcome?

 

BR Stefan

DR1
Getting noticed

@IsaacG Was this problem resolved, to your knowledge?

 

We're now running into the same problem. It's incredibly problematic when considering the significant bandwidth requirements of something like NDI.

GBaumann
New here

Hi,

I've observed something similar in the deployment of a Dante audio network. As the audio specialist, we got mixed answers from the IT company that deployed the Merakis; they even analysed the problem with Cisco support. No fact-based answer. The information that came out was a problem with multicast table synchronisation in a stack. No idea if it is an officially reviewed bad config or a bug, but destacking and turning LAG off solved the problems. It only took 11 months to solve :). Thanks for posting, @IsaacG, as your post was the trigger for a review of the system (the possibility that switching was the problem).

IsaacG
Conversationalist

Unfortunately months passed, and they promised it would be resolved in an update, but I never found that to be the case and I let the support case slip away. They did acknowledge the issue, but we had to go with no LACP aggregation, since the QSYS core could never discover any of the devices via multicast. We had other issues with Meraki around the AV system, such as needing to force the port speed on the QSYS I/O Flex devices to 1Gbps, along with needing to set the port speed on the QSYS TSCs to 100Mbps. That could have been the devices' fault, though.

bmstark
Comes here often

I have a mixture of MS225s and MS125s with an MS425 core switch. For my AV VLAN, should I just create one interface for multicast / IGMP snooping querier, or does each switch need an interface? Here's what my Routing & DHCP screen looks like:

[Screenshot: Routing & DHCP page]

cmr
Kind of a big deal

From what I understand, you should only need an interface on the switch closest to the video source, though we actually just put them on the core interfaces...

IsaacG
Conversationalist

You only need one of them configured on the switch where your core AV device (the one that discovers all the A/V devices on the network) is connected, and put the querier on the correct VLAN. You also only need the IGMP querier if they are on separate switches. We had a Shure vendor tech support rep try to tell us that the mics didn't support VLANs, but that was just them trying to blame our network for audio streams making it to the core device and getting silently ignored (when they had been working for years before without issue).
