Multicasting Setup Over OSPF

mcbrown
Comes here often

I have a network with 2 locations/buildings:
Main Building (core)
Remote building

The core/main building (425 stack) connects to the remote building (350 stack) via a trunk. OSPF is enabled on both sides, and the trunk between the two locations only allows the management VLAN and the transit VLAN used for OSPF.

I have two servers at the core that will source multicast traffic.

 

Server 1 sits on VLAN 100 at the core (OSPF enabled)
Server 2 sits on VLAN 10 at the core (OSPF enabled)

 

The receivers for the multicast traffic from Server 1 are on VLAN 100 (10.10.100.0/24) at the main building and VLAN 100 (10.11.100.0/24) defined locally on the 350s at the remote site.
The receivers for the multicast traffic from Server 2 can be on VLANs 10 (10.10.10.0/24) and 15 (10.10.15.0/24) defined on the core at the main building, and VLANs 10 (10.11.10.0/24) and 15 (10.11.15.0/24) defined on the 350s at the remote site.

 

For multicasting (rough config sketch below):
On the core, set "Multicast Routing" to "Enabled Multicast Routing" on VLAN 10 and VLAN 100.
Set a rendezvous point on the core.
Enable "IGMP snooping" and disable "Flood unknown multicast traffic" on the 425 stack at the core and on the 350 stack at the remote site.
At the remote site, set "Multicast Routing" to "Enabled Multicast Routing" or "IGMP querier" on VLAN 10 and VLAN 100.
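For reference, here is roughly how I'd push the snooping/flooding defaults with the Meraki Python SDK. This is just a sketch, not tested: the API key and network ID are placeholders, and I'm assuming the v1 field names.

```python
import meraki

API_KEY = "your-api-key"          # placeholder
NETWORK_ID = "L_123456789"        # placeholder network ID

dashboard = meraki.DashboardAPI(API_KEY, suppress_logging=True)

# Network-wide defaults: IGMP snooping on, flooding of unknown multicast off
dashboard.switch.updateNetworkSwitchRoutingMulticast(
    NETWORK_ID,
    defaultSettings={
        "igmpSnoopingEnabled": True,
        "floodUnknownMulticastTrafficEnabled": False,
    },
)
```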


Am I on the right track here?
Is there a preferred IP/interface for the rendezvous point?
Is there an advantage to adding rendezvous points on the core for both the SVI of VLAN 10 and the SVI of VLAN 100?
Do I need a rendezvous point on the remote site's 350 stack?
Does the transit VLAN need multicast routing enabled?

 

Thanks

GIdenJoe
Kind of a big deal

Hey,

 

So on both switches that share the transit segment, you will need to enable multicast routing not only on the local VLANs facing the servers/clients, but also on the transit VLAN!

 

About the placement of the rendezvous point: since you only have one path between the two buildings, it is not that relevant here. If you had a scenario with multiple paths, it would be important to put the rendezvous point at a router/switch closest to ALL of your multicast sources.

 

So if you have multiple subnets living behind separate routers/L3 switches, it is better to move the RP up to a more central point where the paths towards the clients converge.

 

The reasoning is that you start with a shared tree, where all multicast flows to the RP and then from the RP towards the end clients. Once traffic is flowing, the network works out the source tree, which is the shortest path between server and client. If this path does not differ, or differs only a little, from the shared tree, the cutover will be easier. There's a toy illustration of this below.
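Here is a small Python sketch of that idea on a made-up topology (hypothetical node names, plain hop counts, nothing Meraki-specific): with the RP at the core, the shared tree already matches the source tree, while an RP off to one side makes traffic hairpin through the core until the SPT cutover.

```python
from collections import deque

# Hypothetical topology as an adjacency list (not this thread's network)
topo = {
    "server": ["core"],
    "core":   ["server", "edge", "access"],
    "edge":   ["core"],             # a leaf switch off to the side
    "access": ["core", "client"],
    "client": ["access"],
}

def shortest_path(graph, src, dst):
    """Plain BFS shortest path by hop count."""
    prev, seen, queue = {}, {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                prev[nbr] = node
                queue.append(nbr)

def shared_tree_path(graph, src, rp, dst):
    """Shared tree: source -> RP, then RP -> receiver."""
    return shortest_path(graph, src, rp) + shortest_path(graph, rp, dst)[1:]

# RP on the core: shared tree equals the source tree, so cutover is trivial
print(shared_tree_path(topo, "server", "core", "client"))
# ['server', 'core', 'access', 'client']

# RP on 'edge': traffic hairpins through the core until the SPT cutover
print(shared_tree_path(topo, "server", "edge", "client"))
# ['server', 'core', 'edge', 'core', 'access', 'client']

print(shortest_path(topo, "server", "client"))  # the source (shortest-path) tree
# ['server', 'core', 'access', 'client']
```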

 

I hope this helps.

mcbrown
Comes here often

Thanks GIdenJoe. 

 

My transit VLAN is VLAN 20 on the 425s at the main site and VLAN 20 on the 350s at the remote site.

 

So in this case I should be doing the following:

CORE:

set "Multicast Routing" to "Enabled Multicast Routing" on Vlans 10,15,20,100

 

REMOTE:

Instead of setting "Multicast Routing" to "Enabled Multicast Routing or IGMP querier", set "Multicast Routing" to "Enabled Multicast Routing" on VLANs 10, 15, and 20.

 

Basically, enable multicast routing on every sending and receiving VLAN and on any transit VLANs? Correct? Something like the sketch below?
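In script form I'm assuming it would look something like this per SVI (the stack and interface IDs are placeholders; I'm going by the v1 updateNetworkSwitchStackRoutingInterface call in the Python SDK):

```python
import meraki

dashboard = meraki.DashboardAPI("your-api-key", suppress_logging=True)

NETWORK_ID = "L_CORE_NETWORK_ID"     # placeholder
STACK_ID = "CORE_STACK_ID"           # placeholder
# SVI interface IDs for VLANs 10, 15, 20 (transit), and 100 -- placeholders
SVI_IDS = ["IFACE_VLAN10", "IFACE_VLAN15", "IFACE_VLAN20", "IFACE_VLAN100"]

for iface_id in SVI_IDS:
    # "enabled" turns on full multicast (PIM-SM) routing on the SVI
    dashboard.switch.updateNetworkSwitchStackRoutingInterface(
        NETWORK_ID, STACK_ID, iface_id, multicastRouting="enabled"
    )
```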

 

Does it make sense to make the rendezvous point the transit VLAN's SVI IP, since that really is central to both switches?

 

Is there a need to set a rendezvous point at the remote site, or only at the core?

GIdenJoe
Kind of a big deal

So yes: enable it on every VLAN and transit network used for the multicast, because if you don't add them, PIM messages will not flow and those interfaces cannot be used as outgoing interfaces for multicast streams.
For the rendezvous point, you will probably want to select the core switch's IP address in the transit VLAN. Since you can't create loopbacks on MS switches, you have to use an interface address. I would use the same RP for all multicast groups, so you can just fill in "Any".
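If you are scripting it, the RP call would look roughly like this (a sketch against the v1 rendezvous-point endpoint; the network ID and the transit-VLAN SVI address below are placeholders, not values from this thread):

```python
import meraki

dashboard = meraki.DashboardAPI("your-api-key", suppress_logging=True)

# RP = the core switch's SVI address in the transit VLAN;
# "Any" applies the RP to all multicast groups
dashboard.switch.createNetworkSwitchRoutingMulticastRendezvousPoint(
    "L_CORE_NETWORK_ID",           # placeholder network ID
    interfaceIp="10.10.20.1",      # hypothetical transit-VLAN SVI address
    multicastGroup="Any",
)
```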

mcbrown
Comes here often

Is multicast routing only needed on the sender VLANs and transit VLANs, or is it needed on receiver-only VLANs too?

 

Got it. I set it to the transit VLAN SVI at the core, and at the remote site I set the rendezvous point using the custom IP option, pointing it to the core transit VLAN's IP.

 

GIdenJoe
Kind of a big deal

Any VLAN that transits, sends, or receives multicast streams that traverse IP subnets needs PIM-SM enabled 😉

mcbrown
Comes here often

I installed all the new switches last night, and it doesn't look like multicast is routing correctly. I need multicast for InformaCast Fusion, which is our PA system, and for computer imaging, but I am most concerned about the PA system right now.

 

My InformaCast server and speakers are all on VLAN 100 (10.10.100.0/24) at the main building. There is only one server for the network. At the remote site my speakers sit on VLAN 100 (10.11.100.0/24).

 

I switched some things around with support today.  Here is what I have set right now:

 

On the core 425s:

Multicast routing is enabled on VLAN 100 and on the transit VLAN.

My rendezvous point is the SVI of VLAN 100: 10.10.100.1.

On the 350s:

Multicast routing is enabled on VLAN 100 and the transit VLAN.

The rendezvous point on the 350 network is set to 10.10.100.1 (the SVI of VLAN 100 on the 425s).

 

The speakers are all connected to a combination of Meraki and Cisco switches trunked to the core 425s at the main location, and a combination of Cisco and Meraki switches trunked to the 350s at the remote location.

 

I was working with Meraki support for most of the day and they said:

"Verified that the groups are being formed but all joining different groups. While most speakers are joining group 227.75.76.77, the server is only joining group 239.x.x.253, preventing the devices from communicating."

 

Any ideas what could be causing this? Is it a routing issue?

 

GIdenJoe
Kind of a big deal

The server doesn't have to join any group. You'll have to verify which group the server is sending the streams to. Just perform a packet capture on the link to the server and limit it with the following capture filter: net 224.0.0.0/4.

That way you can identify which streams are actually being transmitted by the server.
Then perform captures at several points (the transit VLAN and the destination network 10.11.100.0/24) to see whether the packets are getting there at L3.

Finally, check whether the streams actually arrive on the speaker ports.
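As a convenience, that first capture can also be scripted with scapy (a sketch; the interface name is a placeholder, and the BPF filter matches the whole 224.0.0.0/4 multicast range):

```python
from scapy.all import sniff
from scapy.layers.inet import IP, UDP

def show_stream(pkt):
    """Print source, multicast group, and UDP port for each packet seen."""
    if IP in pkt:
        dport = pkt[UDP].dport if UDP in pkt else "-"
        print(f"{pkt[IP].src} -> {pkt[IP].dst} (UDP dport {dport})")

# Capture 50 multicast packets on the server-facing interface (placeholder name)
sniff(iface="eth0", filter="net 224.0.0.0/4", prn=show_stream, count=50)
```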

 

Receiving devices that are on the same VLAN as your server should work without a problem: since you enabled PIM on 10.10.100.0/24, the IGMP querier function should also work. And even in the odd case it didn't, the switch would flood the multicast to all ports, unless you disabled that feature of course. A quick IGMP capture (below) will confirm the querier and the membership reports.
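A minimal way to check that, again with scapy (the interface name is a placeholder):

```python
from scapy.all import sniff

# Show IGMP traffic on the VLAN: periodic queries should come from the
# querier, and membership reports from the receivers (speakers)
sniff(iface="eth0", filter="igmp", prn=lambda pkt: pkt.summary(), count=20)
```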
