Community Record
Posts: 23
Kudos: 85
Solutions: 2
Aug 9 2019
5:24 PM
1 Kudo
Hello, as mentioned, you will get a richer, more fully featured SD-WAN experience using an MX. However, if all you’ve got is an MS250, one feature you might be able to leverage is ECMP routing. If both upstream gateways can run OSPF and each can advertise a default external route (0.0.0.0/0) to the MS250 with the same cost, the MS250 will load balance traffic to both gateways using ECMP. It is purely active/active and won’t provide as much flexibility as an MX (nor will it do NAT), but it is a possible configuration to use in a pinch.
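To illustrate the idea, here's a minimal Python sketch of how ECMP typically picks a path: a hash over the flow's addresses and ports pins each flow to one of the equal-cost gateways. The next-hop addresses are made up, and this is not the switch's actual hash function.

```python
import hashlib

# Two equal-cost OSPF default routes learned from the upstream gateways
# (illustrative next-hop addresses).
NEXT_HOPS = ["10.0.0.1", "10.0.0.2"]

def pick_next_hop(src_ip, dst_ip, src_port, dst_port):
    """Hash the flow fields so every packet of a given flow takes the
    same path -- the usual per-flow ECMP behavior."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return NEXT_HOPS[digest % len(NEXT_HOPS)]

# The same flow always maps to the same gateway:
print(pick_next_hop("192.168.1.10", "8.8.8.8", 51512, 443))
```

The per-flow hashing is why this kind of load balancing is safe for TCP: packets belonging to one connection never get sprayed across both paths.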
I would like to clarify the distinction between the STP event and the link event to remove any confusion. When a switch port experiences a loss of PHY (the link goes down), it is at this point that STP also "disables" the port. In other words, these events do not mean that the port is being disabled by STP; they mean that, as a result of the port going down, the STP state changes to disabled. Even if the event log ordering appears reversed, this is still the case. This doesn't explain why a port goes down; it only clarifies that the port isn't going down because of any STP state -- the STP change is itself a result of the loss of link. It also doesn't cover all cases: be aware that many devices have various power-saving optimizations on the Ethernet side, which may result in downgrading the link speed or bringing a link down for periods of time.
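If it helps, here's a tiny Python sketch of the causal ordering described above (purely illustrative, not the actual firmware logic): the PHY loss is the cause, and the STP "disabled" state is the effect.

```python
from enum import Enum

class StpState(Enum):
    FORWARDING = "forwarding"
    DISABLED = "disabled"

class Port:
    def __init__(self):
        self.link_up = True
        self.stp_state = StpState.FORWARDING
        self.events = []

    def on_phy_loss(self):
        # The link event is the cause...
        self.link_up = False
        self.events.append("link down")
        # ...and the STP event is the effect: a port with no link
        # cannot participate in spanning tree, so its state becomes
        # disabled.
        self.stp_state = StpState.DISABLED
        self.events.append("STP state -> disabled")

p = Port()
p.on_phy_loss()
print(p.events)  # the log may display these two in either order
```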
Mar 19 2019
1:40 PM
4 Kudos
You will want to make sure there is an IGMP querier active in each VLAN where multicast flows will be passing. When an L3 interface has "Multicast Routing" enabled, that interface will also run an IGMP querier, so for VLANs where multicast-routing-enabled L3 interfaces exist, there is no need to configure a separate querier interface. It is fine if the L3 interfaces are on the L3-enabled core stack while the multicast stream sources are attached to the downstream L2 access stack -- as long as there is L2 visibility.

If you are only doing local inter-VLAN routing for the multicast streams (between VLANs on the core stack), the configuration of the rendezvous point is not as important, but as a good practice it is most efficient to assign the RP to the L3 interface in the same VLAN as the sources. When a switch running multicast routing begins to receive a new multicast stream, it will encapsulate the packets of the stream into a unicast packet stream transmitted directly to the configured rendezvous point. This ensures the rendezvous point always knows about the different source streams that exist in the network, no matter where they are. If the native streams are received directly in the RP's VLAN from the get-go, this unicast-encapsulated transmission to a remote rendezvous point is avoided.

Additionally, it is highly recommended that your network have IGMP snooping enabled and unknown multicast flooding disabled. Both can be set from the Switch --> Switch settings page in Dashboard. If you've never touched this, the defaults have IGMP snooping enabled, but unknown multicast flooding is not disabled by default, so it would be a good step to address this.

Finally, it would also be good to use the current MS 11.x firmware if you are not already, as several issues related to multicast routing have been addressed there.
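For the RP placement point, here's a rough Python sketch of the first-hop decision described above (illustrative only; the VLAN numbers are made up): a new source stream is unicast-encapsulated toward the RP, except when the RP interface already sits in the source's VLAN.

```python
def handle_new_source(source_vlan, rp_vlan):
    """Illustrative PIM sparse mode first-hop behavior: new streams are
    unicast-encapsulated toward the rendezvous point so the RP learns
    about every active source in the network."""
    if source_vlan == rp_vlan:
        # The RP sees the native stream directly in its own VLAN, so
        # the unicast-encapsulated delivery to a remote RP is avoided.
        return "native delivery, no encapsulation needed"
    return "encapsulate the stream in unicast packets sent to the RP"

print(handle_new_source(source_vlan=10, rp_vlan=10))  # RP near the source
print(handle_new_source(source_vlan=20, rp_vlan=10))  # RP far from the source
```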
This is not possible. However, if you have two MS350s configured either as a warm spare pair (VRRP) or in a switch stack and you run a DHCP server on them, DHCP leases will be synced between them for failover purposes.
Nov 27 2018
3:25 PM
Hello, you are correct. A switch treats a received multicast packet as unknown when the destination group is one for which the switch has not observed an explicit group join (signaled via an IGMP report packet). If flood-unknown is enabled, such packets will get forwarded out all ports (except the ingress port). If flood-unknown is disabled, the packet will only get forwarded out the ports where an IGMP querier or MRD advertiser is known. If none are known, the packet will be discarded by the switch.
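The decision logic can be summarized in a short Python sketch (a simplification with made-up port numbers, not the real forwarding plane):

```python
def forward_ports(group, ingress, snooped_joins, flood_unknown,
                  router_ports, all_ports):
    """Illustrative IGMP-snooping decision for a received multicast frame.

    snooped_joins: {group: set of ports} learned from IGMP reports
    router_ports:  set of ports where a querier/MRD advertiser was seen
    """
    if group in snooped_joins:
        # Known group: deliver to joined ports plus querier ports.
        out = (snooped_joins[group] | router_ports) - {ingress}
    elif flood_unknown:
        # Unknown group, flooding enabled: every port but the ingress.
        out = set(all_ports) - {ingress}
    else:
        # Unknown group, flooding disabled: only querier/MRD ports.
        # If that set is empty, the frame is discarded.
        out = router_ports - {ingress}
    return out

# Unknown group, flooding disabled, querier known on port 4:
print(forward_ports("239.1.1.1", ingress=1, snooped_joins={},
                    flood_unknown=False, router_ports={4},
                    all_ports={1, 2, 3, 4}))  # -> {4}
```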
Oct 21 2018
6:11 PM
1 Kudo
That would appear to be a fairly straightforward configuration. If the multicast sender and multicast receiver are both in VLAN 111, and you have also enabled an IGMP querier in VLAN 111, you should be good to go, provided the receiver properly sends an IGMP report expressing the desire to join the group. Since the sender and receiver are both in the same L2 domain (VLAN 111), there is no need to use multicast routing. The way this would normally work is that as soon as Switch 2 gets the IGMP report packet on the port leading to Switch 3, Switch 2 will begin passing the stream out that port (and likewise for Switch 3 passing it over to Switch 4). If you've not done so already, I would recommend upgrading to the current release candidate firmware (10.40 as of this post). Particularly if you're on any MS 9.x or older 10.x firmware, there are a variety of bug fixes related to IGMP snooping. If this does not work, you may want to open a case with Support to have it looked at more closely.
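In code form, the snooping behavior on Switch 2 amounts to something like this minimal sketch (the port name is made up):

```python
class SnoopingSwitch:
    """Toy model of IGMP snooping state, for illustration only."""

    def __init__(self, name):
        self.name = name
        self.members = {}  # group -> set of ports where joins were seen

    def on_igmp_report(self, group, port):
        # Learn that a receiver (or a downstream switch with receivers)
        # lives behind this port.
        self.members.setdefault(group, set()).add(port)

    def egress_ports(self, group):
        return self.members.get(group, set())

sw2 = SnoopingSwitch("Switch 2")
sw2.on_igmp_report("239.1.1.1", port="uplink-to-switch-3")
print(sw2.egress_ports("239.1.1.1"))  # the stream now flows toward Switch 3
```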
Aug 17 2018
9:54 AM
47 Kudos
One clarification to share about enabling an IGMP querier: enabling an IGMP querier on an L3 interface by itself does not permit a multicast stream to route between VLANs. For that you need to enable multicast routing on the involved interfaces, which enables the PIM sparse mode protocol. Also, when you do enable multicast routing on an interface, the interface will run an IGMP querier in addition to PIM -- so you don't need to do anything fancy like specifically enabling queriers when using multicast routing.

So an open question may remain: what is the purpose of only enabling an IGMP querier on an interface, but NOT enabling multicast routing? The answer is that you still need to introduce an IGMP querier in the network for IGMP snooping to work correctly across multiple switches. The reason is the following: when endpoint devices wish to receive a multicast stream, they send an IGMP report packet that indicates the IP of the group they want to join. When a switch has IGMP snooping enabled, these IGMP report packets will only get forwarded out the port where an IGMP querier has been learned (the port where IGMP queries come in). If there is no IGMP querier, and thus no IGMP query packets, the IGMP report packets from endpoints will be dropped.

The problem this causes is easiest to understand by considering a very simple scenario with two switches interconnected by a single link:

[_1ST_SWITCH_]<-------------->[_2ND_SWITCH_]

(1) Imagine there is NO IGMP querier present in the network.
(2) Also imagine that the 1st switch has a device connected that is sending a multicast stream to the 239.1.1.1 group -- we can pretend it's a PA system and the group traffic is the audio broadcast.
(3) Lastly, imagine both the 1st and 2nd switch have endpoints connected that want to receive this group, so they send IGMP reports to join 239.1.1.1. These devices may be the PA loudspeakers that play the audio.

You may already see what the problem is. The 1st switch is receiving the incoming audio stream, and it has a loudspeaker connected directly that has asked to receive the audio via an IGMP report. As a result, the 1st switch will only send the audio out the port with the connected loudspeaker. Since the 2nd switch did not forward the IGMP report from its own receiver toward the 1st switch (no querier is present), the 1st switch has no idea that there is ALSO a receiver on the port leading to the 2nd switch. So that loudspeaker won't get the audio.

If we introduce an IGMP querier on the 1st switch, the problem is solved. When the 2nd switch begins to receive IGMP queries from the 1st switch, it will now forward the IGMP reports from its own receiver out the port where the queries come in. So the IGMP reports from the 2nd switch will now reach the 1st switch, allowing it to properly send the audio stream out both of the necessary ports.

A final question may be raised: what happens if the IGMP querier is moved to the 2nd switch, but the 1st switch still has the audio sender connected? It turns out we are still OK here. In this case, the 1st switch will again stop receiving IGMP reports from the 2nd switch, because the 2nd switch does not see any incoming IGMP queries from the 1st switch. But although the 1st switch no longer receives IGMP reports from the 2nd switch, it DOES receive IGMP queries from the 2nd switch. With IGMP snooping, multicast data streams are also sent out the ports where IGMP queries come in, even if those ports never see any incoming IGMP reports to join the group. Basically, all multicast streams flow toward the querier.

This also means that for the most optimized network, it is good practice to place your querier as close to the source of your multicast traffic as possible. Since multicast traffic always flows to the querier, if the source of the traffic and the querier are at opposite ends of the network, all the multicast traffic will traverse the whole network 100% of the time, irrespective of whether receivers exist.
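The two-switch scenario can be condensed into a few lines of Python (again just a toy model with made-up port names) showing why the querier makes the difference:

```python
def switch1_egress(querier_present):
    """Ports on the 1st switch that will carry the 239.1.1.1 audio.

    With snooping, the 2nd switch only relays its receiver's IGMP report
    toward the 1st switch if IGMP queries are arriving on that link.
    """
    ports = {"local-loudspeaker-port"}        # report seen directly
    if querier_present:
        ports.add("link-to-2nd-switch")       # relayed report arrives
    return ports

print(switch1_egress(querier_present=False))  # 2nd switch's speaker stays silent
print(switch1_egress(querier_present=True))   # both loudspeakers get the audio
```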
The MS120 does support a DHCP relay configuration. Go to Switch --> Routing & DHCP, press the button to add an interface, and select the MS120 as the switch you are adding an interface on. Since the MS120 is an L2 switch, these interfaces are limited in their operation: they cannot be used for general-purpose routing or full-fledged DHCP servers -- only DHCP relay and IGMP querier configurations are supported. The following article explains how a DHCP relay works: https://documentation.meraki.com/MX-Z/DHCP/Configuring_DHCP_Relay Note the article references configurations on an MX instead of an MS, but the MS supports this same relay functionality.
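For reference, the relay operation itself boils down to the following sketch (illustrative addresses; `giaddr` is the relay agent field from RFC 2131):

```python
def relay_dhcp_discover(packet, relay_interface_ip, dhcp_server_ip):
    """Illustrative DHCP relay: the client's broadcast is re-sent as a
    unicast to the server, with giaddr stamped so the server knows
    which subnet/scope to allocate from."""
    relayed = dict(packet)
    relayed["giaddr"] = relay_interface_ip   # relay agent IP address
    relayed["dst_ip"] = dhcp_server_ip       # unicast instead of broadcast
    return relayed

discover = {"op": "DISCOVER", "giaddr": "0.0.0.0",
            "dst_ip": "255.255.255.255"}
print(relay_dhcp_discover(discover, "10.10.20.1", "10.0.0.5"))
```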
May 30 2018
9:34 AM
2 Kudos
Adding to what's already been mentioned: if your application is making use of multicast inside an L2 domain (a single VLAN) that stretches across multiple switches, and IGMP snooping is enabled on the switches (the default), you will need to be sure that an IGMP querier exists inside the VLAN. If there is no IGMP querier, multicast streams can be black-holed when they traverse multiple L2 switch hops. IGMP querier configuration documentation can be found here: https://documentation.meraki.com/zGeneral_Administration/Other_Topics/Multicast_support#IGMP_Querier

For the most efficient forwarding of the multicast streams (avoiding unnecessary forwarding), it is best to enable the querier on the switch closest to the multicast sources. NOTE: it is not necessary to enable an IGMP querier on your Meraki switches if another, non-Meraki device inside the VLAN is acting as the querier. If multiple queriers are enabled in a VLAN, only the one with the lowest source IP will be actively querying.
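The election rule in that last sentence is simple enough to show directly (illustrative IPs):

```python
import ipaddress

def active_querier(candidate_ips):
    """IGMP querier election: among multiple queriers in a VLAN, the
    one with the numerically lowest source IP keeps querying."""
    return min(candidate_ips, key=ipaddress.ip_address)

print(active_querier(["192.168.1.2", "192.168.1.1", "10.0.0.254"]))
# -> 10.0.0.254
```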
Apr 25 2018
9:36 AM
5 Kudos
Hey Piet, these warnings are actually ones you don't want to ignore, particularly if you are planning to introduce redundant links down the road. Your network must be on the MS 10.x firmware release, as this is the version where BPDU conflict logging was introduced as part of overall enhancements to anomaly detection.

If I understand correctly, your topology is something like this: you've got a switch, ROOT BRIDGE, with one port connected to a Ubiquiti AP. Then you have three Meraki switches: Workshop, Admin, and Grape Office. Each of these three switches has another Ubiquiti wireless bridge connected, and all three of them wirelessly bridge back to the AP connected to your root switch. The outcome is that, from the perspective of your root switch, all three of these Meraki switches are downstream of the single port with the wireless bridge.

This is where your problem is stemming from, and what the logging has identified. The wireless bridge solution is, for all intents and purposes, acting as a dumb L2 switch. This dumb/unmanaged switch effectively interconnects four of your switches in a star topology:

(A) ROOT BRIDGE
(B) Workshop
(C) Admin
(D) Grape Office

In addition, you have a fifth "pseudo-switch", the unmanaged L2 switch formed by the wireless bridging, which I'll refer to as:

(E) Unmanaged Switch

For the remainder of this post I will refer to these switches as A through E per the above list. Switches A through D must be running RSTP, not legacy STP, as the warnings you see logged are only produced on segments operating in RSTP mode.

Now here's the problem: when any of the switches A through D transmits a BPDU, the BPDU will be received by switch E (the unmanaged switch / wireless bridge). BPDUs are sent to a special destination of 01:80:C2:00:00:00, the well-known address used by the IEEE STP/RSTP protocols. If a switch supports STP, when it receives a BPDU with this special destination address, it does not forward the BPDU out other ports; it just uses the data from the BPDU for its own STP calculations, and then may generate its own BPDUs to send out other ports. But switch E does not support (R)STP -- it's just an unmanaged L2 switch. So when E receives a BPDU from any of the switches A through D, it will simply flood the BPDU out all its other "ports" (wireless links in this case). Example:

(1) Switch A transmits a BPDU.
(2) Switches B, C, and D all receive this same BPDU.

This is the root of the problem. It would actually be fine with legacy STP, in which a port won't become forwarding until the expiration of a long timer, which can be around 30 seconds. But with RSTP, ports become forwarding rapidly (hence the name RSTP!). The mechanism RSTP uses to rapidly transition a port into the forwarding state, bypassing the 30-second delay of the old standard, is a proposal/agreement negotiation between ports. Instead of waiting for timers to expire, a port sends out an initial BPDU with a "proposal" flag set, and the port that receives this "proposal" BPDU responds with a BPDU with an "agreement" flag set. This affirmative proposal/agreement process is the primary mechanism RSTP uses to converge faster than the legacy standard, which relies exclusively on waiting for timer expirations.

The problem: this proposal/agreement mechanism is exclusively point-to-point; it only works between explicit pairs of ports. It does not work in your scenario, where a proposal BPDU from A is received by B, C, and D, with all of them potentially sending their own agreement responses (and then these agreements get flooded to all switches again!). In this scenario, RSTP convergence becomes undefined and may be unstable and/or introduce temporary loops that come and go.

Now, with your current topology you don't actually have any physical loop, so even with the RSTP convergence instability there is no risk of an actual loop forming, as it's physically impossible. However, if you add a redundant link down the road such that there is a real physical loop that relies on spanning tree to be handled, the door will be open to more impactful problems.

Personally, I would recommend not using the single AP on the root switch to wirelessly bridge down to your three other switches. If you add two additional APs and have your three switches wirelessly link up to dedicated APs on the root bridge, you will avoid this problem scenario.
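To make the failure mode concrete, here's a toy Python model of what switch E does to the handshake (switch names follow the list above; this is an illustration, not a protocol implementation):

```python
def flood_through_unmanaged_bridge(bpdu, peers):
    """Switch E runs no (R)STP: a BPDU sent to 01:80:C2:00:00:00 is
    flooded to every other link instead of being consumed."""
    return {peer: bpdu for peer in peers}

# Switch A proposes; B, C, and D all receive the same proposal, and each
# answers with its own agreement -- which E then floods back to everyone.
proposal = {"from": "A", "flags": "proposal"}
delivered = flood_through_unmanaged_bridge(proposal, ["B", "C", "D"])
agreements = [{"from": peer, "flags": "agreement"} for peer in delivered]
print(len(agreements), "agreements arrive for a single proposal,")
print("but the RSTP handshake assumes exactly two ports on a "
      "point-to-point link")
```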