LACP issues on Meraki 425 Switch Stack

akan33
Building a reputation

Hi all,

 

We have 2 x MS425 switches running in stack mode on the latest stable version, 9.32. A Cisco switch (we have reproduced this issue with different switches) runs a port-channel with LACP against the Meraki stack, one link per stack member. When we power off one of the stack members (the one with the active ports towards the MX firewall), it also affects the other switch, bringing the whole port-channel down. Have you faced any similar issues?

 

For this test, the design was as follows:

Laptop running ping to the gateway and internet -----> Cisco 2960 with port-channel -----> Meraki stack -----> Meraki MX pair in Active/Spare mode ------> Internet.

 

Each Meraki switch has a connection to both the active and the spare firewall (crossed links), with no direct link between the MX firewalls. The 2960 port-channel has one link to each Meraki stacked switch.

 

 

14 Replies
PhilipDAth
Kind of a big deal

That is not necessarily wrong. How long before the ping starts working again?
akan33
Building a reputation

Hi Philip,

 

It is down for 1-2 minutes; then the MX gateway starts replying, but I don't get my internet connection back for another 5-6 minutes.

 

I tried forcing the port-channel (mode on) on the Cisco side, and surprisingly, when I power off the same stack member, the failover takes only 12-15 seconds. Issues come when I power it back on: sometimes the port-channel won't come back up and I need to shut / no shut the link towards the switch that remained up.
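To be clear, by "forcing" I mean a static bundle. A rough sketch of what I configured on the Cisco side (the interface names and channel-group number are just placeholders, not my exact ports):

interface range GigabitEthernet0/1 - 2
 description UPLINK-TO-MERAKI-STACK
 switchport mode trunk
 ! static bundle: no LACP negotiation at all, the port stays
 ! bundled as long as the link is physically up
 channel-group 1 mode on

With "mode on" there is no LACPDU exchange, so when the stack member dies the surviving link just keeps forwarding, which would explain the faster 12-15 second failover.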

 

It is curious, as Meraki works with LACP, yet forcing the bundle actually gave me a better result as far as I can see.

 

 

PhilipDAth
Kind of a big deal

Interestingly, the guide says to configure LACP while the ports are down. That doesn't inspire confidence.

https://documentation.meraki.com/zGeneral_Administration/Tools_and_Troubleshooting/Link_Aggregation_...
akan33
Building a reputation

Exactly. I am escalating this issue, so I was just wondering whether other people have suffered the same behaviour.

PhilipDAth
Kind of a big deal

PS: I chose not to stack the last ones I put in for a network core. I think Meraki stacking is not mature enough yet.
akan33
Building a reputation

Are you using the MS warm spare functionality? Is HA working properly there? What is a typical topology, a port-channel between them, I am guessing?

PhilipDAth
Kind of a big deal

Yes, we are using warm spare (it is doing all the routing), and yes, it is working perfectly.

 

We disable the stacking function so we can use the 40Gb/s ports. Then, yes, we form a port-channel of the two 40Gb/s ports between the switches. We also change the spanning-tree priorities so that one is the root (priority 0) and the other is the next best root (priority 4096).
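If it helps to picture it, here is roughly what the equivalent design looks like in Catalyst terms (purely illustrative; on the MS425s all of this is done in the dashboard, and the names and ports below are made up):

! Core switch 1 - intended root bridge
spanning-tree mst 0 priority 0
!
interface range FortyGigabitEthernet1/0/1 - 2
 description CROSS-LINK-TO-CORE-2
 switchport mode trunk
 ! the two 40Gb/s ports bundled into one inter-core port-channel
 channel-group 10 mode active
!
! Core switch 2 is identical except for the priority:
! spanning-tree mst 0 priority 4096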

 

rajsingh
New here

Hi,

 

Did you manage to resolve this? We've just implemented LACP between a Meraki stack and a Cisco Catalyst 9300, and we're losing connectivity for a few minutes if a switch in the stack is "lost". This is certainly not behaving as we expected it to.

akan33
Building a reputation

What STP version is your Catalyst running? If it is PVST, you need to change it to MST; otherwise you will face long recalculation times every time a link goes down there.
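You can check which mode the Catalyst is running with:

show spanning-tree summary

The first line of the output shows the running spanning-tree mode.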

redsector
Head in the Cloud

We are using PVST with none of these problems, and we are running at least version 10.6 on our networks.

 

Distri-RZ-Client#sh spanning-tree summ
Switch is in rapid-pvst mode
Root bridge for: none
EtherChannel misconfig guard is enabled
Extended system ID is enabled
Portfast Default is disabled
PortFast BPDU Guard Default is disabled
Portfast BPDU Filter Default is disabled
Loopguard Default is enabled
UplinkFast is disabled
BackboneFast is disabled
Configured Pathcost method used is long

PhilipDAth
Kind of a big deal

@redsector I would not recommend using PVST with Meraki switches, because Meraki switches don't run PVST. I've had all sorts of grief in the past with loops forming from mixing the two.

 

Personally, I recommend using MST ("spanning-tree mode mst"). It has all the benefits of rapid spanning tree while, in the default configuration, running a single instance of spanning tree for every VLAN, making it compatible with Meraki spanning tree.
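On the Catalyst side the change is small. A minimal sketch (the region name and revision number are placeholders; just keep them identical on every Catalyst in the region):

spanning-tree mode mst
!
spanning-tree mst configuration
 name CAMPUS
 revision 1
!
! With no instance-to-VLAN mappings defined, every VLAN stays in
! instance 0, which lines up with the single spanning tree instance
! that the Meraki switches run.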

DrFiber
Here to help

I have used the following config on legacy Cisco switches connected to Meraki via LACP:

 

interface GigabitEthernet0/51
description UPLINK-TO-MDF-425-16-CORE-STK
switchport trunk encapsulation dot1q
switchport mode trunk
channel-group 1 mode active
spanning-tree bpdufilter enable
!
interface GigabitEthernet0/52
description UPLINK-TO-MDF-425-16-CORE-STK
switchport trunk encapsulation dot1q
switchport mode trunk
channel-group 1 mode active
spanning-tree bpdufilter enable

 

I used "spanning-tree bpdufilter enable" to get around the PVST/RSTP incompatibility.

 

 

akan33
Building a reputation

I am just curious: is BPDU filter working for you between two switches? You are still receiving BPDUs on those links; I don't think this would be recommended.

GiacomoS
Meraki Employee

Hey all,

 

We have made a number of improvements in the latest release, including to stacking and UDLD.

 

Can I recommend that you move up to 10.30 (you should be able to schedule your upgrades if they haven't been scheduled automatically already) and check if you are seeing the same behaviour?

 

Thanks!

 

Giacomo

Please keep in mind that what I post here is my personal knowledge and opinion. Don't take anything I say as the Holy Grail, but try it and see!
Appreciate who helps and be respectful of every opinion and every solution offered.
Share the love, especially the Meraki one!