I have a Meraki MS350 switch and I want to connect a Windows server that is using the standard Windows network adapter teaming to the switch.
I went into the switch settings, selected a couple of interfaces, and chose "aggregate".
However, when I connect the server with the teamed NICs to the switch, one port goes into blocking with the message "port running LACP and LACP has disabled the port".
Is there something I am doing wrong here?
Link Aggregation/EtherChannel/Port-Channel are different names for bundling multiple Ethernet links into one logical link, and the bundle can be formed in different ways: Link Aggregation Control Protocol (LACP), Port Aggregation Protocol (PAgP), or static "mode on". The terminology is often used interchangeably to describe link bundling. For example, certain Cisco platforms (Catalyst) use the command "show etherchannel summary" while others (Nexus) use "show port-channel summary". In short, they are referencing the same feature.
Meraki MS supports LACP only, which means that both ends of the link(s) must be able to speak 802.3ad.
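For reference, on the Cisco CLI the bundling method is chosen per member port with the channel-group mode, and only the LACP form will negotiate with a Meraki aggregate. A rough sketch (interface and group numbers are placeholders, not from this thread):

! Catalyst-style example: the mode keyword picks the bundling method
interface range GigabitEthernet1/0/1 - 2
 ! LACP (802.3ad) - the only mode a Meraki MS aggregate will pair with
 channel-group 1 mode active
 ! PAgP would be "mode desirable" and static EtherChannel would be "mode on" -
 ! neither of those will come up against an MS aggregate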
Try cycling the port on the switch (or unplugging and plugging the cable back in).
If that doesn't fix it, then it is highly probable that LACP is not enabled in Windows and one of the other teaming options has been used. This looks like a good guide:
http://itprocentral.com/configuring-nic-teaming-using-lacp-in-windows-server-2016/
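In case that link goes stale: the key point is that the Windows team has to be created in LACP teaming mode rather than the default switch-independent mode. A rough PowerShell sketch, with placeholder team and adapter names (check yours with Get-NetAdapter first):

# Create the team in LACP mode (adapter names are examples only)
New-NetLbfoTeam -Name "LACP-Team" -TeamMembers "Ethernet 1","Ethernet 2" -TeamingMode LACP -LoadBalancingAlgorithm Dynamic
# Confirm the negotiated mode afterwards
Get-NetLbfoTeam | Select-Object Name, TeamingMode, LoadBalancingAlgorithm

Once the team is in LACP mode and the matching MS ports are aggregated, the "LACP has disabled the port" message should clear.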
Thanks!
I noticed I had the same problem with port trunking on my QNAP NAS.
I had to select 802.3ad on the NAS to activate LACP.
Now it works perfectly, and the fault message no longer appears in the Meraki Dashboard.
LD
Meraki Support says:
Greetings Rodger,
Thank you for contacting Cisco Meraki Support.
Unfortunately, we don't support EtherChannel to bundle physical ports. The only way you can do it is by using LACP.
Here is the documentation specifying that information.
Thank you
Hi
That link's broken, does anyone have an up-to-date one?
Ian
I have seen the same thing happen. Are "teaming" and aggregation not the same thing in the Meraki world? I couldn't get NIC teaming from my ESXi host to work, and I'm having the exact same experience. Is teaming without using LACP not supported on a Meraki switch?
"teaming" is a generic term that encompasses several technologies and methods. It's like saying you drive a car.
LACP is a specific method of teaming. It's like saying you drive a Toyota.
If you want "teaming" that does switch assisted load balancing, then you need to use LACP. If you don't care about switch assisted load balancing or switch assisted fault tolerance then you don't need to use LACP, and you don't need to do any special config on the Meraki.
>> "teaming" is a generic term that encompasses several technologies and methods. It's like saying you drive a car.
>> LACP is a specific method of teaming. It's like saying you drive a Toyota.
On the ESXi side, it refers to "NIC Teaming". ESXi's standard vSwitch does not support LACP. You have to use a Virtual Distributed Switch (VDS) to get LACP support, and that requires Enterprise Plus (big boy stuff) licensing, which I do not have. In the past, using a Dell PowerConnect 2724 (which died, so I replaced it with a Meraki MS120-24 switch), I had to define a Link Aggregation Group (LAG) for the ports that I wanted to pair with the NIC Teaming ports.
>> If you want "teaming" that does switch assisted load balancing, then you need to use LACP.
>> If you don't care about switch assisted load balancing or switch assisted fault tolerance then
>> you don't need to use LACP, and you don't need to do any special config on the Meraki.
So if I define the NIC Teaming ports on the ESXi host and then just plug the cables into any of the Meraki ports (without aggregating the ports, since that requires LACP on the host side, which I do not have), will I get the same teaming that I previously got when I defined a LAG on the old switch?
The LAG you are referring to would be Etherchannel, and Meraki does not support Etherchannel.
Typically I don't configure anything on the VMware side. VMware will automatically distribute the VMs across all the NICs in the vSwitch without you having to do anything. The VMs are load balanced automatically.
IMHO, LAG-style groups are almost pointless in a VMware environment.
@PhilipDAth wrote: The LAG you are referring to would be Etherchannel, and Meraki does not support Etherchannel.
Typically I don't configure anything on the VMware side. VMware will automatically distribute the VMs across all the NICs in the vSwitch without you having to do anything. The VMs are load balanced automatically.
IMHO, LAG-style groups are almost pointless in a VMware environment.
I had kinda come to the conclusion that Meraki didn’t support Etherchannel.
On the vSwitch, for the LAG (EtherChannel) the load balancing needed to be set to IP hash, and it's set to that now. What should it be set to now that it's plugged into the Meraki, given there is no EtherChannel support?
LACP isn't supported by ESXi unless you have Enterprise Plus. But as @PhilipDAth mentioned, you can just plug all of the physical NICs on your ESXi host into your MS switch and make sure that in ESXi your vSwitch has all of the pNICs assigned to it.
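Without a LAG on the switch side, IP hash shouldn't be used; the usual choice is the default "Route based on originating virtual port ID". If you want to check or change it from the ESXi shell, a rough sketch (the vSwitch name is a guess, substitute your own):

# Show the current load balancing / failover policy
esxcli network vswitch standard policy failover get -v vSwitch0
# Move off iphash back to the default port-ID based policy
esxcli network vswitch standard policy failover set -v vSwitch0 -l portid

The same change can also be made in the vSphere Client under the vSwitch's teaming and failover settings.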
Hi. We are having an issue connecting a new Meraki MS390 to two production Nexus 7000 switches using vPC. The Nexus 7K switches have been confirmed to be configured correctly by a Cisco employee, and we have multiple working vPCs on them already. The Meraki switch is not sending any LACP packets to the Nexus switches despite the documentation saying it should. How do you enable LACP on the MS390s?