MS410 pair in a stack
vmnic0 on stack member 1 port 13
vmnic1 on stack member 2 port 13
Currently not configured as a LAG
Port settings are trunk with the same native VLAN.
The vSwitch, with the ports teamed using default settings, seems OK except when I unplug vmnic0: I lose the server.
When I enable vmnic0 again, I do see the port go to blocking.
Any ideas or suggested best practice?
Not much to go on, but my initial guess is that your vSwitch isn't failing over properly.
What is your teaming policy and settings?
A few things to try:
- Try manually pinning vmk0 to vmnic1 and verify whether the server is still reachable. That will rule out the Meraki-side switchport config, etc. (see the esxcli sketch after this list).
- With vmk0 pinned to vmnic0, try removing the cable on vmnic1 and verify whether the UI shows the link as down. If you have VMs, you can also verify that any that were connected to vmnic1 have successfully failed over to vmnic0.
- Ensure that your NIC drivers on the server are up to date.
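If it helps, here is a rough esxcli sketch of those checks. It assumes a standard vSwitch named vSwitch0 and a management port group named "Management Network" (both names are assumptions; adjust to match your host):

```
# Show the current teaming/failover policy on the vSwitch
esxcli network vswitch standard policy failover get -v vSwitch0

# Temporarily pin the management port group to vmnic1 only
# (rules out the Meraki switchport config on the vmnic0 side)
esxcli network vswitch standard portgroup policy failover set -p "Management Network" --active-uplinks vmnic1

# Check physical link state, plus driver/firmware versions for the NICs
esxcli network nic list
esxcli network nic get -n vmnic0
```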
The Meraki stack/switch needs to be a LAG. It is unclear on the VMware side whether it just needs the two ports plugged in, or whether it forms a LAG based on the load-balancing algorithm.
Only configure a Meraki LAG if you are using the 802.1AX/802.3ad LACP protocol on your NIC team.
If you are just using regular load balancing and not running the protocol, you will do more harm than good; in that case keep your switch ports separate. Also make sure you are not load balancing based on destination IP or L4 info, because that will cause severe MAC flapping on your switches.
If you configure the VMware side as a LAG, you will need to configure the Meraki side as a LAG too, with a load-balancing mechanism of src/dst IP hash.
If you simply have the uplinks in the vSwitch, the ports need to be regular ports on the Meraki side, and the load balancing must be either route based on originating virtual port ID or source MAC hash.
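As a rough sketch, those two options look like this on a standard vSwitch (vSwitch0 is an assumed name; note that LACP itself requires a distributed switch, so a standard vSwitch with IP hash pairs with a static LAG on the physical side):

```
# Option 1: Meraki ports configured as a LAG -> VMware side must use IP hash
esxcli network vswitch standard policy failover set -v vSwitch0 --load-balancing iphash

# Option 2: ordinary (non-LAG) Meraki ports -> route based on originating
# virtual port ID (the default) or source MAC hash
esxcli network vswitch standard policy failover set -v vSwitch0 --load-balancing portid
# or: --load-balancing mac
```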
Personally, I wouldn't bother with teaming in this scenario. VMware will automatically spread VMs across the available NICs and move them if a NIC fails.
So I would just put the NICs into a single vSwitch in VMware and configure the switch ports as ordinary trunk ports.
Simple. Reliable.
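For reference, that plain setup might look something like the sketch below on a standard vSwitch (vSwitch0 is an assumed name; the vmnic names come from your post):

```
# Add both physical NICs as uplinks to the same vSwitch
esxcli network vswitch standard uplink add -v vSwitch0 -u vmnic0
esxcli network vswitch standard uplink add -v vSwitch0 -u vmnic1

# Leave teaming at the defaults: route based on originating virtual port ID,
# link-status failure detection, both uplinks active
esxcli network vswitch standard policy failover set -v vSwitch0 --load-balancing portid --failure-detection link --active-uplinks vmnic0,vmnic1
```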
Yeah, then I need to figure out why basic vanilla NIC teaming on two different switch ports will not fail over.
My NIC teaming works if I physically unplug the cable from vmnic0 or vmnic1.
Previously I was disabling the MS410 switch port.
I am not sure why disabling the Meraki switch port does not behave as if the link were lost.
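One way to narrow this down is to check what the host actually sees while the MS410 port is disabled: if ESXi still reports the link as Up, the vSwitch has no reason to fail over, since link-status detection is the default. A minimal sketch (vmnic names from your post):

```
# Watch link state from ESXi while the Meraki port is disabled
esxcli network nic list

# Detailed state for the suspect NIC, including driver/firmware versions
esxcli network nic get -n vmnic0

# For comparison, administratively downing the NIC from the host side
# does drop the link and should trigger failover (re-enable afterwards)
esxcli network nic down -n vmnic0
esxcli network nic up -n vmnic0
```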