We have been using MS-410-32 switches in a stack configuration since December 2025, with a direct stacking connection (port 37 to port 37). Since the provisioning stage we have repeatedly hit a critical issue where customers are unable to access services and we have to reboot the devices manually. Reviewing the logs, we consistently find the following messages:

Switch 1: port: 37, old: 40Gfdx, new: down
Switch 2: port: 37, old: 40Gfdx, new: down

Troubleshooting steps performed:

1. Firmware upgrade
   Upgraded to firmware MS 17.2.1 (the latest stable release at the time).

2. Meraki support case
   Opened a support case via the Meraki Dashboard.

3. Stack cable testing
   Used a different stack cable from another site.
   Reconnected using a cross-stack configuration (port 37 to port 38, port 38 to port 37), as recommended by Meraki Support.
   The same port-down log appeared immediately after the stack cable was connected.

4. Cross-device testing
   Tested the same stack cable with MS-210 switches – no issues or port-down logs were observed.
   Reconnected the cable to the MS-410 stack for further monitoring.

5. Extended monitoring
   After 10 days the same symptoms reappeared: the stack ports went down and both MS-410 switches rebooted unexpectedly.

Switch 1:
Port 20 STP role change disabled → designated
Port 20 status change down → 1Gfdx
Port 20 STP role change designated → disabled
Port 20 status change 1Gfdx → down (repeats several times)
Port 19 STP role change disabled → designated
Port 19 status change down → 1Gfdx
Port 19 STP role change designated → disabled
Port 19 status change 1Gfdx → down
Port 37 status change down → 40Gfdx
Port 37 status change 40Gfdx → down
Port 38 status change 40Gfdx → down

*** The logs show STP role/status changes across all ports, indicating a full topology change (not isolated to ports 19 and 20).

Switch 2:
port: 37, old: 40Gfdx, new: down

Does anyone have any ideas on how to resolve this issue? We have been experiencing this problem since January 2025 and have not been able to fix it. It is causing a significant impact on our operations.
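In case it helps anyone correlate the timing of the flaps, below is a rough Python sketch of how we could poll the Dashboard API event log for these port-down events instead of waiting for customers to report an outage. This is only an illustration, not something Meraki Support provided: the network ID is a placeholder, and the text match on "40Gfdx" / ports 37-38 is a guess at how these entries appear in the API event log and may need adjusting.

import os
import time
import requests

# Assumptions: a read-only Dashboard API key in MERAKI_API_KEY and the ID of the
# network containing the MS-410 stack. Both are placeholders here.
API_KEY = os.environ["MERAKI_API_KEY"]
NETWORK_ID = "L_XXXXXXXX"   # placeholder network ID
BASE_URL = "https://api.meraki.com/api/v1"
HEADERS = {"X-Cisco-Meraki-API-Key": API_KEY}


def poll_switch_events():
    """Fetch recent switch events and print anything that looks like a stack-port flap."""
    resp = requests.get(
        f"{BASE_URL}/networks/{NETWORK_ID}/events",
        headers=HEADERS,
        params={"productType": "switch", "perPage": 100},
        timeout=30,
    )
    resp.raise_for_status()
    for event in resp.json().get("events", []):
        # Combine the fields we might need to match on; the exact field contents
        # for port status events are an assumption.
        text = " ".join(
            str(event.get(key, "")) for key in ("type", "description", "eventData")
        )
        if "40Gfdx" in text or "port 37" in text.lower() or "port 38" in text.lower():
            print(event.get("occurredAt"), event.get("deviceSerial"), text)


if __name__ == "__main__":
    while True:
        poll_switch_events()
        time.sleep(300)   # poll every 5 minutes

Running this alongside the stack at least gives us timestamps for each flap, which we can hand to Meraki Support together with the dashboard event log.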