hard to imagine why there isn't an alert for high CRC and packet loss

CraigCummings
Getting noticed

May 4 11:05 – May 4 12:28
Very high proportion of CRC errors
Very high rate of packet fragments

 

Surely this condition is worthy of an alert.  I just happened to be in the dashboard and noticed a switch port was red.  Or am I meant to log into the dashboard and visit every switch in every network in every org on a regular basis?

 

 

Adam
Kind of a big deal

I've thought the same.  Every time I've encountered one, it's been by stumbling into it. 

Adam R MS | CISSP, CISM, VCP, MCITP, CCNP, ITILv3, CMNO
If this was helpful click the Kudo button below
If my reply solved your issue, please mark it as a solution.
PhilipDAth
Kind of a big deal

The problem with this style of error is it comes and goes (usually based on load on the port).

 

You would get a lot of alert emails if there was such an option.

It's an error condition that most likely coincides with diminished network performance.   It indicates some sort of problem that needs to be fixed.  In fact, Cisco obviously agrees; that's why they already alert on the condition (in the dashboard only).

 

High CRC errors could be caused by excessive network load, which is also a problem that would need to be addressed.  In my case, it was just a bad cable connection (loose patch cable connection; reseating fixed it).

 

Email alerts could be easily tamed with a simple threshold.  In fact, Cisco has already implemented the threshold.  That's why my error showed "very high" CRC errors and colored the port red.  They also have a "high" range (orange) and a "medium" range (yellow) (as I recall, the error is gone now after reseating the cable).

 

Simple solution: send an email alert when there are "high" or "very high" CRC errors for more than X mins.  I'm basically asking them to add one additional action.  When you make the port turn orange or red, go ahead and send me an email so that I might actually notice it and take action.
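As a rough sketch of how little logic that would take (severity bands plus a hold timer before emailing), here's some illustrative Python. The thresholds, band names, and hold time here are my own assumptions for the sketch, not Meraki's actual values:

```python
# Illustrative severity bands for CRC error rate (errors per second).
# These cutoffs are assumptions, not Meraki's real thresholds.
THRESHOLDS = [(100.0, "very high"), (10.0, "high"), (1.0, "medium")]

def classify(crc_rate):
    """Map a CRC error rate to a severity label."""
    for limit, label in THRESHOLDS:
        if crc_rate >= limit:
            return label
    return "ok"

class PortAlerter:
    """Fire one alert when a port stays 'high'/'very high' for hold_secs."""

    def __init__(self, hold_secs=600):
        self.hold_secs = hold_secs  # the "X mins" from the proposal
        self.bad_since = None       # when the port first went bad
        self.alerted = False        # suppress duplicate emails

    def sample(self, crc_rate, now):
        """Feed one poll; return an alert string or None."""
        severity = classify(crc_rate)
        if severity in ("high", "very high"):
            if self.bad_since is None:
                self.bad_since = now
            elif not self.alerted and now - self.bad_since >= self.hold_secs:
                self.alerted = True
                return f"ALERT: sustained {severity} CRC errors"
        else:
            # Port recovered; reset the timer and re-arm the alert.
            self.bad_since = None
            self.alerted = False
        return None
```

The hold timer is what keeps the inbox quiet when errors come and go with load, which was the concern raised above: a brief spike never reaches the hold window, so it never emails.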

 

 

Over a year goes by and Cisco can't even be bothered to respond, much less implement this incredibly useful, very simple request.  The arrogance is unreal. 

Hello,

 

Just wanted to step in and quickly introduce myself as a member of the MS Product Management team. We hear you; thank you for the candid feedback! Absolutely understand the need for email alerts for issues such as CRC errors that can cause network performance issues.

 

Serviceability and logging improvements are a major focus area for us. Over the last year, you might have noticed several improvements in this area. Some examples below:

- Native VLAN mismatch detection

- STP Anomaly detection

- UDLD alerts

- Critical temperature alerts

 

While I don't have an ETA to share with you today, I do want to clarify that CRC error alerting is on our list. We can connect over DM if you have any questions I could help answer. 

It's hard for me to believe that it's a "major focus area" for you when you've only managed to introduce 4 minor features over the span of a year....10 years into the life of the product. 

 

I still can't get alerts for high CRC and packet loss...or high latency...a year after I first pointed out this glaring oversight/design flaw.   My users still have to call me and tell me about these issues. 

 

Then I have to explain to them why the super expensive fancy security appliance I advised them to purchase/sold them is so expensive if it can't send a simple alert on something so basic. 

 

If you can detect Native VLAN mismatch and STP Anomalies...why can't we get alerts?  If you can detect high CRC and packet loss and high latency...why can't I get an alert?  I'm sure it's not because you can't do it or because it's too hard.  It's either A> a very low priority or B> something you've decided not to do for some reason.  

 

Oh yeah...I still can't see any type of log entry AT ALL for Layer 7 Deny.  That makes troubleshooting connection issues caused by layer 7 rules real easy.  Brought this up years ago and got a call from someone on the UI team who seemed to have no concept of the difference between a Layer 3 and a Layer 7 firewall. 

 

Meraki is sooooo nice, until you run into these bone-headed, painfully obvious, shortcomings, and you're like, WTF? 

I agree, and there are many things they "don't care to implement".

 

When my WAN1 goes down and WAN2 takes over, I'd like to know via email that WAN1 is down!

 

Lower fail-over times between WAN circuits on MX appliances!!!!! Maybe a slider to adjust settings would be nice. Maybe a quick way would be to have a section in SD-WAN that says "Fail-over monitoring" where I can specify what options I want to use to monitor my WAN link availability: 1. ping, 2. HTTP GET, 3. DNS. (If ping fails, why do I care to try HTTP or DNS??? O_o) Maybe I fix it with flow preferences in the SD-WAN tab: basically set WAN2 (cell) to be the primary active uplink, then set a flow preference saying ANY to ANY that allows all traffic to flow over the WAN1 link until it fails, causing WAN2 to kick in. Maybe I fix it with static routes so that I can route through multiple WAN links. My logic: set WAN2 (cell) as the primary uplink, but have a static route that says any subnet is to prefer my broadband ISP gateway while it responds to ping; if it stops responding, that static route is removed and WAN2 takes over.

As a customer of Meraki, I would like to not have to tell my customers that they're unable to operate because of a locked-down Meraki setting on their router that I am unable to adjust, or to get no recognition of this issue. If Cisco, your parent company, can have robust IP SLA monitoring options, why can't Meraki? Does Meraki not have the same synergies as Cisco?

I PAY TOO MUCH FOR THE LICENSES TO WAIT 2-3 MINUTES FOR THE WAN TO FAIL OVER TO CELL. If it were the late 90's or maybe early 2000's, a 3-minute fail-over would be fine, but for the cost of these licenses in 2019 it's simply unacceptable.
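The probe-list idea above (try a cheap ping first, fall back to HTTP or DNS, and only fail over when everything on the preferred link fails) can be sketched in a few lines. The function names and probe ordering here are illustrative assumptions, not anything Meraki actually exposes:

```python
def link_up(probes):
    """Return True as soon as any probe succeeds; False if all fail.

    `probes` is an ordered list of zero-argument callables (e.g. ping,
    HTTP GET, DNS lookup), the same options proposed above. Trying them
    in order means a cheap ping short-circuits the costlier checks.
    """
    for probe in probes:
        try:
            if probe():
                return True
        except Exception:
            pass  # a probe that raises counts as a failure; try the next one
    return False

def pick_wan(wan1_probes, wan2_probes):
    """Prefer WAN1; fail over to WAN2 only when all WAN1 probes fail."""
    if link_up(wan1_probes):
        return "WAN1"
    if link_up(wan2_probes):
        return "WAN2"
    return "DOWN"
```

Running this on a short interval (with a hold-down count before declaring the link dead) is essentially the configurable fail-over monitoring being asked for.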

F I X T H I S I S S U E ! ! ! !

 

Bossnine
Building a reputation

Just want to bring this up again since I recently ran into the exact same issue.  Just tooling around in a particular network, I noticed a 'very high CRC error' on a switch port that turned out to be a faulty optic.  When I asked the users, they indicated they had had some intermittent troubles; why they didn't tell anyone is irrelevant, but it had been going on for a while unbeknownst to me.

Same issue - CRC errors come and go. Has Meraki set up any alert on CRC errors yet?

odmv
Conversationalist

We also REALLY need this feature. We keep finding ports with high CRC errors and our customers (the business) are asking us why there are no alerts for it. The only way we can find them right now is through SNMP polling which is very limited in itself with Meraki, so we typically don't do it.
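For anyone stuck with the SNMP polling workaround in the meantime: the standard IF-MIB counter ifInErrors (OID 1.3.6.1.2.1.2.2.1.14) includes CRC errors on most switch platforms, so a per-second rate can be derived from two successive polls. A minimal wrap-safe sketch (the actual SNMP fetch, e.g. via snmpget, is left out):

```python
COUNTER32_MAX = 2**32  # ifInErrors is a Counter32 and wraps at 2^32

def error_rate(prev_count, curr_count, interval_secs):
    """Errors per second between two ifInErrors samples, wrap-safe."""
    delta = curr_count - prev_count
    if delta < 0:
        # Counter wrapped between polls; add back the 32-bit range.
        delta += COUNTER32_MAX
    return delta / interval_secs
```

Feed the resulting rate into whatever thresholding your monitoring system supports; it's a rough stand-in for the dashboard's own medium/high/very-high bands.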

 

joemailey
Comes here often

I run in to this issue every so often, would really like to be alerted when it happens.

 

Any update on this?
