Alert noise



The canned alert “Network Congestion” has a lot of potential for false positives. It fires when the RTO mean over 30 s exceeds 190 % of the trendline. The problem is that in a healthy state there ideally shouldn’t be ANY RTOs. As a result, devices that normally perform well end up with a trendline of 0, so every RTO that occurs once in a blue moon meets the criteria and fires an alert. In other words, the better the network is functioning, the more network congestion alerts you can expect to get. Meanwhile, a server with a steady stream of RTOs registers as perfectly fine.
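To make the failure mode concrete, here is a minimal sketch of the firing condition as described above (the 190 % multiplier comes from the alert definition; the function name and sample values are just illustrative):

```python
def trend_alert_fires(rto_mean_30s: float, trendline: float) -> bool:
    """Fire when the 30 s RTO mean exceeds 190 % of the trendline."""
    return rto_mean_30s > 1.9 * trendline

# Healthy device: trendline is 0, so 190 % of it is still 0,
# and a single stray RTO fires the alert.
print(trend_alert_fires(1.0, 0.0))    # True
# Chronically bad server: a steady stream of RTOs keeps the
# trendline high, so the same condition never fires.
print(trend_alert_fires(55.0, 50.0))  # False
```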

So, one could set a static threshold instead of a trend-based alert, but I would prefer to use both. For the trend alert to be usable, though, it would need some sort of noise floor so that it ignores single occurrences instead of panicking every time things aren’t perfect.

Any suggestions on how to eliminate that noise?


Trend alerts have ANY and ALL options that let you combine conditions. You may be able to accomplish this with that feature.
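As a sketch of what an ALL-style combination could look like, the trend condition can be paired with a static noise floor so both must hold before the alert fires (the floor value and names here are hypothetical, not from the product):

```python
def congestion_alert_fires(rto_mean_30s: float, trendline: float,
                           noise_floor: float = 5.0) -> bool:
    """ALL-style combination: the alert fires only when the trend
    condition AND a static noise floor are both exceeded.

    noise_floor is a hypothetical static threshold; tune it to the
    level of RTOs you are willing to ignore.
    """
    trend_condition = rto_mean_30s > 1.9 * trendline
    floor_condition = rto_mean_30s > noise_floor
    return trend_condition and floor_condition

# A single stray RTO on a quiet device (trendline 0) no longer fires:
print(congestion_alert_fires(1.0, 0.0))   # False
# A genuine spike clears both the trend and the floor:
print(congestion_alert_fires(12.0, 0.0))  # True
```

Combining the conditions with ALL keeps the trend sensitivity for devices with real baselines while the floor suppresses the trendline-of-zero false positives.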