Degraded Sources Detection (EPS-Based Anomaly Monitoring)
The platform includes an advanced Degraded Sources Detection capability designed to identify data sources that continue to report events but experience a significant reduction in log volume compared to their normal behavior.
Unlike silenced source detection—which focuses on complete loss of logs—this functionality detects partial visibility loss, a common symptom of misconfigurations, infrastructure issues, or adversarial interference with logging mechanisms.
Behavioral Baseline Definition
For each data source, the platform continuously builds a real-time Events Per Second (EPS) baseline representing the expected volume of events under normal conditions.
The baseline is calculated using historical data from the previous several weeks, taking into account:
Time of day
Day of the week
This temporal awareness is critical, as expected log volumes can vary significantly (e.g., Monday morning versus Sunday night).
The baseline is continuously updated and refined as new data becomes available.
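As a rough sketch of how such a time-aware baseline could be built (the platform's internal implementation is not published, and all names here are hypothetical), historical EPS samples can be bucketed by day of week and hour of day, then averaged per bucket:

```python
from collections import defaultdict
from datetime import datetime

def build_eps_baseline(samples):
    """Average EPS per (weekday, hour) bucket.

    samples: iterable of (timestamp, eps) pairs drawn from the
    last several weeks of ingestion history.
    """
    totals = defaultdict(lambda: [0.0, 0])  # (weekday, hour) -> [sum, count]
    for ts, eps in samples:
        key = (ts.weekday(), ts.hour)
        totals[key][0] += eps
        totals[key][1] += 1
    return {key: s / n for key, (s, n) in totals.items()}

def expected_eps(baseline, ts):
    """Expected EPS for the bucket matching the given timestamp."""
    return baseline.get((ts.weekday(), ts.hour))
```

This captures the Monday-morning-versus-Sunday-night variation mentioned above: each (weekday, hour) pair gets its own expected rate.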
Degradation Detection Logic
A source is considered degraded when its current EPS falls significantly below the expected baseline for the same time window.
The platform evaluates degradation in real time by comparing:
Current observed EPS
Expected EPS derived from the baseline
If the deviation exceeds predefined thresholds, a Degraded Source alert is generated.
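In its simplest form, the comparison above amounts to a ratio check against the time-matched baseline (a minimal sketch; the function name and guard behavior are assumptions, not the platform's actual code):

```python
def is_degraded(current_eps, expected_eps, threshold_ratio):
    """Flag degradation when the observed EPS for the current window
    falls below threshold_ratio * expected_eps from the baseline."""
    if expected_eps <= 0:
        return False  # no usable baseline for this time window
    return current_eps < threshold_ratio * expected_eps
```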
Severity-Based Alert Thresholds
Degradation thresholds are dynamically applied based on the criticality of the source, as defined in the CMDB:
High: EPS drops below 70% of baseline
Medium: EPS drops below 60% of baseline
Low: EPS drops below 50% of baseline
This approach ensures that more critical assets are monitored with stricter sensitivity.
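The criticality-dependent thresholds can be sketched as a simple lookup keyed on the CMDB criticality (the mapping mirrors the documented ratios; the dictionary and function names are hypothetical):

```python
# Mirrors the documented ratios: stricter sensitivity for more
# critical assets (70% / 60% / 50% of baseline).
DEGRADATION_THRESHOLDS = {"high": 0.70, "medium": 0.60, "low": 0.50}

def alert_on_degradation(current_eps, baseline_eps, criticality):
    """Return True when the observed EPS drops below the ratio
    tied to the source's CMDB criticality."""
    ratio = DEGRADATION_THRESHOLDS[criticality.lower()]
    return current_eps < ratio * baseline_eps
```

Note that a High-criticality source alerts earlier: the same 35% drop (65 EPS against a baseline of 100) trips the High threshold but not the Medium one.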
Low-Volume Source Handling
Sources with very low event volume are excluded from standard percentage-based degradation detection, since small absolute fluctuations would otherwise produce frequent false positives.
For these sources, the platform applies a more robust statistical method, ensuring that alerts are only generated when a meaningful deviation occurs.
Baseline Smoothing (EWMA)
To reduce noise and short-term fluctuations, the platform applies Exponentially Weighted Moving Average (EWMA) smoothing to the baseline:
Default smoothing factor: α = 0.3
Recent data points are weighted more heavily than older observations
This allows the baseline to adapt gradually while remaining stable against transient spikes or drops.
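The EWMA update itself is a one-line recurrence; a minimal sketch with the documented default of α = 0.3 (the function name is illustrative):

```python
def ewma_update(previous, observation, alpha=0.3):
    """Exponentially weighted moving average: the newest sample gets
    weight alpha, the accumulated history gets (1 - alpha)."""
    if previous is None:
        return observation  # first observation seeds the baseline
    return alpha * observation + (1 - alpha) * previous
```

With α = 0.3, a single transient drop from 100 EPS to 50 EPS only moves the baseline to 85, which is why short-lived fluctuations do not destabilize it.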
Poisson-Based Detection for Very Low EPS Sources
When a source’s baseline EPS falls below a minimum threshold for standard degradation analysis, the platform switches to a Poisson-based detection model.
In this mode:
The historical average event rate is estimated as: λ (lambda) = mean historical events per minute
The expected distribution of events is modeled using a Poisson process
A degradation alert is triggered if the observed event count falls below the 5th percentile (P5) of the Poisson distribution for the evaluation window
This statistical approach is particularly effective for low-frequency event sources and significantly reduces false positives.
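The P5 check described above can be sketched with a standard-library Poisson CDF walk: find the smallest count k whose cumulative probability reaches 5%, then alert when the observed count falls below it (function names and the per-window λ scaling are assumptions for illustration):

```python
import math

def poisson_quantile(lam, p=0.05):
    """Smallest k such that P(X <= k) >= p for X ~ Poisson(lam)."""
    pmf = math.exp(-lam)  # P(X = 0)
    cdf = pmf
    k = 0
    while cdf < p:
        k += 1
        pmf *= lam / k    # recurrence: pmf(k) = pmf(k-1) * lam / k
        cdf += pmf
    return k

def is_degraded_low_volume(observed_count, lam_per_minute, window_minutes, p=0.05):
    """Alert when the observed event count for the evaluation window
    falls below the P5 of Poisson(lam * window)."""
    threshold = poisson_quantile(lam_per_minute * window_minutes, p)
    return observed_count < threshold
```

For example, a source averaging 2 events per minute over a 5-minute window has λ = 10, whose 5th percentile is 5 events; seeing only 3 events would trigger an alert, while 8 would not.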
Alert Accuracy and Operational Benefits
By combining:
Time-aware behavioral baselines
Severity-based thresholds
EWMA smoothing
Poisson-based modeling for low-volume sources
the platform achieves high detection accuracy while maintaining a low false-positive rate.
This capability allows SOC teams to detect partial loss of telemetry early, maintain continuous visibility, and quickly identify issues that may otherwise go unnoticed.
Relationship with Silenced Sources Detection
Degraded source detection complements the Silenced Sources Detection Engine:
Silenced source detection identifies complete log loss
Degraded source detection identifies abnormal reductions in log volume
Together, they provide comprehensive coverage of log ingestion health and visibility across the entire platform.