
Anomaly Detection for Industrial Sensors

Automatically detect spikes, drops, outliers, and flatlines in IIoT sensor data. Relay Analytics uses statistical methods to catch problems before they become costly.

Your Operators Should Not Be Staring at Charts

In most industrial facilities, detecting abnormal sensor behavior still depends on someone noticing it. An operator glances at a dashboard between tasks. A shift supervisor scrolls through trend lines at the end of the day. A quality engineer pulls a report once a week and spots a drift that started five days ago.

By the time a human catches an anomaly through manual observation, the damage is often already done -- product has shipped out of spec, equipment has degraded further, or a minor issue has escalated into a line shutdown.

The challenge is not that the data was unavailable. The data was streaming the entire time. The challenge is that no one can watch hundreds of sensors continuously. There are too many signals, too much data, and too few hours in the day.

Relay Analytics solves this by applying statistical anomaly detection directly to your sensor streams. The system watches every selected sensor, every data point, and flags exactly when and where something deviates from normal. No manual chart review required.

What Anomaly Detection Does

Relay Analytics scans your sensor data and automatically identifies four types of abnormal behavior:

Spikes -- a sudden, sharp increase above normal operating range. A pressure sensor that normally reads 4.0 bar suddenly jumps to 6.8 bar.

Drops -- a sudden decrease below normal range. A temperature sensor that holds steady at 72 degrees falls to 51 degrees within seconds.

Outliers -- values that fall outside the expected statistical distribution for that sensor. Not necessarily a sudden change, but a reading that does not fit the historical pattern.

Flatlines -- a sensor that stops changing entirely. Real sensor data always has small natural variations. When a sensor reports the exact same value for an extended period, it usually means the sensor has failed, the connection is frozen, or the signal is stuck.

Each detected anomaly is classified by severity -- critical, warning, or info -- based on how far the reading deviates from normal. A temperature reading six standard deviations from the mean is flagged as critical. A reading three standard deviations away is a warning. Mild deviations are logged as informational.

Every anomaly also carries a confidence score from 0 to 1. A confidence of 0.92 means the system is highly certain this is a genuine anomaly, not just noise. Confidence scores help your team prioritize: investigate the high-confidence critical alerts first, review the lower-confidence warnings when time allows.
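The severity tiers can be sketched in a few lines. The function name and structure below are illustrative, not Relay's implementation; the six- and three-standard-deviation cutoffs follow the examples in this section.

```python
def classify_severity(z_score: float) -> str:
    """Map a deviation, measured in standard deviations, to a tier.

    Cutoffs follow the examples above: six or more standard deviations
    is critical, three or more is a warning, and milder deviations are
    logged as informational.
    """
    z = abs(z_score)
    if z >= 6.0:
        return "critical"
    if z >= 3.0:
        return "warning"
    return "info"

print(classify_severity(6.4))  # critical
print(classify_severity(3.5))  # warning
print(classify_severity(1.2))  # info
```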

How It Works

Relay Analytics uses three complementary statistical methods that together cover the full spectrum of anomaly types.

Z-Score analysis measures how many standard deviations a value sits from the sensor's mean. It excels at catching sudden spikes and drops -- the kind of events that happen fast and would be easy to miss between dashboard checks.
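A minimal Z-Score detector looks like this. It is a sketch of the general technique, not Relay's code; in production the mean and standard deviation would typically come from a trailing baseline window rather than from the window under test, so the anomaly does not dilute its own statistics.

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Return the indices of values that sit more than `threshold`
    standard deviations from the series mean -- sharp spikes and drops."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # a constant series has no z-score outliers
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

# A pressure sensor holding near 4.0 bar, with one spike to 6.8 bar:
readings = [4.0, 4.1, 3.9, 4.0, 4.1, 3.9, 4.0,
            4.1, 3.9, 4.0, 6.8, 4.0, 4.1]
print(zscore_anomalies(readings))  # [10]
```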

Interquartile Range (IQR) analysis uses the middle 50% of the data distribution to define what is normal. Unlike Z-Score, IQR is robust against skewed data and existing outliers in the dataset. It catches values that are statistically unusual even when the data does not follow a neat bell curve.
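The IQR method can be sketched with the standard 1.5 x IQR fences. Again, this is an illustration of the technique rather than Relay's implementation:

```python
import statistics

def iqr_anomalies(values, k=1.5):
    """Return the indices of values outside [Q1 - k*IQR, Q3 + k*IQR].

    Quartile-based fences are robust to skew and to the outliers
    themselves, unlike mean/stdev-based z-scores."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [i for i, v in enumerate(values) if v < lo or v > hi]

# A temperature sensor steady near 72 degrees, with one drop to 51:
readings = [72, 71, 73, 72, 72, 71, 73, 72, 51, 72, 73]
print(iqr_anomalies(readings))  # [8]
```

Note that the 51-degree reading barely moves the quartiles, so the fences stay tight around the normal operating band -- the robustness the text describes.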

Flatline detection monitors for consecutive identical readings. A vibration sensor reading exactly 0.000 for 200 consecutive data points is almost certainly not measuring real-world vibration. This method catches sensor failures, frozen connections, and hardware faults that the other two methods would miss entirely.
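Flatline detection reduces to run-length scanning. This sketch (hypothetical function name and defaults) returns each stuck span as a start index and length:

```python
def flatline_anomalies(values, min_run=50, tolerance=0.0):
    """Return (start, length) spans where the signal repeats the same
    value for at least `min_run` consecutive samples."""
    spans, start = [], 0
    for i in range(1, len(values) + 1):
        if i == len(values) or abs(values[i] - values[start]) > tolerance:
            if i - start >= min_run:
                spans.append((start, i - start))
            start = i
    return spans

# A vibration sensor stuck at exactly 0.000 for 200 samples:
signal = [0.12, 0.09, 0.11] + [0.0] * 200 + [0.10, 0.13]
print(flatline_anomalies(signal))  # [(3, 200)]
```

A small `tolerance` lets you treat near-identical readings as stuck too, which helps with signals that quantize to a handful of repeated values when a sensor degrades.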

You control the sensitivity of the detection. High sensitivity catches subtle deviations -- useful for precision processes like pharmaceutical dosing or high-accuracy weighing. Low sensitivity filters out noise in rougher environments -- outdoor installations, vibrating machinery, or processes with naturally wide operating ranges. Medium sensitivity is the default and works well for the majority of industrial sensors.
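One common way to expose a sensitivity control is to map each preset to stricter or looser statistical thresholds. The preset names match this section; the specific numbers below are hypothetical, not Relay's actual values:

```python
# Hypothetical mapping of sensitivity presets to detection thresholds.
# Higher sensitivity means lower thresholds, so smaller deviations
# are flagged; lower sensitivity raises them to filter noise.
SENSITIVITY_PRESETS = {
    "high":   {"z_threshold": 2.0, "iqr_k": 1.0},
    "medium": {"z_threshold": 3.0, "iqr_k": 1.5},  # default
    "low":    {"z_threshold": 4.0, "iqr_k": 3.0},
}

print(SENSITIVITY_PRESETS["medium"]["z_threshold"])  # 3.0
```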

The system also supports configurable aggregation levels. Analyze at 5-second resolution when investigating a specific incident, or use 60-second aggregation when scanning days or weeks of data for patterns.
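Aggregation is essentially time-bucketed averaging. A minimal sketch, assuming timestamped samples as `(seconds, value)` pairs:

```python
def aggregate(samples, bucket_seconds):
    """Average (timestamp, value) samples into fixed-width time buckets.

    Coarser buckets smooth noise when scanning days of data; finer
    buckets preserve detail when investigating a specific incident."""
    buckets = {}
    for t, v in samples:
        key = int(t // bucket_seconds) * bucket_seconds
        buckets.setdefault(key, []).append(v)
    return [(k, sum(vs) / len(vs)) for k, vs in sorted(buckets.items())]

samples = [(0, 4.0), (2, 4.2), (5, 4.4), (7, 4.6)]
print(aggregate(samples, 5))  # one averaged point per 5-second bucket
```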

Real-World Example: Catching Weight Drift Before a Recall

A food processing plant runs a filling station that dispenses product into containers. The target fill weight is 500 grams with a tolerance of plus or minus 5 grams. Regulatory compliance requires every container to be within spec.

Over a Tuesday afternoon shift, the filling station develops a slow calibration drift. The fill weight gradually creeps upward -- 501g, then 502g, then 503g over the course of three hours. On the real-time dashboard, the trend line looks essentially flat. The numbers are still within tolerance. No operator alarm fires.

Relay Analytics runs anomaly detection on the filling station's weight sensor data that evening. The Z-Score analysis flags a sustained upward trend starting at 13:47, classified as a warning-level anomaly with a confidence score of 0.87. The system identifies it as a statistical drift -- the mean of the recent readings is shifting away from the historical baseline.
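The drift check in this scenario boils down to comparing the recent window's mean against historical baseline statistics. A sketch with illustrative fill-weight numbers (not the plant's actual data):

```python
import statistics

def drift_zscore(baseline, recent):
    """Z-score of the recent window's mean against baseline statistics.

    A sustained shift in the mean stands out even when every
    individual reading is still within tolerance."""
    mu = statistics.fmean(baseline)
    sigma = statistics.pstdev(baseline) or 1e-9  # guard constant baseline
    return (statistics.fmean(recent) - mu) / sigma

baseline = [500.0, 499.8, 500.1, 499.9, 500.2, 500.0]  # grams, historical
recent = [501.0, 501.8, 502.4, 503.1]                  # grams, this shift
print(drift_zscore(baseline, recent))  # well above a 3-sigma warning cutoff
```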

The quality engineer reviews the alert the next morning, confirms the drift, and schedules a recalibration before the Wednesday production run. The total product affected is limited to a single afternoon shift rather than multiple days of production. No recall is necessary. The cost of the intervention is a 30-minute calibration. The cost of missing it could have been a batch recall, customer complaints, and regulatory scrutiny.

That is the value of automated anomaly detection: the system catches a fraction-of-a-percent drift that a human dashboard review would not notice until the process is out of tolerance.

Key Benefits

  • Continuous monitoring without continuous attention. Every data point is analyzed. No anomaly slips through because someone was on a break or handling another task.

  • Multiple detection methods for comprehensive coverage. Z-Score catches sudden events. IQR catches statistical outliers. Flatline detection catches sensor failures. Together, they cover anomaly types that any single method would miss.

  • Severity and confidence scoring for prioritization. Not every anomaly is equally urgent. The system ranks them so your team addresses critical issues first and reviews informational flags when convenient.

  • Tunable sensitivity for your specific environment. A pharmaceutical clean room and a concrete mixing plant have different definitions of "unusual." Sensitivity controls let you calibrate the detection to match your process requirements.

  • Faster root cause investigation. When something goes wrong, anomaly detection pinpoints exactly when and where the deviation started. Instead of scrolling through hours of chart data, your engineers go directly to the moment that matters.

  • Early warning for gradual degradation. Drift, slow calibration shifts, and progressive sensor degradation are invisible on real-time dashboards. Statistical analysis catches trends that develop over hours or days.

  • Reduced false positives with statistical rigor. The system uses proven statistical methods, not arbitrary thresholds. Confidence scores tell you how certain each detection is, so your team is not overwhelmed by false alarms.

Start Monitoring with Relay Analytics

Anomaly detection is available on every Relay Analytics plan. Connect your sensors, select the ones you want to monitor, and run your first analysis in minutes. The system handles the statistics -- your team handles the decisions.
