Product Updates

Validio 7.0: Smarter anomaly detection with AI feedback and retraining

November 4, 2025
Sophia Granfors

Continuous iteration and improvement are core to precise anomaly detection. With Validio 7.0, we’re taking another step toward making data quality monitoring smarter and more adaptive.

This release introduces improvements to model feedback and retraining, allowing your AI-powered anomaly detection to learn directly from your input and become more accurate over time.

In this blog post, we'll cover how the updated model feedback works. See the changelog for all details of the 7.0 release.

How model retraining works

At the core of Validio’s dynamic, AI-powered thresholds is a continuously learning model. While the model is precise by default, only you truly know your data. By giving feedback on false positives (incorrectly flagged anomalies) and false negatives (missed incidents), you teach the model to understand your data’s true behavior.

Every piece of feedback contributes to a targeted retraining process for that specific segment of data. This makes anomaly detection more context-aware and precise, reducing noise without missing important deviations.
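As a conceptual sketch (not Validio's actual internals; all names here are illustrative), segment-targeted retraining can be thought of as routing each feedback record into a per-segment bucket, so only the affected segments' models are retrained:

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical feedback record; field names are illustrative,
# not Validio's actual API.
@dataclass
class Feedback:
    segment: str   # e.g. "region=EU"
    value: float   # the data point the feedback refers to
    kind: str      # "false_positive" or "false_negative"

def group_by_segment(feedback):
    """Route each feedback record to its segment so only that
    segment's model needs retraining."""
    buckets = defaultdict(list)
    for fb in feedback:
        buckets[fb.segment].append(fb)
    return dict(buckets)

feedback = [
    Feedback("region=EU", 120.0, "false_positive"),
    Feedback("region=EU", 95.0, "false_positive"),
    Feedback("region=US", 40.0, "false_negative"),
]
buckets = group_by_segment(feedback)
segments_to_retrain = sorted(buckets)  # only these segments retrain
```

Segments with no new feedback keep their existing model untouched, which is what keeps retraining targeted rather than global.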

The anomaly detection model is designed to ignore outdated feedback that no longer fits your data. If the structure or scale of a metric changes (for example, a conversion rate definition changes in your warehouse), the model automatically excludes older feedback that’s no longer relevant.

This ensures retraining remains grounded in your current data context.
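One plausible way to picture this exclusion (a simplified sketch under assumed logic, not Validio's actual heuristic) is to drop any feedback whose referenced value now sits far outside the metric's current distribution:

```python
def is_stale(feedback_value, current_mean, current_std, k=5.0):
    """Treat feedback as outdated when the value it refers to sits far
    outside the metric's current distribution (e.g. after a rescale)."""
    return abs(feedback_value - current_mean) > k * current_std

# Conversion rate used to be a percentage (0-100) but is now a
# fraction (0-1), so feedback recorded on the old scale is excluded.
old_feedback_values = [42.0, 57.0]
current_mean, current_std = 0.5, 0.1
usable = [v for v in old_feedback_values
          if not is_stale(v, current_mean, current_std)]
# usable is empty: the rescaled metric invalidates the old feedback
```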

False positive feedback: avoiding false alarms

A false positive occurs when the model flags a data point as anomalous even though it’s within an acceptable range.

When you mark an incident as False Positive, Validio learns that it was too sensitive in that situation and widens the threshold bounds slightly for similar contexts in the future.

This feedback loop helps the model become more tolerant to normal fluctuations, reducing noise and alert fatigue across your monitors.

Let's look at an example:

If your TOTAL_SALES_AMOUNT validator flags small, expected spikes as anomalies, marking them as False Positive tells the model to relax around that magnitude, so future data points in a similar range aren’t incorrectly flagged again.
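The widening effect can be sketched in a few lines. This is a deliberately simplified illustration of the idea, not the model's real update rule:

```python
def widen_bounds(lower, upper, value, factor=0.1):
    """On false-positive feedback, widen the bounds so the flagged
    value (and similar ones) fall inside next time."""
    margin = factor * (upper - lower)
    return min(lower, value) - margin, max(upper, value) + margin

lower, upper = 100.0, 200.0                       # current dynamic bounds
lower, upper = widen_bounds(lower, upper, 210.0)  # 210 was a false alarm
assert lower <= 210.0 <= upper   # a similar spike is no longer flagged
```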

False negative feedback: ensuring no issues are missed

A false negative happens when a genuine anomaly slips through because the model’s bounds have become too wide, often after periods of high variance.

You can now click directly on a non-incident data point in the graph and mark it as False Negative. This feedback tells the model that it wasn’t sensitive enough, prompting it to tighten its decision bounds for that data pattern.

False negative feedback speeds up recalibration, improving detection of subtle, context-specific anomalies without introducing excess noise.

Let's take another example:

Marking a non-incident data point that should have been flagged as a False Negative tells the model it wasn't sensitive enough and prompts it to tighten the decision bounds.
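The tightening effect is the mirror image of the false-positive case. Again a simplified sketch of the idea, not the model's real update rule:

```python
def tighten_bounds(lower, upper, missed_value):
    """A genuine anomaly fell inside the bounds; pull the nearer bound
    past it so the same value would now be flagged."""
    mid = (lower + upper) / 2
    if missed_value >= mid:
        upper = missed_value - 1e-9   # nudge the upper bound below it
    else:
        lower = missed_value + 1e-9   # nudge the lower bound above it
    return lower, upper

lower, upper = 100.0, 200.0
lower, upper = tighten_bounds(lower, upper, 180.0)  # 180 was missed
assert not (lower <= 180.0 <= upper)  # 180 now falls outside the bounds
```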

Reverting feedback

Validio 7.0 also introduces the ability to revert feedback. If you accidentally flagged an issue as a false positive or negative, or if the context has changed, you can easily revert it. Think of it as Ctrl+Z for your model feedback.

  • To undo False Negative feedback, click the data point again and toggle it back.
  • To undo False Positive feedback, simply change the incident status to another state (like Triage).

This gives you full control over your feedback history and model behavior.
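One plausible mental model of a revert (an assumption for illustration, not how Validio necessarily implements it) is simply dropping the record from the feedback history, so the next retraining run proceeds without it:

```python
def revert_feedback(feedback_log, feedback_id):
    """Reverting as removal: the next retraining run simply
    proceeds without the dropped record."""
    return [fb for fb in feedback_log if fb["id"] != feedback_id]

log = [
    {"id": 1, "kind": "false_positive"},
    {"id": 2, "kind": "false_negative"},
]
log = revert_feedback(log, 2)  # undo the false-negative feedback
# only the false-positive record remains for the next retraining run
```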

When to adjust sensitivity vs. giving model feedback

There are multiple ways to adapt the AI-powered thresholds in Validio. Both sensitivity and model feedback influence how the threshold behaves, but they serve different purposes. In practice, use sensitivity to establish your baseline tolerance, and model feedback for precise, ongoing refinement.

With the new model feedback and retraining capabilities in Validio 7.0, your monitoring system becomes a living model of your data, adapting to its unique rhythms and evolving as your business does.

The result:

  • Fewer false alarms
  • Better detection of meaningful anomalies
  • Smarter, context-aware thresholds that improve automatically

Want to try Validio 7.0?

Get in touch

Book a demo