
Agentic Validation

Organizations: Learn about AI-assisted recommendations for report intake and triage.


Note: This feature is currently in beta and is not yet available to all users.

What is Agentic Validation?

Agentic validation is an AI-driven capability that supports report intake within an Agentic Vulnerability Elimination (AVE) workflow. It analyzes incoming security reports and provides structured recommendations that help human reviewers determine how each report should be evaluated and handled.

Agentic validation is explicitly designed as a decision-support tool. It does not automatically take action on reports. Instead, it surfaces recommendations that humans can review, accept, modify, or reject as they see fit.

What Can Agentic Validation Recommend?

Agentic validation can provide recommendations across several key areas of the report intake process. Customers can accept these recommendations, change individual parts of a recommendation (e.g., the closing state or the comments), or reject it entirely.

Areas where agentic validation can make recommendations include:

Report State Recommendations

Agentic validation suggests closing a report with a specific state, such as:

  • Informative

  • Spam

  • Duplicate (See “Duplicate Detection” below)

When making these recommendations, the system generates:

  • A suggested internal explanation to help the human reviewer understand the reasoning behind the recommendation

  • A proposed comment that could be shared with the reporter, if appropriate

To inform these recommendations, agentic validation uses a collection of internal tools that operate within the context of a specific customer’s program. These tools draw on historical report data to support tasks such as duplicate detection and precedence analysis, which examines how similar reports have been handled in the past.
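
To make the shape of these outputs concrete, here is a minimal sketch of how a state recommendation might be modeled. The field names and enum values are illustrative assumptions for this article, not HackerOne’s actual schema.

    from dataclasses import dataclass
    from enum import Enum

    class SuggestedState(Enum):
        INFORMATIVE = "informative"
        SPAM = "spam"
        DUPLICATE = "duplicate"

    @dataclass
    class StateRecommendation:
        suggested_state: SuggestedState
        internal_explanation: str        # reasoning shown to the human reviewer
        reporter_comment: str | None     # draft reply that could be shared, if appropriate
        duplicate_of: str | None = None  # original report ID when state is DUPLICATE

    # Hypothetical example of a surfaced recommendation:
    rec = StateRecommendation(
        suggested_state=SuggestedState.INFORMATIVE,
        internal_explanation="Matches prior informative reports on this program.",
        reporter_comment="Thanks for the submission; per program policy this is informative.",
    )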

Escalation to Human Review

If a report is ambiguous, complex, or would benefit from additional expertise, agentic validation may recommend sending it for human review rather than issuing an immediate outcome. Reports that appear likely to be valid are also sent for further human review.
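
The escalation rule itself is simple to express. The sketch below assumes boolean signals that the real system would derive from its analysis:

    def route_report(is_ambiguous: bool, is_complex: bool, likely_valid: bool) -> str:
        # Anything uncertain, or anything likely to be a real finding,
        # goes to a human rather than receiving an immediate outcome.
        if is_ambiguous or is_complex or likely_valid:
            return "human_review"
        return "recommend_closure"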

Duplicate Detection

As part of its report state recommendations, agentic validation helps your team identify when a newly submitted report describes the same vulnerability as a previous submission. This is called Duplicate Detection. Each incoming report is automatically compared against existing submissions to the program, with the analysis covering both the written report content and relevant contextual signals.

Where available, this analysis can also incorporate information present in attached images, such as screenshots or proof-of-concept visuals. This helps the system recognize recurring issues that may be described differently in text but represent the same underlying vulnerability.

Duplicate detection does not automatically close reports. Instead, agentic validation surfaces a recommendation along with supporting context. Your team can then confirm the match or override the recommendation if the reports describe different issues, ensuring your team maintains control over all triage decisions.

How Duplicate Detection Works

Incoming reports are analyzed against your program’s history to find potential matches. Here's what happens:

1. Finding Candidates

Report history is searched using multiple methods (a sketch of how these signals might combine follows the list):

  • Keyword matching — Searches titles and descriptions for similar terms

  • Semantic similarity — Understands meaning, not just exact wording, so reports using different terminology to describe the same issue can still be matched

  • Technical fingerprinting — Compares specific indicators like affected endpoints, vulnerable parameters, and exploitation methods
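
As a rough illustration, the sketch below combines keyword overlap and fingerprint overlap into a single candidate search. The scoring functions are invented stand-ins; in particular, real semantic similarity would use text embeddings rather than the token overlap shown here.

    from dataclasses import dataclass, field

    @dataclass
    class Report:
        report_id: str
        title: str
        description: str
        endpoints: set[str] = field(default_factory=set)   # technical fingerprint
        parameters: set[str] = field(default_factory=set)

    def keyword_score(a: Report, b: Report) -> float:
        # Jaccard overlap of title/description tokens.
        ta = set((a.title + " " + a.description).lower().split())
        tb = set((b.title + " " + b.description).lower().split())
        return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

    def fingerprint_score(a: Report, b: Report) -> float:
        # Overlap of affected endpoints and parameters.
        fa, fb = a.endpoints | a.parameters, b.endpoints | b.parameters
        return len(fa & fb) / len(fa | fb) if fa | fb else 0.0

    def find_candidates(new: Report, history: list[Report], k: int = 5) -> list[Report]:
        # Rank history by the strongest of the available signals.
        ranked = sorted(history,
                        key=lambda old: max(keyword_score(new, old),
                                            fingerprint_score(new, old)),
                        reverse=True)
        return ranked[:k]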

2. Comparing Reports

For each potential match, a detailed comparison is performed:

  • Same vulnerability type? — Checks whether both reports describe the same category of issue (e.g., both are SQL injection, both are XSS)

  • Same target? — Compares the specific endpoint, parameter, or component affected

  • Same exploitation method? — Examines whether the attack vector and reproduction steps are identical

  • Same root cause? — Determines whether both reports stem from the same underlying flaw

The system weighs these factors together. Two reports might share a vulnerability type but affect different endpoints—that's not a duplicate. Or they might use different terminology but describe the exact same issue—that is a duplicate.
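
One way to picture this weighing is a weighted score over the four questions above. The weights below are invented for illustration; target and root cause dominate because a shared vulnerability type alone does not make a duplicate:

    def duplicate_likelihood(same_type: bool, same_target: bool,
                             same_method: bool, same_root_cause: bool) -> float:
        # Booleans act as 0/1 in the weighted sum.
        return (0.15 * same_type + 0.35 * same_target
                + 0.20 * same_method + 0.30 * same_root_cause)

    # Same type but a different endpoint scores low: not a duplicate.
    assert duplicate_likelihood(True, False, True, False) < 0.5
    # Same target, method, and root cause scores high: likely a duplicate.
    assert duplicate_likelihood(True, True, True, True) > 0.9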

3. Validating the Match

Before a recommendation is surfaced, several checks are applied (see the sketch after this list):

  • The potential original must have been submitted first

  • Self-closed reports without review are not used as originals

  • Resolved issues that reappear may be regressions, not duplicates (see below)
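
These checks could be expressed as a small gate in front of any recommendation. The 90-day window below is an invented threshold for this sketch, not a documented value:

    from datetime import date, timedelta

    def validate_match(original_submitted: date, new_submitted: date,
                       original_self_closed: bool,
                       original_resolved_on: date | None) -> str:
        if original_submitted >= new_submitted:
            return "reject"            # the original must have been submitted first
        if original_self_closed:
            return "reject"            # self-closed reports are not used as originals
        if (original_resolved_on is not None
                and new_submitted - original_resolved_on > timedelta(days=90)):
            return "flag_regression"   # a long-resolved issue may have regressed
        return "ok"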

4. Surfacing a Recommendation

The recommendation includes the following (an illustrative example follows the list):

  • The proposed original report

  • Key similarities that support the match

  • Guidance on whether you can act immediately or should investigate further
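
Put together, a surfaced recommendation might look like the following. All field names and values are hypothetical:

    recommendation = {
        "proposed_original": "report-1042",
        "key_similarities": [
            "Both target the /login endpoint",
            "Same reflected-XSS payload and vulnerable parameter",
        ],
        "guidance": "High-confidence match; safe to act immediately.",
    }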

Closing as Duplicate

When you accept a duplicate recommendation, the report is closed as Duplicate with a reference to the original report. The reporter will see their report was closed as a duplicate and can view the original report it was linked to.

If you disagree with the recommendation, you can override it and continue triaging normally.

Duplicates vs. Regressions

Agentic validation distinguishes between true duplicates and potential regressions:

  • Duplicate: A new report describing the same vulnerability that's already been reported and is still open or was recently addressed.

  • Regression: A vulnerability that was previously fixed but has reappeared. If the original report was resolved some time ago, the new submission is flagged for investigation rather than recommended as a straightforward duplicate, because the issue may have returned after being fixed.

When a potential regression is detected, you'll see guidance to investigate whether:

  • The original fix was incomplete

  • A code change reintroduced the vulnerability

  • This is genuinely a new instance that should be tracked separately

What Makes Two Reports Duplicates?

The same vulnerability instance:

  • Identical vulnerability on the exact same target/endpoint

  • Same root cause and exploitation method

  • Reports explicitly reference the same issue

Different instances (not duplicates):

  • Same vulnerability type but different endpoints (e.g., SQL injection on /api/users vs /api/orders)

  • Same systematic issue affecting different components

  • Similar technique but different vulnerable elements

Example: Two SQL injection reports on your website are NOT duplicates if they affect different API endpoints—even if the root cause (inadequate input validation) is the same. They represent two separate vulnerability instances that each need to be addressed.

Special Scenarios

When the Original Report Was Closed

If the proposed original was closed as Informative or Not Applicable:

  • If the same reasoning applies to the new report, it may be recommended as a duplicate

  • If circumstances have changed, the system flags this for your review

When the Same Researcher Submits Both

If a researcher submits a report similar to one they previously submitted:

  • If no new information is provided, it may be a straightforward duplicate

  • If additional details or evidence are included, the system flags the report for review to determine whether the new information changes the assessment

When Vulnerability Types Differ

Even if other characteristics match, reports with different vulnerability classifications (e.g., XSS vs. CSRF) are flagged for human review rather than marked as clear duplicates.
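
The three special scenarios above reduce to a small set of rules. The sketch below is illustrative only; in every branch, a human still reviews the result:

    def special_scenario_action(types_match: bool,
                                same_researcher: bool, has_new_info: bool,
                                original_closed_as: str | None,
                                circumstances_changed: bool) -> str:
        if not types_match:
            return "flag_for_review"   # e.g., XSS vs. CSRF is never a clear duplicate
        if same_researcher and has_new_info:
            return "flag_for_review"   # new evidence may change the assessment
        if (original_closed_as in ("informative", "not_applicable")
                and circumstances_changed):
            return "flag_for_review"   # the original reasoning may no longer apply
        return "recommend_duplicate"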

Severity Recommendations

Agentic validation can also incorporate input from the Severity Agent when suggesting a severity level for a report. In this workflow, agentic validation uses the Severity Agent’s CVSS-based assessment as an input to its recommendation, while the final severity decision always remains with the human reviewing the report.
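
The CVSS side of this is well defined. CVSS v3.1 maps a numeric score to a qualitative rating as shown below; how the Severity Agent arrives at the score itself is outside the scope of this sketch:

    def cvss_to_rating(score: float) -> str:
        # Standard CVSS v3.1 qualitative severity scale.
        if score == 0.0:
            return "None"
        if score <= 3.9:
            return "Low"
        if score <= 6.9:
            return "Medium"
        if score <= 8.9:
            return "High"
        return "Critical"

    assert cvss_to_rating(7.5) == "High"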

Asset Detection

Agentic validation can identify the asset affected by the issue described in a security report. By determining which asset a report applies to, this analysis helps reviewers, both human and agentic, better understand the context of the finding.

Accurate asset identification enables several important checks, including:

  • Whether the reported issue is in scope for the program

  • What bounty tier or reward range the asset may be eligible for

  • How the vulnerability should be evaluated in terms of impact and remediation priority

By surfacing asset context early, agentic validation helps reduce ambiguity and supports more consistent, well-informed report evaluation.
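
Once an asset is identified, these checks become straightforward lookups. The structure below is an assumption for illustration:

    from dataclasses import dataclass

    @dataclass
    class Asset:
        identifier: str    # e.g., "api.example.com"
        in_scope: bool
        bounty_tier: str   # e.g., "tier-1"

    def find_asset(report_target: str, program_assets: list[Asset]) -> Asset | None:
        for asset in program_assets:
            if report_target.endswith(asset.identifier):
                return asset
        return None  # no match: the report may be out of scope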

Weakness (CWE) Detection

Agentic validation can also identify the class of weakness described in a security report, using the Common Weakness Enumeration (CWE) framework.

Detecting the relevant CWE helps reviewers and customers understand:

  • The type of vulnerability being reported

  • How the issue fits into known vulnerability categories

This classification provides additional structure and context when assessing report validity and quality, and it supports clearer communication between researchers, customers, and reviewers.
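
A detected weakness can be represented compactly. CWE-89 is the real identifier for SQL injection; the surrounding structure is an assumption for this sketch:

    detected_weakness = {
        "cwe_id": "CWE-89",
        "name": "Improper Neutralization of Special Elements used in an SQL Command",
        "category": "Injection",
    }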

What do Customers and Researchers See?

For a given report, end-user customers may see agentic validation output surfaced on the Report page. This information is intended to help customers better understand how the report was evaluated, including signals related to the report’s validity, quality, and alignment with program guidelines.

These outputs provide additional context for customers and human reviewers, helping them understand why a report was considered valid, sent for further review, or closed with a particular outcome. They are not automated decisions, but supporting signals that inform the final judgment.

Customers can also view the agentic validation reasoning alongside the outputs. This reasoning highlights the key factors and signals that informed the recommendation, providing additional transparency into how the report was evaluated and why a particular outcome was suggested.

In all cases, humans retain full control. They can edit, reject, or replace any suggested outcome or comment before it is finalized or shared.

Researchers do not see the agentic validation recommendations directly. However, if a report is closed based on an agentic validation recommendation, a comment generated or assisted by AI may be posted to the report.

Common Customer Questions

Does agentic validation automatically close reports?

No. Agentic validation does not take action on its own. All recommendations must be reviewed and applied by a human, who remains fully responsible for the final decision.

Does this change how quickly reports are triaged?

It can. While agentic validation does not enforce timelines or replace existing SLAs, we actively monitor the time it takes for reports to complete the triage process.

In practice, we have observed improvements in triage completion time when this capability is in use, compared to workflows without it. By surfacing relevant context, historical precedent, and early recommendations, it helps reduce manual effort and decision time for both customers and human reviewers.

Actual triage timelines still depend on report complexity and each program’s workflow, but agent-assisted triage has shown measurable efficiency gains.

How are recommendations determined?

A range of agentic tools is used to inform recommendations. These include checks for commonly informative report types, asset scope validation, and precedence analysis. Together, these help ensure that program guidelines are respected and that recommendations are consistent with how similar reports have been handled historically.

Will researchers know an AI was involved in the decision?

Researchers may see comments that were generated or assisted by AI, but they do not interact directly with the agent. Humans remain responsible for all communication and final outcomes.

Is customer data shared across all HackerOne customers?

No. Agentic validation operates strictly within the context of an individual customer’s organization. Historical analysis and recommendations are scoped to that program and are not shared across customers. Read more on Hai’s security in our Security & Trust documentation.
