
Agentic Validation

Organizations: Learn about AI-assisted recommendations for report intake and triage.


What is Agentic Validation?

Agentic validation is an AI-assisted workflow that supports triage by running consistent checks and consolidating the results into an evidence-backed recommendation for human approval. It analyzes incoming security reports and produces structured recommendations that help human reviewers determine how each report should be evaluated and handled.

Agentic validation is explicitly designed as a decision-support tool. It does not automatically take action on reports. Instead, it surfaces recommendations that humans can review, accept, modify, or reject as they see fit.

Agentic Validation is available to all Hai Triage customers and to customers with the CTEM platform entitlement.

How it Evaluates Reports

The workflow evaluates each incoming report across key decision points, including:

  • Scope alignment: Does the report match what your program accepts?

  • Policy eligibility: Does it meet program rules and requirements?

  • Duplication: Does it match known issues or prior submissions?

  • Priority guidance: What priority fits based on evidence and program precedent?

It then merges these outputs into a single recommendation with a supporting rationale, so teams can move from the report to the next steps faster without sacrificing consistency.

What Recommendations it Provides

It provides a single consolidated recommendation, supported by evidence from your program’s history. Reviewers can accept it, edit parts of it (like outcome, priority, or comments), or reject it.

It can recommend:

  • Outcome: suggested report state (for example: duplicate, spam, informative) and whether to route for deeper review

  • Reasoning and messaging: an internal explanation plus a draft comment for the reporter

  • Duplicate detection: likely matches to prior reports with supporting similarities (it never closes automatically)

  • Context for prioritization: suggested priority inputs like affected asset, weakness category (CWE), and severity signals (including CVSS-based guidance when available)

Report State Recommendations

It suggests closing a report with a specific state, such as:

  • Informative

  • Spam

  • Duplicate (see "Duplicate Detection" below)

When making these recommendations, the system generates:

  • A suggested internal explanation to help the human reviewer understand the reasoning behind the recommendation

  • A proposed comment that could be shared with the reporter, if appropriate

To inform these decisions, these recommendations use a collection of internal tools that operate within the context of a specific customer’s program. These tools draw on historical report data to support tasks such as duplicate detection and precedence analysis, which examines how similar reports have been handled in the past.

Escalation to Human Review

If a report is ambiguous, complex, or would benefit from additional expertise, the workflow may recommend sending it for human review rather than issuing an immediate outcome.

If a report is likely to be valid, it is routed for further human review so that issue reproduction can continue.

Duplicate Detection

As part of its report state recommendations, it helps your team identify when a newly submitted report describes the same vulnerability as a previous submission. This is called Duplicate Detection. To do this, the report is automatically compared against existing submissions to the program, analyzing both the written report content and relevant contextual signals.

Where available, this analysis can also incorporate information present in attached images, such as screenshots or proof-of-concept visuals. This helps better recognize recurring issues that may be described differently in text but represent the same underlying vulnerability.

Duplicate detection does not automatically close reports. Instead, agentic validation surfaces a recommendation along with supporting context. Your team can then confirm the match or override the recommendation if the reports describe different issues, ensuring your team maintains control over all triage decisions.
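Purely as a toy illustration of the idea (the real matching uses internal tools plus contextual and image signals; the method and threshold here are assumptions), text-similarity flagging of candidate duplicates could look like:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a ratio in [0, 1] of how similar two report bodies are."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def duplicate_candidates(new_report: str, prior_reports: list[str],
                         threshold: float = 0.75) -> list[tuple[float, str]]:
    """Surface likely duplicates for human review; never auto-close."""
    scored = [(similarity(new_report, p), p) for p in prior_reports]
    return [(s, p) for s, p in sorted(scored, reverse=True) if s >= threshold]
```

A reviewer would still inspect each candidate and confirm or override the match, exactly as described above.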

You’ll also receive a suggested severity level based on CVSS assessment signals, which feed into the overall intake recommendation.

A human reviewer makes the final severity decision and can accept, adjust, or reject the suggestion.
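The CVSS-based signals mentioned above map a numeric base score onto a qualitative band. For reference, the standard CVSS v3.1 rating scale from the FIRST specification (how agentic validation weighs these signals internally is not documented here) is:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.1 base score to its qualitative rating (FIRST spec)."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```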

Learn more about this feature in our Agentic Duplicate Detection document.

Asset Identification

This analysis can identify which asset a security report affects. It helps reviewers understand the context of the finding and apply the right scope, policy, and prioritization decisions.

Accurate asset identification enables several important checks, including:

  • Whether the reported issue is in scope for the program

  • What bounty tier or reward range the asset may be eligible for

  • How the vulnerability should be evaluated in terms of impact and remediation priority
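The first of those checks, scope matching, can be pictured with a minimal sketch (the scope patterns and matching rules here are hypothetical; real programs also use CIDR ranges, app identifiers, and richer rules):

```python
from fnmatch import fnmatch

# Hypothetical program scope entries for illustration only.
SCOPE = ["*.example.com", "example.com"]

def in_scope(asset: str) -> bool:
    """Check whether a reported asset matches any program scope pattern."""
    a = asset.lower()
    return any(fnmatch(a, pattern) for pattern in SCOPE)
```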

By surfacing asset context early, agentic validation helps reduce ambiguity and supports more consistent, well-informed report evaluation.

Weakness (CWE) Detection

The class of weakness described in a security report is identified using the Common Weakness Enumeration (CWE) framework.

Detecting the relevant CWE helps reviewers and customers understand:

  • The type of vulnerability being reported

  • How the issue fits into known vulnerability categories

This classification provides additional structure and context when assessing report validity and quality, and it supports clearer communication between researchers, customers, and reviewers.

How Recommendations Appear

For a given report, end-user customers will see agentic validation output surfaced on the Report page. This information is intended to help customers better understand how the report was evaluated, including signals related to the report’s validity, quality, and alignment with program guidelines.

These outputs provide additional context for customers and human reviewers, helping them understand why a report was considered valid, sent for further review, or closed with a particular outcome. They are not automated decisions, but supporting signals that inform the final judgment.

Customers can also view the agentic validation reasoning alongside the outputs. This reasoning highlights the key factors and signals that informed the recommendation, providing additional transparency into how the report was evaluated and why a particular outcome was suggested.

In all cases, humans retain full control. They can edit, reject, or replace any suggested outcome or comment before it is finalized or shared.

Researchers do not see the agentic validation recommendations directly. However, if a report is closed based on an agentic validation recommendation, a comment generated or assisted by AI may be posted to the report.

Common Customer Questions

Does agentic validation automatically close reports?

No. Agentic validation does not take action on its own. All recommendations must be reviewed and applied by a human, who remains fully responsible for the final decision.

Does this change how quickly reports are triaged?

It can. While agentic validation does not enforce timelines or replace existing SLAs, we actively monitor the time it takes for reports to complete the triage process.

In practice, we have observed improvements in triage completion time when this capability is in use, compared to workflows without it. It helps reduce manual effort and decision time for both customers and human reviewers by surfacing relevant context, historical precedent, and early recommendations.

Actual triage timelines still depend on report complexity and each program’s workflow, but agent-assisted triage has shown measurable efficiency gains.

Learn more about Hai Triage.

How are recommendations determined?

A range of agentic tools are used to inform recommendations. These include checks for commonly informative report types, asset scope validation, and precedence analysis. Together, these help ensure that program guidelines are respected and that recommendations are consistent with how similar reports have been handled historically.

Will researchers know an AI was involved in the decision?

Researchers do not see internal recommendations. Researchers may see comments that were generated or assisted by AI, but they do not interact directly with the agent. Humans remain responsible for all communication and final outcomes.

Is customer data shared across all HackerOne customers?

No. Agentic validation operates strictly within the context of an individual customer’s organization. Historical analysis and recommendations are scoped to that program and are not shared across customers. Read more on Hai’s security in our Security & Trust documentation.
