Country and Region Benchmarking Filter
A new benchmark filter for geography-based peer cohorts has been added to all charts that support custom benchmarking.
What we did:
Added a new Country/Region benchmark filter with NA, LA, APAC, EMEA, and a full country list. A minimum of 10 organizations must match the filter criteria for the custom benchmark to be applied.
Why we did it:
To improve the relevance of peer comparisons and to support QBR workflows.
Who it helps:
Program Managers, CSMs.
How to use it:
Click into a chart that has custom benchmarking functionality.
Scroll down to the Benchmarks section of the page and click Add benchmark.
Name your benchmark, select your measurement, and choose a colour for the line on the chart.
Add filters by clicking Add filter. Select Country/Region and pick the countries/regions you wish to include/exclude from the filter.
When you have finished, click Save.
Note: Once created, benchmarks can be edited or deleted in the chart's Explore view.
Documentation: Docsite page
Custom Benchmarking Added To The Total Rewards Chart
The Total Rewards chart now supports custom benchmarks, so teams can compare rewards trends against a cohort matched to their context.
What we did:
We added custom benchmarking to the Total Rewards chart.
Why we did it:
Customers and CS teams use benchmarking to assess program performance and prepare business reviews. This release advances the migration of customer-eligible benchmarking from the Looker benchmarking dashboard into Analytics.
Who it helps:
Customers who need rewards and ROI narratives with peer context.
How to use it:
Open Analytics -> Rewards.
Click into the Total Rewards chart.
Scroll down to the Benchmarks section of the page and click Add benchmark.
Name your benchmark, select your measurement, and choose a line colour for the chart.
Add filters by clicking Add filter.
When you have finished, click Save.
Bounty Paid Analytics Filter
A new ‘Bounty Paid?’ filter lets users scope dashboards to results that ended in a reward.
What we did:
Added a new filter called ‘Bounty Paid?’ with Yes and No options. When selected, the filter includes only results with a bounty reward (retest payments and bonuses are excluded). The filter has been applied across many charts on the Submissions, Hacker Engagement, Response Efficiency, Mediation, and Executive dashboards.
Why we did it:
Users needed a direct way to answer reward-driven questions without manual exports.
Who it helps:
CS and customers reporting on rewarded outcomes and reward conversion.
How to use it:
Open a supported Analytics dashboard, add the ‘Bounty Paid?’ filter, then select ‘Yes’ for bounty-rewarded-only outcomes or ‘No’ for non-rewarded outcomes.
Agentic Validation
Agentic Validation is now generally available — a new Hai capability that helps customers move from "Is this real?" to "What should we do next?" faster, with clear reasoning and evidence.
"The agent excels at typical detection and reasoning. It's really good at finding reports, overlapping reports." — Shopify
"It's wowed the team. They are starting to trust it more and thinking that it's more valuable because it's guiding them and helping them and not leading them astray." — Zoom
"I found the organization of the agent reasoning extremely helpful as far as being able to understand exactly how it came up with the finding. Super helpful to be able to pick through its individual steps." — Veterans United
What we did
Built a multi-agent system that runs coordinated checks on every submitted report: scope validation, eligibility, duplicate detection, and priority assessment
Delivered a single recommended next step with rationale, so analysts get a decision-ready outcome instead of raw data
Surfaced Similar Reports context so analysts can see precedent from past decisions at a glance
Why we did it
Noise is accelerating. Duplicates, out-of-scope submissions, and AI-generated junk reports are growing rapidly, and analysts are spending most of their time filtering noise rather than validating real vulnerabilities. After five months in production, teams saw a 56% reduction in time to validate with a 94% recommendation acceptance rate. Security analysts still handle triage end-to-end; this agent enhances their expert oversight.
Who it helps
Hai Triage analysts: faster, more consistent validation with agent-backed recommendations
Customer security teams: reduced manual workload, consistent outcomes at scale
Self-triaging customers: clear, decision-ready recommendations with visibility into what was checked and why
How to use it
Agentic Validation runs automatically on new report submissions, but doesn't take action on its own; it provides recommendations for human review. For customers using Hai Triage, HackerOne analysts review the recommendations and take action. For customers without Hai Triage, their team can review recommendations and take action directly.
Available to all Hai Triage customers and customers with CTEM Platform entitlements.
Automations REST API is Now Live!
We shipped a major integration capability yesterday: six new REST API endpoints for automation management and monitoring.
What we did:
We introduced six new API endpoints that expose automation management and execution capabilities through the HackerOne Customer API.
These endpoints allow customers to:
• Discover automations available within their organization
• Retrieve detailed automation configurations
• View historical execution data and run statuses
• Trigger automations programmatically on demand
• Inspect individual run results and error states
• Retrieve execution logs for debugging and auditing
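The "trigger automations programmatically" capability above can be sketched as a plain HTTP call. This is a minimal, illustrative sketch: the endpoint path, the organization and automation identifiers, and the credential values are all assumptions for demonstration, not the documented contract — consult the Customer API reference for the actual routes. (The Customer API does use HTTP Basic auth with an API identifier and token.)

```python
import base64

# Hypothetical base URL and route shape; check the HackerOne Customer API
# reference for the real endpoint paths.
API_BASE = "https://api.hackerone.com/v1"

def build_trigger_request(org_id: str, automation_id: str,
                          api_user: str, api_token: str) -> dict:
    """Assemble the pieces of a request that triggers an automation run.

    The path below is an illustrative assumption, not the documented route.
    """
    credentials = f"{api_user}:{api_token}".encode()
    return {
        "method": "POST",
        "url": f"{API_BASE}/organizations/{org_id}/automations/{automation_id}/runs",
        "headers": {
            # The Customer API authenticates with HTTP Basic auth
            # (API identifier + token).
            "Authorization": "Basic " + base64.b64encode(credentials).decode(),
            "Accept": "application/json",
        },
    }

# Example values only — substitute your own org, automation ID, and credentials.
req = build_trigger_request("my-org", "1234", "api-user", "secret-token")
print(req["method"], req["url"])
```

A CI/CD step would send this request after a deployment completes, then poll the run-status endpoint until the run finishes.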
Why we did it:
Until now, automations could only be triggered and monitored via the web interface. This posed a challenge for customers who wished to incorporate automations into their own operational and security workflows.
By providing access to automations via endpoints, we enable customers to integrate them into their existing tooling ecosystem, making them easier to orchestrate, monitor, and audit at scale.
Who it helps:
Customers can now integrate automations into their own workflows:
• CI/CD pipelines can trigger automations after deployments
• Monitoring systems can track automation success rates
• External schedulers can run automations on custom schedules
• Third-party systems can audit automation execution and outcomes
How to use it:
Two real-world examples:
Customers can now set up proactive monitoring that alerts them immediately when automations break; no more lost productivity from failed workflows going unnoticed for days.
Teams can now manage automation code like any software project, track changes, enable peer review, and let multiple stakeholders collaborate without sharing admin credentials.
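The first example — proactive monitoring that alerts on broken automations — boils down to scanning recent run records for failure statuses. A minimal sketch, assuming a run-record shape with `id`, `automation`, and `status` fields (an illustrative assumption, not the documented response schema):

```python
from typing import Iterable

def failed_runs(runs: Iterable[dict]) -> list[dict]:
    """Return the runs whose status indicates failure."""
    # "failed" and "errored" are assumed status values for illustration.
    return [r for r in runs if r.get("status") in {"failed", "errored"}]

# Sample data shaped like what a runs endpoint might return (assumption).
sample = [
    {"id": "run-1", "automation": "close-dupes",  "status": "succeeded"},
    {"id": "run-2", "automation": "close-dupes",  "status": "failed"},
    {"id": "run-3", "automation": "notify-slack", "status": "errored"},
]

for run in failed_runs(sample):
    # A real monitor would page on-call or post to a chat channel here.
    print(f"ALERT: automation '{run['automation']}' run {run['id']} "
          f"ended with status {run['status']}")
```

Run on a schedule against the execution-history endpoint, a check like this surfaces failed workflows within minutes instead of days.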
Custom Benchmarking on the Hacker Participation Chart
The Hacker Participation chart now supports custom benchmarks for peer-cohort comparisons.
What we did:
We added custom benchmarking to the Hacker Participation chart.
Why we did it:
Customers and CS teams use benchmarking to assess program performance and prepare business reviews. This release advances the migration of customer-eligible benchmarking from the Looker benchmarking dashboard into Analytics. Use this when an account asks whether hacker engagement looks low for their peer group.
Who it helps:
Customers tracking hacker engagement and participation trends.
How to use it:
Open Analytics -> Hacker Engagement.
Click into the Hacker Participation chart.
Scroll down to the Benchmarks section of the page and click Add benchmark.
Name your benchmark, select your measurement, and choose a line colour for the chart.
Add filters by clicking Add filter.
When you have finished, click Save.
Documentation: Docsite
Analytics Dashboard/Chart Point Labels Are Visible By Default
Chart point labels are now shown by default, and users can toggle them off when needed at either the dashboard or Explore level.
What we did:
We enabled point labels by default, with a toggle at both the dashboard and explore levels. Explore inherits the dashboard setting on entry, and changes in Explore do not propagate back to the dashboard. Exports include point labels when labels are enabled.
Why we did it:
Charts required hovering to see values, which blocked screenshot-ready outputs for decks and business reviews.
Who it helps:
Customers and CSMs building internal reports and business review materials.
How to use it:
Open an Analytics dashboard and view any supported chart. Point labels appear by default. Use the labels toggle to hide values, and export charts with labels enabled for deck-ready visuals.





