Bounty Insights
We’ve launched Bounty Insights, a new platform capability that transforms historical bug bounty findings into actionable intelligence. With Bounty Insights, customers can identify systemic weaknesses, track program effectiveness, and plan smarter testing strategies, turning their bounty programs into continuous improvement loops. The result: better visibility, more targeted testing, and measurable security ROI powered directly by each customer’s own program data.
What We Did:
We built Bounty Insights to:
Surface vulnerability patterns, attack themes, and systemic weaknesses.
Track recurring findings to measure program maturity and fix effectiveness.
Generate recommendations to guide future pentests, challenges, and spot checks.
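To make the recurrence tracking concrete, here is a minimal illustrative sketch in Python. The data shape, field names, and "seen more than once" threshold are all assumptions for the example, not the platform's actual schema or logic:

```python
from collections import Counter

# Hypothetical shape of a historical finding; field names are
# illustrative, not the platform's actual schema.
findings = [
    {"weakness": "CWE-79 (XSS)", "asset": "app.example.com"},
    {"weakness": "CWE-79 (XSS)", "asset": "api.example.com"},
    {"weakness": "CWE-639 (IDOR)", "asset": "api.example.com"},
    {"weakness": "CWE-79 (XSS)", "asset": "app.example.com"},
]

# Count how often each weakness class recurs across the program's history.
recurrence = Counter(f["weakness"] for f in findings)

# Any class seen more than once is a candidate systemic weakness,
# worth a targeted pentest, challenge, or spot check.
for weakness, count in recurrence.most_common():
    if count > 1:
        print(f"{weakness}: {count} findings -> recommend focused testing")
```

Running this flags CWE-79 as a recurring pattern; the real capability applies the same idea across a customer's full finding history.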
Why We Did It:
Customers want to see progress, prove ROI, and use their own data to guide decisions. Historically, vulnerability data lived in isolation — making it hard to connect trends or measure improvement. Bounty Insights addresses that by:
Using real attack data to identify recurring weaknesses.
Helping teams plan targeted, high-impact testing.
Enabling CSMs and admins to demonstrate measurable program maturity.
Supporting renewals and expansions through clear proof of value.
This aligns directly with our goal to deliver continuous learning loops and actionable intelligence across the vulnerability lifecycle.
Who It Helps:
Program Admins: Track recurring issues, measure program effectiveness, and improve testing strategy.
Security Teams: Focus remediation on the most impactful areas and reduce repeat vulnerabilities.
Executives: Visualize progress and communicate security program maturity to stakeholders.
Learn more on the Bounty Insights doc!
Report Assistant Agent
We’re excited to begin the phased rollout of the Report Assistant Agent! Integrated directly into the report submission flow, it helps researchers build complete, structured, and guidelines-compliant vulnerability reports from the start. The Report Assistant streamlines the submission process, reduces errors, flags scope issues early, and improves overall report quality before a report ever reaches triage. The result: fewer incomplete or out-of-scope reports, faster triage, and more actionable findings for customers.
What We Did:
We integrated the Hai Report Assistant Agent into the hacker submission form as a real-time assistant that:
Suggests clear, professional rewrites of vulnerability descriptions
Flags missing fields such as repro steps or impact details
Runs pre-checks for scope violations and common ineligible findings (CIF/CIR)
Validates report completeness without blocking submission
This system combines co-writer and reviewer roles, making it the first step in an AI-assisted vulnerability management lifecycle.
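As a rough mental model of the reviewer role, the sketch below shows how non-blocking pre-checks might work: every check returns an advisory warning rather than an error, so the researcher can always still submit. The function, field names, and scope list are illustrative assumptions, not the actual Hai implementation:

```python
REQUIRED_FIELDS = ["title", "description", "repro_steps", "impact"]

# Illustrative in-scope list; real programs define scope in their policy.
IN_SCOPE = ["app.example.com", "api.example.com"]

def precheck(report: dict) -> list[str]:
    """Return advisory warnings; never blocks submission."""
    warnings = []
    # Flag missing fields such as repro steps or impact details.
    for field in REQUIRED_FIELDS:
        if not report.get(field):
            warnings.append(f"Missing field: {field}")
    # Flag likely scope violations before the report is submitted.
    if report.get("asset") and report["asset"] not in IN_SCOPE:
        warnings.append(f"Asset {report['asset']!r} may be out of scope")
    return warnings

draft = {"title": "Stored XSS in profile page", "asset": "blog.example.com"}
for w in precheck(draft):
    print("warning:", w)  # shown to the researcher; submission still allowed
```

The key design point this illustrates is the "validate without blocking" behavior: warnings guide the hacker toward a complete, in-scope report without ever stopping them from submitting.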
Why We Did It:
Incomplete or inconsistent reports waste analyst time, slow down validation, and delay customer impact. By guiding researchers before submission, we:
Enforce structure and reduce invalid submissions
Accelerate validation and delivery
Empower hackers to submit higher-quality, policy-aligned reports
Free analysts to focus on risk and remediation
Who It Helps:
Hackers: Real-time guidance that improves accuracy and shortens time to bounty
Analysts: Cleaner, structured reports that cut manual triage work
Customers: Faster delivery and more consistent, high-signal findings
Learn more on the Report Assistant doc!
New Custom Benchmark Filters
We’ve expanded our custom benchmarking capabilities by adding new filters that let customers refine their comparisons and insights with much greater precision!
What We Did:
Across all four releases, we introduced four new custom benchmark filters: Program Type, Clear Verified, ID Verified, and Gateway. We applied them to the following Custom Benchmarking dashboards and made them available for relevant charts on the Executive Dashboard:
Submissions Charts
All four Response Efficiency Charts
Bounty Table Benchmarking Charts (this work also added Custom Benchmarking to these charts)
Why We Did It:
Customers want benchmarks that reflect the real conditions of their programs. Until now, they could not filter comparisons this precisely, which often made results feel less actionable. These new filters make it possible to:
Run apples-to-apples comparisons (e.g., Gateway programs with ID verification).
Improve trust and confidence in the insights they’re getting from our platform.
Who It Helps:
CISOs, Program Managers, and Analysts who need relevant benchmarks for decision-making.
How to Use It:
Within any Custom Benchmarking dashboard (Submissions, Response Efficiency, or Bounty Table):
Select one or more filters, including the new Program Type, Clear Verified, ID Verified, and Gateway options.
Apply the custom benchmark to refine your benchmarking dataset.
Review charts and insights that now reflect programs more similar to yours.
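To make the filter semantics concrete, here is a hypothetical sketch of how the four new filters could narrow a benchmarking cohort. The program records, field names, and matching logic are invented for illustration and do not reflect the platform's actual API:

```python
# Hypothetical program records; field names are illustrative only.
programs = [
    {"name": "A", "program_type": "private", "clear_verified": True,
     "id_verified": True, "gateway": True},
    {"name": "B", "program_type": "public", "clear_verified": False,
     "id_verified": True, "gateway": False},
    {"name": "C", "program_type": "private", "clear_verified": True,
     "id_verified": True, "gateway": True},
]

# The four new filters, expressed as simple key/value criteria.
filters = {"program_type": "private", "id_verified": True, "gateway": True}

# Keep only programs matching every selected filter: an
# apples-to-apples cohort (e.g., Gateway programs with ID verification).
cohort = [p for p in programs
          if all(p.get(k) == v for k, v in filters.items())]

print([p["name"] for p in cohort])  # -> ['A', 'C']
```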
Learn more on the Dashboard doc!


