Terminology Update: Collectives to Businesses
Collectives are now called Businesses! We updated the leaderboard and SupportApp to match the new terminology. This keeps platform language aligned with the upcoming Commercial Community Members (CCM) terms, targeted for implementation in July 2026, and creates a more consistent experience for customers and internal teams.
What we did:
Updated the platform leaderboard to show Businesses instead of Collectives.
Updated the SupportApp tooling to reflect the same terminology change.
Why we did it:
To align product language with the upcoming CCM terms.
How to use it:
No action is required. Users will now see Businesses anywhere the leaderboard label previously read Collectives.
However, if you know of a researcher account that should be, or wishes to be, marked as a Business account, please have them submit a ticket via the support portal.
🎉 Agentic Prompt Injection Testing 🎉
We’re excited to introduce Agentic Prompt Injection Testing, the first capability in our agent-driven testing strategy for Security for AI! Initially available through AI Red Teaming and LLM Application Pentesting, this capability validates whether AI systems can truly be exploited under real adversarial conditions, producing reproducible attack traces with evidence-backed findings.
What we did:
Built an agentic testing workflow that can be run as a sidecar execution in AI red teaming and LLM application pentests. It executes structured, goal-driven, multi-turn prompt injection attacks. It tests both direct and indirect injection vectors, including document pipelines and third-party content ingestion, and exercises tool invocation chains and agent delegation workflows to simulate real-world attack paths. A sketch of the core loop follows below.
Outcomes include reproducible attack traces, reconnaissance and structured test plans with indicators, and severity-backed findings mapped to industry frameworks, including the OWASP Top 10 for LLM (2025), MITRE ATLAS, the EU AI Act, and NIST AI RMF.
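To make the workflow concrete, here is a minimal sketch of a goal-driven, multi-turn injection loop. Everything in it (AttackTrace, send_to_target, detect_success) is a hypothetical illustration under assumed interfaces, not the product's actual API:

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class AttackTrace:
        # A reproducible record of one attack attempt: the goal, each turn, and the outcome.
        goal: str
        turns: list[tuple[str, str]] = field(default_factory=list)  # (attack prompt, model reply)
        success: bool = False

    def run_injection(
        goal: str,
        attack_prompts: list[str],
        send_to_target: Callable[[str], str],        # one conversational turn against the target system
        detect_success: Callable[[str, str], bool],  # evidence check: did the reply achieve the goal?
    ) -> AttackTrace:
        # Drive structured, multi-turn attack prompts at the target and record the full trace.
        trace = AttackTrace(goal=goal)
        for prompt in attack_prompts:
            reply = send_to_target(prompt)
            trace.turns.append((prompt, reply))
            if detect_success(goal, reply):
                trace.success = True
                break
        return trace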
Why we did it:
Prompt injection remains the most common and persistent failure mode in production AI systems.
Existing tools often flag suspicious prompts or rely on static test cases, without proving real exploitability.
Customers need system-level adversarial validation that answers “Can this AI system be exploited in production?”, not just “Is this prompt suspicious?”
Who it helps:
Security teams responsible for LLM applications who need to scale coverage for prompt injection attack vectors
Organizations deploying AI systems seeking to close the gap between AI adoption and trustworthiness
How to use it:
Agentic Prompt Injection Testing is available for a limited time as a free, opt-in capability within AI Red Teaming and LLM Application Pentests, currently delivered as a sidecar run.
AI Red Teaming Enhancements
This release introduces key enhancements to the AI Red Teaming product, now fully established as a first-class engagement type within the platform. We’ve also expanded taxonomy coverage with two new AI weakness clusters, OWASP Top 10 for LLM (2025) and OWASP Top 10 for Agentic Applications, and added support for mapping findings to major industry frameworks (NIST AI RMF, MITRE ATLAS, EU AI Act, and ISO 23894) in the summary report.
What we did:
In this release, we:
Introduced AI Red Teaming as a standalone engagement type (previously represented under Challenges).
Added two new AI weakness clusters:
OWASP Top 10 for LLM (2025)
OWASP Top 10 for Agentic Applications
Enabled framework mappings in AI Red Teaming summary reports (see the illustrative sketch after this list), including:
NIST AI RMF
MITRE ATLAS
EU AI Act
ISO 23894
CWE
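For illustration only, a finding with framework mappings might be represented along these lines; the field names and mapping identifiers here are hypothetical, not the platform's actual schema:

    # Illustrative only: field names and mapping identifiers are hypothetical.
    finding = {
        "title": "Indirect prompt injection via ingested document",
        "severity": "high",
        "weakness_cluster": "OWASP Top 10 for LLM (2025): LLM01 Prompt Injection",
        "framework_mappings": {
            "NIST AI RMF": ["MEASURE 2.7"],
            "MITRE ATLAS": ["AML.T0051 LLM Prompt Injection"],
            "EU AI Act": ["Article 15"],
            "ISO 23894": ["Risk treatment guidance"],
            "CWE": ["CWE-1427"],
        },
    }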
Why we did it:
Elevate AI Red Teaming to a first-class product experience, improving clarity and differentiation from Challenges.
Align vulnerability classification with latest industry standards and emerging AI security risks.
Help customers map findings to regulatory and compliance frameworks, reducing manual effort and accelerating risk management and remediation.
Who it helps:
All AI Red Teaming customers
Security and compliance teams aligning AI findings with regulatory frameworks
Organizations seeking standardized, industry-aligned AI risk reporting
How to use it:
All enhancements are automatically available within AI Red Teaming engagements, improving the quality and consistency of deliverables without additional configuration.
Alignment to Benchmark Date Ranges in Analytics
Benchmark values on supported Analytics charts now match the date range the customer selects in the chart. Previously, the benchmark for the current month, quarter, or year showed the previous completed period instead; that is, benchmark results were off by one month, quarter, or year. After this release, customers will see benchmark values for the actual period they are viewing.
What we did:
We updated benchmark behaviour for supported Analytics charts so the benchmark now reflects the same time period as the chart (a sketch of the period logic follows below).
Before this change: if a customer viewed the current month/quarter/year, the benchmark showed the previous month/quarter/year
After this change: if a customer views the current month/quarter/year, the benchmark shows the current month/quarter/year
Note: For the current active time period, benchmark values may change over time as underlying data updates.
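As a minimal sketch of the alignment, using months as the example period (the helper names are hypothetical, not platform code):

    from datetime import date

    def previous_month(anchor: date) -> date:
        # Shift a month anchor back one month (the old, off-by-one behaviour).
        year, month = (anchor.year - 1, 12) if anchor.month == 1 else (anchor.year, anchor.month - 1)
        return date(year, month, 1)

    selected = date(2026, 2, 1)               # customer selects February 2026 in the chart
    old_benchmark = previous_month(selected)  # old behaviour: benchmark came from January 2026
    new_benchmark = selected                  # new behaviour: benchmark matches February 2026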
Why we did it:
The previous behaviour was confusing because the benchmark did not align with the timeframe shown in the chart, and when looking at an active time period, it did not provide a like-for-like comparison. This update makes the benchmark easier to understand and more accurate.
Who it helps:
Customers using Analytics benchmarks to compare their program against other HackerOne programs.
How to use it:
In supported Analytics charts, view the platform benchmark or create a custom benchmark via the Explore view; both now match the selected time period.
Documentation:
Easier-to-Read Chart Legends in Dashboard Charts and Explore
Summary:
We improved the readability of chart legends in dashboards and Explore views. These updates give customers more space to focus on the chart while making full legend details easier to view and share in exports and screenshots.
What we did:
Simplified chart legend placement and layout by reducing visual clutter, so charts are easier to read. We then added a simple way to view the full legend when needed. Exports will always include the full legend.
Why we did it:
To help customers read charts more easily and support a common customer need to share charts in screenshots and exports.
Who it helps:
Customers using dashboards and Explore, and those sharing results with internal teams and stakeholders.
How to use it:
This is now automatically available on all charts that previously had a right-hand legend (for example, Submissions by Asset).
In the dashboard view, click the View all option to see a pop-up scroll list of the legend values.
In the chart view, expand the legend when more detail is needed by clicking Show all to populate all legend values below the chart.
Exporting the chart will include the full legend.
New Submissions Signal Chart with Custom Benchmarks
A new Submissions Signal time-series chart shows trends in the valid submission rate, with peer benchmarking.
What we did:
Added a new chart called Submissions Signal to the Submissions Dashboard. The chart shows the valid submission rate (valid submissions divided by total submissions, multiplied by 100). The line chart includes Current timeframe, Previous year, and Platform benchmark lines, and supports custom benchmarks. The date used is the report's submission date.
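As a worked example of the rate formula (a hypothetical helper, not platform code):

    def submission_signal(valid: int, total: int) -> float:
        # Valid submission rate: valid submissions as a percentage of all submissions.
        return 100.0 * valid / total if total else 0.0

    submission_signal(42, 120)  # -> 35.0, i.e. 42 valid out of 120 total submissions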
Why we did it:
Signal rate supports conversations about program targeting, since it separates submission volume from quality outcomes.
Who it helps:
Customers managing submission quality and triage load.
How to use it:
The chart is available in Analytics on the Submissions Dashboard. To set up custom benchmarks:
Click into the Submissions Signal chart.
Scroll down to the Benchmarks section of the page and click Add benchmark.
Name your benchmark, select your measurement, and choose a line colour for the chart.
Add filters by clicking Add filter.
When you have finished, click Save.
New Median Bounty Rewarded Chart with Custom Benchmarks
A new Rewards Dashboard chart, called Median Bounty Rewarded, has been added, showing median total reward trends with peer benchmarking. Total rewards are made up of BBP & Challenge rewards and bonuses, but do not include Pentests or retests.
What we did:
We shipped a new time-series chart called Median Bounty Rewarded with Current timeframe, Previous year, and Platform benchmark lines. The chart uses the reward date as the date point. Total reward equals bounty plus bonus, excluding retest payments. The chart applies to BBP and Challenge only and excludes Pentests. The Platform benchmark reflects a median-of-medians aggregation across organizations, with the ability to add custom benchmarks.
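For illustration, the total-reward and median-of-medians calculations might look like this (hypothetical helpers, not platform code):

    from statistics import median

    def total_reward(bounty: float, bonus: float) -> float:
        # Total reward = bounty + bonus; retest payments are excluded.
        return bounty + bonus

    def platform_benchmark(rewards_by_org: dict[str, list[float]]) -> float:
        # Median-of-medians: take each org's median total reward, then the median of those medians.
        per_org_medians = [median(rewards) for rewards in rewards_by_org.values() if rewards]
        return median(per_org_medians)

    platform_benchmark({
        "org_a": [500.0, 1200.0, 800.0],  # median 800
        "org_b": [300.0, 400.0],          # median 350
        "org_c": [2000.0],                # median 2000
    })  # -> 800.0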
Why we did it:
Median payout trends give a stable view of reward strategy and competitiveness, and they support business review narratives without reliance on Looker benchmarking.
Who it helps:
Customers managing bounty strategy and program competitiveness.
How to use it:
The chart can be found in Analytics on the Rewards dashboard, appearing next to the Total rewards chart.
To set up custom benchmarks:
Click into the Median bounty rewarded chart.
Scroll down to the Benchmarks section of the page and click Add benchmark.
Name your benchmark, select your measurement, and choose a colour for the line on the chart.
Add filters by clicking Add filter.
When you have finished, click Save.