
Bug Bounty Maturity Framework

A tiered guide to evaluating and strengthening your bug bounty program operations.


Overview

This framework is a guide for operational excellence in bug bounty programs, not a mandate. It answers the question programs consistently ask: "What should we be doing? Are we running a good program?"

The practices are organized across three maturity tiers (Baseline, Competitive, Exemplary) and consolidate feedback from HackerOne's Hacker Advisory Board (HAB) and Technical Advisory Board (TAB) throughout 2025, representing what researchers and customers value most in program operations.

Key Principles

This is guidance, not a checklist. There is no enforcement mechanism or mandatory adoption. Programs can adopt practices at their own pace based on priorities and resources.

Programs can mix tiers. A program might be Competitive in Communication but Baseline in Report Handling. This framework is a diagnostic tool for identifying strengths and gaps, not a strict label.

Context matters. Programs with regulatory constraints or corporate guidelines may not be able to adopt every practice as written. The goal is to achieve each practice's intent in a way that works for your organization.

Tier definitions:

  • Baseline (Foundations for Program Success): The operational foundation researchers expect when engaging with a program.

  • Competitive (Differentiation That Attracts Top Talent): Practices that make programs stand out and get recommended by researchers.

  • Exemplary (Aspirational Best-in-Class): Exceptional practices requiring significant resources; recognized, not expected.

All practices are grounded in validated HAB/TAB feedback and aligned with HackerOne Platform Standards.

1. Communication & Transparency

Response Timelines

First Human Response

Baseline: 3-Business Day First Human Response Target

Once assigned for program review, programs commit to providing an initial human response to new vulnerability reports within three business days of submission, acknowledging receipt and setting expectations for next steps. An automated acknowledgement, in the absence of substantive action or engagement, does not satisfy this requirement; researchers should receive substantive engagement from a security team member.

Competitive: 2-Business Day First Human Response Target

Once assigned for program review, programs commit to providing initial human responses to new vulnerability reports within two business days, with accelerated response for Critical severity submissions. Automated acknowledgements do not satisfy this requirement; researchers should receive substantive engagement from a security team member.

Follow-up Responses

Baseline: 2-Business Day Response to Researcher Updates

When researchers provide requested information or respond to program questions (after a Needs More Information (NMI) status), programs respond within two business days to maintain communication momentum.

Competitive: 1-Business Day Response to Researcher Updates

When researchers provide requested information after NMI status, programs respond within one business day to maintain the momentum of rapid communication.

Severity Transparency & Justification

Severity Communication

Baseline: Proactive Severity Justification

Programs provide clear, detailed justifications for all severity decisions, whether agreeing with or changing the researcher's assessment. Explanations include the impact evaluation criteria applied, the severity framework used, and the business context considered, ensuring shared understanding even when assessments align.

Competitive: Collaborative Severity Discussion

Programs engage researchers in collaborative discussion when severity assessments differ or when decisions involve subjective judgment. Teams proactively explain their reasoning framework, edge-case considerations, and specific business context while seeking to understand the researcher's perspective and working toward a shared understanding rather than unilateral determinations.

Baseline example:

"We've assessed this as High severity rather than Critical. While the SQL injection allows data extraction, the affected endpoint only exposes non-sensitive metadata (user preferences, UI settings) rather than PII or authentication data. Our CVSS score reflects the limited confidentiality impact: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N (5.3 Medium adjusted to High given direct database access)."

Competitive example:

"We'd like to discuss the severity assessment. We've initially scored this as High, but we see your argument for Critical. Our reasoning: the endpoint requires authentication (PR:L), limiting the attacker pool. However, we understand your concern about the potential for privilege escalation. Could you share more about the escalation path you envision? We want to ensure we're evaluating the full attack chain."

Triage & Bounty Timelines

Triage Speed

Baseline: Defined and Published Triage Response Target

Programs establish and publicly communicate a specific timeline commitment to complete initial triage (determining whether a report represents a legitimate vulnerability).

Competitive: Aggressive Triage and Bounty Timelines

Programs establish triage and bounty timelines that exceed baseline expectations, with triage decisions measured in days rather than weeks and accelerated bounty processing.

Baseline: "Our triage Response Target: Critical - 3 business days, High - 5 business days, Medium/Low - 10 business days. These timelines begin when we have sufficient information to reproduce the issue."

Competitive: "Our triage targets: Critical - 24 hours, High - 48 hours, Medium - 5 days, Low - 10 days. For Critical submissions, our on-call engineer is paged immediately."

Response Target Accountability

Baseline: Response Target Transparency and Communication

Programs proactively acknowledge when Response Targets are missed or at risk of being missed, communicate delays, and set new expectations with researchers. When programs cannot meet published timelines, they notify researchers in advance when possible and provide context for delays, ensuring researchers understand the status rather than being left wondering why commitments weren't met.

Competitive: Discretionary Goodwill Gestures for Delays

When extensive delays occur due to factors within the program's direct control, programs may extend discretionary goodwill gestures, such as access to future campaigns, upgraded test accounts, positive testimonials, bounty increases, bonus payments, or other recognition mechanisms, applied on a case-by-case basis.

Baseline: "We won't meet our 14-day triage response target on your report due to an influx of new reports. We expect to complete triage by [date]. We apologize for the delay - your report remains a priority."

Competitive: "This report exceeded our Response Target by 10 days. As a goodwill gesture, we’ve taken a moment to write a review about the impact of your report on your profile. Thank you for your patience."

Visibility & Status Updates

Update Frequency

Baseline: Periodic Status Updates on Active Reports

Programs provide regular status updates to researchers on their active reports, particularly for reports that remain open for extended periods, explaining any delays and setting expectations for next steps.

Competitive: Proactive or Researcher-Requested Status Updates

Programs keep researchers informed through one of two approaches: (1) proactive updates at meaningful intervals based on report activity and severity, or (2) a responsive model where researchers can request status updates and receive substantive responses within two business days. Updates should provide honest visibility into the current status and, where known, expected timelines. Programs select the model that best suits their operational capacity, while ensuring researchers are never in the dark without a path to information.

Baseline: "Status update on report #12345: The vulnerability has been confirmed and assigned to our engineering team. Current status: fix in development, targeting next sprint. We'll update you when the fix reaches staging."

Competitive example: "Update on your report: The fix is in active development with our platform team. We're targeting our late February release cycle, though this may shift depending on QA. I'll follow up once we have a confirmed deployment date, or feel free to ping me if you'd like a status check before then."

Direct Communication Channels

Platform Communication

Baseline: Responsive Platform Communication

Programs maintain responsive, professional communication through HackerOne's report comments, ensuring researchers can reliably reach program teams. Programs respond to researcher questions and comments in a timely manner, providing substantive answers when possible or setting clear expectations for when information will be available.

Competitive: Direct Security/Engineering Team Access

Programs may offer direct communication channels (e.g., Slack, dedicated email, scheduled calls) to experienced, trusted researchers for complex submissions where real-time collaboration materially accelerates resolution. Programs retain discretion to pause direct communication or route discussions to HackerOne mediation if interactions become unproductive, contentious, or overly time-consuming.

2. General Best Practices

Consistency & Quality

Consistency

Baseline: Consistent Report Handling Within Program

Programs handle similar reports consistently, applying a uniform logic for severity assessment, deduplication decisions, bounty amounts, and communication style across all researchers and over time.

Competitive: Multi-Program Consistency Standards

Organizations running multiple bug bounty programs establish consistent structures for scope definition, bounty ranges, guidelines language, and operational practices across all programs.

Legal & Disclosure Framework

Disclosure Guidelines

Baseline: Coordinated Disclosure Guidelines with Private Option

Programs establish and communicate clear guidelines on how and when vulnerability information can be disclosed. Programs define a coordinated disclosure timeline (typically 90 days from validation or from the deployment of the fix, whichever comes first) during which they commit to remediation. Public programs offer a mutually agreed-upon redacted disclosure option and respect researchers' right to disclose responsibly after the coordinated period expires. When public disclosure is not possible, programs offer alternative recognition such as Hall of Fame listings, public thanks, or awards to acknowledge researcher contributions.

Competitive: Disclosure Rate Commitment

Programs commit to supporting public disclosure on a meaningful percentage of resolved reports, acknowledging that some disclosures may be blocked by legal, regulatory, or operational constraints. Programs communicate their disclosure philosophy transparently and work with researchers to find disclosure paths (full, redacted, or anonymized) wherever possible. For cases requiring sanitization, programs offer a streamlined review process rather than creating friction that discourages disclosure.

Legal Protection & Privacy

Baseline: Standard Safe Harbor & Privacy

Programs make a short, explicit safe-harbor promise: security research conducted in compliance with the program is treated as authorized; the program will not initiate legal action for good-faith, guidelines-consistent testing, will assist and advocate for the researcher if a third party takes legal action, and will protect researcher identity and personal data unless disclosure is authorized.

Competitive: Gold Standard Safe Harbor

Stronger, affirmative legal protections and guidance, including a clear definition of Good Faith Security Research and an explicit waiver of conflicting TOS/AUP restrictions. See HackerOne Gold Standard Safe Harbor: https://hackerone.com/security/safe_harbor

Security Contact Discoverability

Contact Information

Baseline: Public-Facing Vulnerability Disclosure Program

Programs maintain a publicly accessible Vulnerability Disclosure Program (VDP) as the clear entry point for security researchers. All vulnerability reports should be directed through the VDP rather than security team email addresses, which can bypass triage workflows and create fragmented communication channels. This ensures reports enter the proper pipeline and receive consistent handling.

Competitive: Security.txt Deployment

Programs deploy RFC 9116-compliant security.txt files on all in-scope domains, making security contact information (including a link to the program) machine-readable and easily discoverable directly on assets, demonstrating a commitment to industry standards.
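For illustration, a minimal RFC 9116-compliant file, served at /.well-known/security.txt over HTTPS, might look like the following (all URLs are placeholders; Contact and Expires are the two fields the RFC requires):

    Contact: https://hackerone.com/examplecorp
    Contact: mailto:security@example.com
    Expires: 2026-06-30T00:00:00Z
    Canonical: https://example.com/.well-known/security.txt
    Policy: https://example.com/security-policy
    Preferred-Languages: en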

Collaboration & Testing Support

Testing Guidelines

Baseline: Collaboration & Testing Requirements Guidelines

Programs clearly document any special testing requirements (such as custom headers, rate-limiting exemptions, and notification procedures) to enable effective vulnerability research without disrupting operations.

Competitive: Testing Infrastructure & Provisioning

Programs offer paid accounts (or cost reimbursement), premium features, and testing credentials (API keys, enterprise licenses) to eliminate financial barriers, along with sandbox or staging environments for safe validation. Staging supplements but does not replace production testing, so that researchers retain access to production where needed for feature chains or production-only functionality. Programs verify credentials regularly and document any differences between staging and production environments.

Collaboration Features

Baseline: Platform Collaboration Feature Enabled

Programs enable HackerOne's collaboration features, allowing researchers to work together on complex findings and share recognition appropriately when teamwork leads to discoveries.

Competitive: Active Collaboration & Community Engagement

Programs actively encourage researcher collaboration and community involvement through multiple channels, including financial incentives (collaboration bonuses, team-friendly bounty structures), facilitation efforts (introducing researchers with complementary skills and coordinating team efforts on complex issues), recognition of collaborative achievements, and community partnerships. Programs partner with researchers for meetups and conferences, leverage HackerOne's brand ambassador program to facilitate Club-hosted events, and support local researcher communities. This creates a culture that values both technical collaboration on findings and broader researcher community engagement.

Researcher Engagement & Growth

Community Engagement

Baseline: Broad Researcher Engagement

Programs maintain consistent engagement with their full invited researcher community through regular program updates, announcements, and general incentive opportunities. Programs ensure all invited researchers have visibility into program changes, new scope additions, and available rewards, fostering an active and informed researcher pool.

Competitive: Strategic Focus Campaigns

Programs run strategic campaigns with specific focus areas, targeting either particular researchers (top researchers, domain specialists, emerging talent) or specific program needs (new scope coverage, vulnerability classes, high-priority assets). Programs offer enhanced incentives, exclusive access, or personalized invitations to draw focused attention to strategic priorities, such as AI security experts for new AI initiatives, mobile specialists for app launches, or expedited rewards for specific vulnerability types.

3. Security Page & Program Setup

Bounty & Reward Structure

Severity Framework

Baseline: Consistent Severity Assessment

Programs consistently use standard or clearly communicated severity frameworks, such as CVSS and HackerOne platform standards, when scoring vulnerabilities and document their scoring methodology in their guidelines. Programs use consistent notation throughout and clearly document any exceptions or alternative assessment criteria.

Competitive: Non-Subjective Bounty Tables with Examples

Programs publish bounty tables that use objective criteria (vulnerability type, severity, asset criticality) rather than subjective language, with concrete examples showing what specific vulnerabilities would receive what reward levels. Programs provide detailed bounty tables with concrete vulnerability examples at each reward level and, when using custom assessment criteria beyond CVSS, clearly document their rationale and how impact translates to rewards. Advanced programs may also share specific calculation tools, formulas, or CVSS configurations that enable researchers to predict rewards with high accuracy before submission.

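As one illustration of such a calculation tool, a program could publish a deterministic lookup that lets researchers predict rewards before submitting. The sketch below is hypothetical; the amounts, asset tiers, and multipliers are invented for illustration and are not platform defaults:

    # Hypothetical published bounty lookup: severity x asset criticality -> payout (USD)
    BASE_BOUNTY = {"critical": 20000, "high": 8000, "medium": 2500, "low": 500}
    ASSET_MULTIPLIER = {"core": 1.0, "standard": 0.6, "edge": 0.3}

    def predicted_bounty(severity: str, asset_tier: str) -> int:
        """Objective reward prediction with no subjective adjustment terms."""
        return round(BASE_BOUNTY[severity] * ASSET_MULTIPLIER[asset_tier])

    print(predicted_bounty("high", "core"))      # 8000
    print(predicted_bounty("critical", "edge"))  # 6000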

Impact Guidance

Baseline: Clear Severity Impact Guidance

Programs explicitly define the types of vulnerabilities with the highest business impact and the scenarios or attack chains most critical to the organization.

Competitive: Clear Impact-Based Payment Philosophy

Programs adopt and communicate a clear philosophy that prioritizes actual business impact over technical severity classifications when determining rewards (e.g., "Does the finding enable misuse of functionality that bypasses a business control? If yes, we pay appropriately.").

Baseline: "Highest impact: vulnerabilities affecting payment processing, authentication systems, or PII exposure. Attack chains reaching these areas from lower-severity entry points are rewarded at the impact level."

Competitive: "We pay on business impact, not just technical severity. If your finding bypasses a business control or enables fraud - even without traditional CVSS severity - we pay appropriately."

Scope & Asset Management

Out-of-Scope Guidelines

Baseline: Maintained Out-of-Scope Guidelines

Programs document what they will not reward and update the guidelines iteratively as they learn what vulnerability types, assets, or testing activities they cannot address.

Competitive: Comprehensive Out-of-Scope Guidelines

Programs proactively maintain precise out-of-scope guidelines by listing specific known out-of-scope assets rather than vague categories (e.g., actual staging URLs instead of "dev environments"), using accurate technical vulnerability terminology, and documenting known-issue assets before researchers encounter them. Programs acknowledge they may not know their complete attack surface and avoid closing reports as out of scope on assets that weren't previously listed without adding them to the guidelines for future clarity.

Scope Philosophy

Baseline: Clear Scope Definition with Accessible Assets

Programs clearly define in-scope assets and vulnerability types with unambiguous boundaries, ensuring those assets are accessible without unnecessary barriers. When additional access steps or restrictions (e.g., account provisioning, georestrictions, or other parameters) are required, they are transparently documented with clear workflows and guardrails to support equitable participation.

Competitive: See Exemplary Practice #3, Universal Asset Coverage.

Volume-Based Bugs

Baseline: Volume-Based Bug Guidelines

Programs adhere to HackerOne Platform Standards for handling volume-based and systemic vulnerabilities, and document any program-specific variations in their Security page.

Competitive: Transparent Volume-Based Assessment

Programs adhere to HackerOne Platform Standards for volume-based and systemic vulnerabilities, and proactively communicate their interpretation and application of these standards, enabling researchers to understand the assessment criteria before submitting their findings.

Documentation Quality

Baseline: Public Documentation & Resources

Programs inform researchers about existing public documentation, help centers, API documentation, and other resources that can support security research. Programs highlight relevant public materials in their Security pages and encourage researchers to reference available documentation to understand product functionality, authentication flows, and system architecture.

Competitive: Treasure Maps & Internal Documentation

Programs may provide sanitized asset guides, high-level architecture overviews, or "treasure map" artifacts that help researchers understand complex systems, authentication flows, and high-value target areas, while recognizing that this practice should be balanced against regulatory, IP, and contractual constraints.

Guidelines Clarity & Structure

Guidelines Organization

Baseline: Clear and Concise Guidelines Structure

Programs present their guidelines page with clear organization, logical sections, easy-to-scan formatting, and concise language, allowing researchers to quickly understand the requirements and expectations without confusion.

Competitive: None; a clear guidelines structure is a baseline operational requirement.

Newly Disclosed Vulnerabilities

Baseline: 30-Day Cooling Period for Public Vulnerabilities

Programs accept reports of publicly disclosed vulnerabilities (including CVEs, zero-days, and N-days) and pay rewards after a 30-day cooling period from the date of public disclosure. This allows programs time to assess widespread impact and prioritize patching before accepting reports. Programs clearly document this guideline and communicate eligibility timelines to researchers, including when reports may be resubmitted after the cooling period has expired.

Competitive: Early Submissions

For submissions that arrive before the 30-day cooling period, programs evaluate reports against documented eligibility criteria rather than accepting or rejecting them outright. Criteria may include: demonstrated impact on program assets, confirmed exploitation or proof of concept, novel exploitation methods, and first-to-report status. Programs may decline to pay for issues already under active internal remediation, while acknowledging the researcher's independent discovery. Programs clearly communicate evaluation criteria and consistently apply them across similar reports.

4. Report Handling

Payment & Recognition Practices

Payment Speed

Baseline: 14-Day Payment Response Target

Programs commit to processing bounty payments within 14 days of the triggering event (typically triage or resolution, depending on program guidelines).

Competitive: Pay on Triage

Programs process and issue bounty payments immediately upon validating a report as a legitimate vulnerability (triage), rather than waiting for remediation to complete.

CVE Processing & Recognition

Baseline: CVE Timeline Guidelines with Disclosure Coordination

Programs establish clear timelines for handling CVE requests, including the timeframe for researchers to request CVEs, the processing time for requests, and typical timelines for CVE publication. Programs coordinate CVE issuance timing with their coordinated disclosure timeline to ensure CVE IDs are available before researchers' right to disclose takes effect.

Competitive: Timely CVE Processing with Researcher Attribution

Programs combine rapid CVE processing with researcher recognition (when mutually agreed upon) by crediting researchers by name when requesting CVE identifiers, ensuring researchers receive both timely CVE assignment and professional recognition in public vulnerability databases. For programs that do not generate CVEs, equivalent recognition may include: naming researchers in patch notes, security advisories, or release notes; public Hall of Fame listings; or acknowledgement fields in public disclosures.

De-duplication Guidelines

De-duplication Logic

Baseline: Consistent Root Cause Logic

Programs apply consistent, documented logic when determining whether multiple reports represent the same root cause (duplicates) or distinct issues requiring separate fixes.

Competitive: Individual Value Assessment

Programs assess duplicate reports for unique value beyond the original submission, such as new attack vectors, additional evidence, or broader demonstrated impact. If the value provided by the duplicate submission leads to direct, actionable changes (such as implementing a new mitigation strategy, adjusting the remediation timeline, or identifying new attack vectors or additional required fixes), then the duplicate report should be compensated for the additional value it provides.

Operationally, the report retains duplicate status, with a bonus awarded for the additional value demonstrated.

Note: This applies to individual duplicates that don't constitute a systemic issue; for systemic or volume-based issues, follow Bounty Standards for Multiple Reports Highlighting Systemic Issues.

Baseline example:

Program documents its de-duplication approach: "We consider reports duplicates when they share the same root cause, meaning the same code fix would resolve both issues. Different exploitation methods (e.g., different XSS payloads) targeting the same vulnerable parameter are duplicates. Different parameters with the same vulnerability type are separate issues."

Competitive example:

"This is a duplicate of #45678, which identified a stored XSS in the comment field. While the original report demonstrated alert(1) execution, your submission shows that the same XSS can be leveraged to perform account takeover (ATO) by chaining it with a session fixation issue. Although the injection point is the same, your report demonstrates a broader impact, so we are awarding you a bonus."

De-duplication Window

Baseline: De-duplication Window Guidelines

Programs document how they handle duplicate reports for unresolved vulnerabilities that persist for extended periods. Programs acknowledge in their guidelines that extended exposure represents ongoing risk and communicate whether duplicate status may change over time for long-standing issues.

Competitive: Time-Bound De-duplication Window with Graduated Rewards

Programs establish a maximum time window during which reports can be marked as duplicates of previous submissions, and define their own severity-based thresholds.

Suggested thresholds:

  • Critical: 90+ days unresolved

  • High: 180+ days unresolved

  • All severities: 365+ days unresolved

After these windows expire, programs may either treat reports as new findings (full bounty) for clear, high-impact cases or award partial bounty (e.g., 50%) for cases with less certain impact. Programs document their approach in guidelines and consistently apply it.

Programs with long-standing known issues may publish these in their Security page as "known legacy findings" to set clear expectations and avoid duplicate reports on issues that cannot be promptly remediated.

Baseline example:

The program documents the following in its guidelines: "Duplicate reports are generally marked against the original submission regardless of time elapsed. For unresolved vulnerabilities beyond our standard remediation timelines, we may reconsider assigning duplicate status on a case-by-case basis."

Competitive example:

Program implements 90-day de-duplication window for Critical/High: "Reports are eligible for duplicate status only if the original was submitted within 90 days. After 90 days, we treat reappearances as new findings, acknowledging that extended exposure represents renewed risk. For medium to low severity, the window is 180 days. Window resets if the vulnerability regresses after being fixed."
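A minimal sketch of how the suggested windows above could be applied mechanically (the thresholds and the 50% partial-bounty figure come from this section; the function and field names are illustrative):

    from datetime import date

    # Suggested de-duplication windows from this section, in days
    DEDUP_WINDOW_DAYS = {"critical": 90, "high": 180, "medium": 365, "low": 365}

    def classify_resubmission(severity: str, original_submitted: date,
                              new_submitted: date, clear_high_impact: bool) -> str:
        """Decide how to treat a report matching a known, still-unresolved finding."""
        age = (new_submitted - original_submitted).days
        if age <= DEDUP_WINDOW_DAYS[severity]:
            return "duplicate"
        # Window expired: extended exposure is treated as renewed risk
        return "new finding (full bounty)" if clear_high_impact else "partial bounty (e.g., 50%)"

    print(classify_resubmission("critical", date(2025, 1, 1), date(2025, 6, 1), True))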

Duplicate Transparency

Baseline: Sharing Original Report Context on Duplicates

When a report is marked as a duplicate, programs share relevant metadata from the original report with the researcher who submitted the duplicate (such as the submission date, general description, etc.). Programs may anonymize or limit details when sharing would breach the original submitter's trust or reveal sensitive impact information. Properly closing a duplicate report in the HackerOne platform handles this.

Competitive: Duplicate Decision Transparency

Programs provide clear explanations of why reports are marked as duplicates, detailing the specific overlapping components, root causes, or exploitation methods that make them duplicates of earlier submissions. Rather than simply stating "this is a duplicate," programs explain the reasoning (e.g., "Both reports exploit the same authentication bypass logic in the login flow" or "This XSS uses a different payload but targets the same vulnerable parameter with the same root cause"). When the original report contains sensitive impact details, programs may provide anonymized explanations that convey the root cause without revealing specifics.

Baseline example:

"This report is a duplicate of #12345, submitted on 2024-03-15. The original report identified the same IDOR vulnerability in the /api/users/{id}/profile endpoint. Both reports exploit insufficient authorization checks on user profile retrieval."

Competitive example:

"This is a duplicate of #12345. Both reports exploit the same root cause: the absence of authorization middleware on the /api/users endpoint family. Your report uses the /documents path, while the original used /profile; however, both bypass the same checkUserOwnership() function that should validate the ownership of {id}. The fix (adding middleware to the route group) will resolve both vectors simultaneously."

Internal Duplicates

Baseline: Evidenced Internal Duplicate Decisions

Programs mark reports as internal duplicates only when they can demonstrate that the issue was known internally prior to submission. Programs provide evidence of prior awareness through methods appropriate to their organization's guidelines, such as a written statement citing the internal tracking reference and discovery date, a sanitized summary approved for external sharing, or, where permitted, a redacted screenshot. Evidence should include a clear timestamp and reference to the same root cause.

Competitive: Contextual Internal Duplicate Transparency

Beyond baseline evidence, programs explain how the researcher's report relates to the same root cause, describing the technical connection, shared vulnerable components, or why the same fix resolves both. Programs acknowledge the researcher's independent discovery.

Baseline example:

The program provides screenshot evidence: "This issue was identified in our Q3 penetration test (internal reference SEC-4521, created 2024-08-12)."

Competitive example:

"This was identified internally on 2024-08-12 (Jira SEC-4521). Both findings exploit the same root cause: insufficient validation of the JWT signature in the /api/auth endpoint. Your report demonstrated the bypass using a modified token header, while our internal finding used payload manipulation, but both

exploit the same signature verification gap, and the same fix (enforcing strict signature validation) resolves both. We want to acknowledge your independent discovery and thank you for your submission."

Regression & Retest Handling

Regression & Bypass Guidelines

Baseline: Regression & Bypass Reward Guidelines

Programs have clear, documented guidelines for rewarding both regressions (previously fixed vulnerabilities that reappear) and bypasses of resolved reports (incomplete fixes that can still be exploited with different payloads or methods). Programs treat bypasses as new vulnerability reports and pay separate bounties for each, recognizing the effort required to validate that previous remediation was unsuccessful. Programs reward at the same severity level as the original finding, ensuring fairness and incentivizing thorough validation of fixes.

Competitive: Comprehensive Bypass Bonuses & Testing Support

Programs actively encourage bypass testing by providing transparency about fix implementation (what specifically was changed, how the system works, remediation approaches used), enabling researchers to make educated decisions about testing strategies. Programs award bonuses when researchers provide comprehensive bypass information beyond basic validation, such as multiple bypass techniques, patterns revealing incomplete fix approaches, or detailed documentation that helps programs understand root causes. This combination of transparency and incentives improves overall fix quality and reduces repeated incomplete remediation attempts.

Retest Programs

Baseline: Timely Retest Turnaround

Programs conduct retests (verification that reported vulnerabilities have been fixed), whether performed internally or by the researcher, within a reasonable timeframe that respects the researcher's context. Excessive delays are communicated proactively.

Competitive: Retest Response Target Aligned with Disclosure Timeline

Programs commit to completing retests within a timeframe that supports the 90-day coordinated disclosure cycle. This ensures researchers can verify fixes and proceed with disclosure without retest delays becoming a blocker. Programs communicate retest status proactively and prioritize retests for reports approaching disclosure eligibility.

Report Lifecycle Management

Remediation Timelines

Baseline: Severity-Based Remediation Response Target

Programs establish and communicate clear service-level agreements for remediation that vary by vulnerability severity, with more aggressive timelines for critical and high-severity findings than for medium- and low-severity issues. This acknowledges that critical vulnerabilities necessitate urgent attention, while lower-severity issues can be addressed through longer remediation cycles.

Competitive: Tiered Remediation Commitment with Exception Process

Programs commit to severity-based maximum remediation timelines. These represent upper bounds on how long researchers wait for closure:

  • Critical - 90 days

  • High - 120 days

  • Medium/Low - 180 days.

Programs document an exception process for findings requiring extended remediation (e.g., architectural changes, vendor dependencies), including required evidence of progress and communication cadence with researchers.
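A minimal sketch of tracking these commitments, assuming the competitive maximums above and a simple documented-exception extension (all names are illustrative):

    from datetime import date, timedelta

    # Competitive maximum remediation timelines from this section (calendar days)
    REMEDIATION_MAX_DAYS = {"critical": 90, "high": 120, "medium": 180, "low": 180}

    def remediation_deadline(severity: str, triaged_on: date,
                             approved_exception: bool = False,
                             extension_days: int = 0) -> date:
        """Upper bound on closure; exceptions require documented evidence of progress."""
        deadline = triaged_on + timedelta(days=REMEDIATION_MAX_DAYS[severity])
        if approved_exception:
            deadline += timedelta(days=extension_days)
        return deadline

    print(remediation_deadline("high", date(2025, 1, 15)))  # 2025-05-15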

Report State Management

Baseline: Accurate Report State Tracking

Programs use Report States accurately per HackerOne platform documentation, reflecting the true status of each submission. Accurate state usage ensures that researcher reputation metrics reflect reality.

Key states:

  • Informative when reports provide valid information but don't require action (out-of-scope, known issues, accepted risk).

  • Not Applicable when reports don't contain a valid reproducible issue, or security implications aren't demonstrated.

  • Spam for invalid, incomprehensible, abusive, or harassing submissions.

  • Resolved when vulnerabilities are fixed and no longer reproducible.

Competitive: None; accurate state tracking is baseline operational hygiene.

Operational Measurement & Evidence

  • While this framework defines the standards for operational maturity, programs are encouraged to maintain internal signals that demonstrate consistent adherence to each practice. These signals are not intended as a scoring mechanism or compliance checklist, but as operational indicators that enable self-assessment, continuous improvement, and stakeholder alignment.

  • Examples of evidence include Response Target adherence metrics, documented workflows, and guidelines artifacts. Programs may choose the indicators most appropriate for their scale and operating model.

5. Exemplary Tier Practices

The Exemplary Tier represents aspirational practices that surpass those of the Competitive Tier, demonstrating best-in-class operational excellence. These practices require exceptional operational maturity, resources, or organizational capabilities and are not expected as baseline or competitive requirements.

Exemplary Practice #1: Creative Engagement Bonuses

Category: Security Page & Program Setup > Rewards & Incentives

Programs offer unexpected bonuses for creative submissions, community contributions, or exceptional researcher engagement beyond standard bounty payments (first-find bonuses, quality bonuses, exceptional write-up bonuses). These bonuses build program personality and create memorable experiences that foster emotional connection and loyalty.

Implementation Guardrails:

  • Discretionary, not expected: Bonuses are discretionary and unexpected; programs should not create entitlement by communicating bonus criteria that researchers can "game."

  • Communication: When awarding bonuses, frame them as recognition for exceptional work rather than standard practice.

Implementation requires mature baseline/competitive operations, a discretionary budget, organizational buy-in for non-standard payments, and the guardrails above.

Exemplary Practice #2: Rapid Payment for Validated Zero-Days

Category: Security Page & Program Setup > Guidelines Clarity & Structure

Programs pay rewards for newly disclosed vulnerabilities once they have been confirmed as issues and an acknowledged patch or mitigation is available. Report eligibility criteria: (1) impact on program assets is demonstrable, (2) exploitation is verified or confirmed, and (3) the program is not already actively remediating the issue. Programs do not pay for zero-days they are already addressing internally, but may acknowledge independent discovery. This demonstrates exceptional commitment to rapid vulnerability awareness while maintaining quality controls and avoiding perverse incentives. Implementation requires significant operational capacity, mature triage processes, budget flexibility, robust deduplication tracking to avoid duplicate payments for widely reported CVEs, and clear internal escalation for active remediation conflicts.

Exemplary Practice #3: Universal Asset Coverage

Category: Security Page & Program Setup > Scope & Asset Management

Programs adopt an expansive scope philosophy that includes all internet-facing assets owned by the organization. The principle: if it's on the internet and we own it, it's in scope. This requires exceptional asset inventory capabilities, organizational buy-in across all business units, and triage capacity to handle reports on any owned asset.

Did this answer your question?