Good Faith Security Research empowers us all to build a safer internet. Researchers who ethically disclose the vulnerabilities they find, and the organizations that responsibly act upon their research, should be able to do so without threat of legal action or regulatory sanction.
Unfortunately, many existing anti-hacking laws are outdated and overly broad, raising the possibility that even Good Faith Security Researchers engaging in ethical vulnerability disclosure could face legal liability. Further, uncertainty exists about what exactly constitutes a reportable “data breach” under some privacy laws.
This lack of clarity in the law makes it essential that any organization engaging the researcher community make a clear, unambiguous statement that it considers Good Faith Security Research (see definition below) to be authorized activity that is protected from legal action by that organization. A comprehensive statement authorizing Good Faith Security Research may also help differentiate independent validation from data breaches under some privacy laws. This type of statement is often referred to as “safe harbor.”
As the leader in Continuous Threat Exposure Management and the host of the world’s largest community of researchers, HackerOne provides two safe harbor statements. These Safe Harbors establish clear authorization frameworks for both traditional security research and AI system research. HackerOne believes safe harbor is a necessary first step for any vulnerability disclosure, bug bounty, or AI security testing program:
Gold Standard Safe Harbor (GSSH) - A long-standing framework that protects researchers performing Good Faith Security Research on traditional systems such as web applications, APIs, infrastructure, and software.
AI Research Safe Harbor (AI RSH) - A new safe harbor that extends protections to Good Faith AI Research. It covers testing behaviors unique to AI systems, such as probing model behavior, unintended outputs, safety bypasses, or robustness issues.
Programs may adopt either Safe Harbor on its own or enable both together.
How to Enable Safe Harbors (Program Owners & Admins)
Safe Harbors can be enabled in the Program Settings under:
Customizations → Overview → Safe Harbor
You will see two independent options:
Gold Standard Safe Harbor - Enabled by default for all new programs. If you wish to opt out of this safe harbor, please contact your CSM.
AI Research Safe Harbor - Select Yes (Recommended) to adopt it. Once enabled, the option is locked and can only be disabled by your CSM.
When you enable a Safe Harbor:
Your Program Highlights page displays a Safe Harbor badge visible to Community Members with a link to the guidelines
The Safe Harbor tab displays the relevant guidelines text for each Safe Harbor you adopt
Adopting a Safe Harbor does not change a program's scope. Scope remains defined by the assets the program explicitly includes.
What is safe harbor?
A “safe harbor” is a provision that offers protection from liability in certain situations, usually when certain conditions are met. In the context of security and AI research and vulnerability disclosure, it is a statement from an organization that security and AI researchers engaged in Good Faith Security Research and ethical disclosure are authorized to conduct such activity and will not be subject to legal action from that organization.
What is Good Faith Security Research?
HackerOne considers Good Faith Security Research to be accessing a computer solely for purposes of good-faith testing, investigation, and/or correction of a security flaw or vulnerability, where such activity is carried out in a manner designed to avoid any harm to individuals or the public, and where the information derived from the activity is used primarily to promote the security or safety of the class of devices, machines, or online services to which the accessed computer belongs, or those who use such devices, machines, or online services. Those engaged in Good Faith Security Research are sometimes called “bona fide” researchers or “white hat” or “ethical” hackers.
Security research not conducted in good faith is not covered by safe harbor. For example, research conducted for the purpose of extortion is not in good faith. To the extent possible, researchers should seek to clarify the status of conduct that is borderline, that they think may be inconsistent with Good Faith Security Research, or that is unaddressed in the program's guidelines before engaging in it. If there is a disagreement over whether or not a given piece of research is in good faith, organizations and researchers should look to common security research best practices.
As of January 1, 2026, the GSSH, including the concept of Good Faith Security Research, is aligned with recent legal and regulatory developments and current best practices represented by (among others):
Recommendations in the EU Agency for Cybersecurity (ENISA) report on Coordinated Vulnerability Disclosure Policies in the EU (April 2022)
Recommendations in the Organization for Economic Co-operation and Development (OECD) paper Encouraging Vulnerability Treatment: Overview for Policy Makers (February 2021)
As of January 1, 2026, the AI RSH is aligned with recommendations to explicitly expand the concept of safe harbor to AI systems research, including:
Senate Intel Chairman Urges U.S. Copyright Office to Expand Good-Faith Security Research Exemption to Include AI Safety
Recommendations to exempt AI research from DMCA Sec. 1201
In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI
A Safe Harbor for AI Researchers: Promoting Safety and Trustworthiness Through Good-Faith Research
Why is safe harbor fundamental to security research and vulnerability disclosure?
Safe harbor is a baseline requirement to engage with researchers in good faith. Outdated and overly broad anti-hacking laws create uncertainty. By creating a clear statement protecting Good Faith Security Research from legal action, organizations take the first step toward engaging with the researcher community.
A short, broad, easily-understood safe harbor statement provides researchers with assurance and a binding commitment that they will not face legal risk merely for making valuable contributions to an organization’s security.
Safe harbor is recommended by the U.S. Department of Justice in the Framework for a Vulnerability Disclosure Program for Online Systems and by the Cybersecurity and Infrastructure Security Agency (CISA) in the Vulnerability Disclosure Policy Template for U.S. government agencies. It is championed by legal and infosec experts industry-wide in projects like disclose.io, and is already provided by all top-tier security programs and by most organizations running a vulnerability disclosure program. Examples of top-tier security programs across a variety of industries providing safe harbor include the UK Ministry of Defence, General Motors, John Deere, and the United States Postal Service.
Does safe harbor help protect organizations?
Yes! In addition to the general benefits of creating a more solid foundation for researchers’ engagement with you, a clear, unambiguous authorization statement may help clarify the distinction between access during Good Faith Security Research and a reportable data breach under some privacy laws. Uncertainty exists about what exactly constitutes a reportable “data breach” under some privacy laws, and many privacy laws recognize a distinction between authorized and unauthorized activity.
This applies to both traditional systems (GSSH) and AI systems (AI RSH).
Beyond protection from legal action, what are additional important elements of a leading-edge safe harbor statement?
First, safe harbor should apply by default to all Good Faith Security Research ethically disclosed to an organization. Tying safe harbor to the acceptance of certain terms or guidelines (often at the time of vulnerability submission) can create uncertainty about the status of Good Faith Security Research undertaken prior to the submission of a vulnerability report. Influenced by guidance from the U.S. Department of Justice and other regulators, multinational organizations, and industry partners, a leading-edge safe harbor statement should unambiguously protect all research that meets the conditions of Good Faith Security Research and ethical disclosure.
Second, whether or not a particular action is inconsistent with Good Faith Security Research should not be unilaterally determined by an organization. Good Faith Security Research is a standard that should be applied as consistently as possible, and a researcher or an organization’s initial instinct about a particular action may not accurately reflect the standard. Organizations and researchers should seek to mutually agree on whether a particular action constitutes Good Faith Security Research. If the two parties are unable to agree, they should look to best practices and standards.
Finally, safe harbor may not be removed retroactively. Once safe harbor applies to a particular instance of Good Faith Security Research, there should not be a threat that it might be removed if there is later a disagreement between the researcher and the organization. Obviously, this does not apply if there is clear evidence of bad faith activity, though in that case safe harbor would not have been applicable in the first place.
Why is the standardization of safe harbor statements an important goal?
Safe harbor is a baseline requirement for engaging in good faith with the security research community. Just as standardized licensing models such as Creative Commons and widely used open source licenses have strengthened creative and open source ecosystems, standardized safe harbor statements offer similar benefits for security research. Standardization reduces the burden on researchers to interpret many different guidelines and increases confidence when conducting ethical hacking and vulnerability disclosure. It also signals an organization’s collaborative posture, which is a strong indicator of cybersecurity maturity. In addition, standardization lowers barriers to entry for both researchers and organizations, since legal and guidelines concerns around vulnerability disclosure are largely consistent and well addressed by established safe harbor language. Finally, adopting a well-tested safe harbor improves clarity and consistency. In the past, unclear or contradictory statements have created confusion and, in some cases, risked conflict with applicable laws or regulations, undermining the protections they were meant to provide.
Is the adoption of the Safe Harbor statement a big change?
No. The updated language reflects new guidance from regulators and industry experts on Good Faith Security Research and an emerging consensus that good-faith AI systems research should be broadly protected, even if it does not constitute traditional security research. It represents a renewed push to further standardize safe harbor for vulnerability disclosure programs, but we also believe that many programs' practices already align with the intention of the Safe Harbor.
What if we don’t want to adopt a Safe Harbor statement?
We strongly encourage the use of the Safe Harbor statements. As adoption becomes widespread, researchers, your customers, and potentially even regulators will come to expect the protections and cybersecurity maturity that come with adopting and adhering to a safe harbor statement. Researchers in particular may favor programs that offer industry best practice protections, represented on HackerOne by the program-level badge on program pages.
Legal Disclaimer
This does not constitute legal advice. The materials available on this website are for informational purposes only and not for the purpose of providing legal advice, and the suitability of any of the information provided or the sample Safe Harbor statement may vary based on your or your organization’s circumstances. In accordance with the terms of use for HackerOne Services and Platform, your organization is wholly responsible for the contents of your organization’s program guidelines.
