HackerOne Pauses Internet Bug Bounty to Address AI-Driven Remediation Imbalance

The rapid adoption of AI in vulnerability research has significantly increased the volume of security reports, straining the capacity of open-source maintainers. In response, industry leaders are reevaluating vulnerability programs to better support triage and fund remediation efforts.

Triage Security Media Team

HackerOne recently paused new vulnerability submissions to its crowdsourced Internet Bug Bounty (IBB) program, surfacing a systemic challenge in the software industry: the growing disparity between AI-assisted vulnerability discovery and the capacity of open-source maintainers to remediate those findings.

Operating since 2013, the IBB serves as a primary vulnerability reward program for the open-source ecosystem. On March 27, HackerOne suspended new submissions, citing a significant imbalance between the volume of reported vulnerabilities and the available resources for maintainers to process and patch them.

Signal versus noise in automated reporting

"The discovery situation is changing. AI-assisted research is expanding vulnerability discovery across the ecosystem, increasing both coverage and speed," HackerOne announced. The organization noted that the balance between findings and remediation capacity has shifted substantially, requiring a reassessment of the structure and incentives of crowdsourced programs like the IBB.

Following the IBB suspension, maintainers of the open-source Node.js project paused their own security reward program due to the loss of IBB funding. Because Node.js is a volunteer-driven project, its maintainers explained, they lack an independent budget to sustain monetary rewards.

Security practitioners view this shift as a predictable outcome of integrating AI into vulnerability research. Ensar Seker, chief information security officer at SOCRadar, describes the pause as a rational correction to how vulnerability ecosystems operate under the pressure of automated generation.

"HackerOne is essentially acknowledging that the bottleneck has shifted: discovery has been industrialized by AI, but remediation capacity has not scaled accordingly," Seker says. When automated tools generate thousands of low- to medium-quality findings in a short period, volunteer maintainers with limited funding quickly reach capacity. Seker adds that the pause is an attempt to rebalance signal versus noise rather than a reduction in security commitment.

The impact on validation and triage

The increase in automated submissions has directly affected the validation process. John Morello, co-founder and chief technology officer of Minimus, notes that the rate of valid submissions dropped from approximately 15% to below 5% as triage queues filled with low-quality automated reports.

"AI-assisted hunting hasn't necessarily found more critical zero-days; instead, it's shifted the bottleneck entirely to validation, forcing triage teams to wade through thousands of plausible-sounding but non-exploitable reports," Morello says.

For open-source maintainers, this validation bottleneck results in "triage fatigue," consuming development hours to disprove hallucinated vulnerabilities. "The current bounty model unfortunately rewards quantity over depth, effectively weaponizing unpaid labor and forcing these small teams to act as a free quality assurance department for every automated scanner on the planet," Morello notes.
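The drop in valid-submission rates that Morello describes compounds the triage burden: when fewer reports are real, reviewers must wade through proportionally more noise to surface each actionable finding. A back-of-the-envelope sketch (our illustration, not from the article) makes the multiplier concrete:

```python
# Sketch of how a falling valid-report rate multiplies triage work.
# The 15% and 5% figures come from the article; everything else is
# a simplifying assumption (uniform review effort per submission).

def reports_triaged_per_valid_finding(valid_rate: float) -> float:
    """Expected number of submissions reviewed to surface one valid report."""
    return 1.0 / valid_rate

before = reports_triaged_per_valid_finding(0.15)  # ~15% valid rate
after = reports_triaged_per_valid_finding(0.05)   # below 5% valid rate

print(f"Reports reviewed per valid finding: {before:.1f} -> {after:.1f}")
print(f"Triage effort multiplier: {after / before:.1f}x")
```

Under these assumptions, a slide from 15% to 5% roughly triples the number of submissions a maintainer must review for every genuine vulnerability, even before accounting for the time spent disproving plausible-sounding but non-exploitable reports.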

Balancing discovery and remediation

HackerOne is currently evaluating new approaches with project maintainers and researchers to align incentives and ensure vulnerability discoveries lead to durable security improvements.

Trey Ford, chief strategy and trust officer at Bugcrowd, views the situation as an indicator that the industry has spent years optimizing the wrong end of the security pipeline. AI successfully compressed the time required to find vulnerabilities, but the operational challenge of a maintainer receiving 40 valid reports with limited time to respond remains unsolved.

Because AI lowers the barrier to initial discovery, raw volume no longer offers a competitive advantage for researchers. Ford anticipates that value will increasingly shift toward identifying complex logic flaws and novel sequences of actions that require human depth and contextual judgment. "The next generation of vulnerability programs may offer bonuses to researchers for bringing fixes, not just reporting vulnerabilities, and create shared pools that fund both the researcher who finds and the maintainer team that ships the patch," he says.

Reward programs originally designed around human-paced research are also depleting funds faster than anticipated. David Hayes, VP of product at FusionAuth, notes that the current model requires structural changes to remain sustainable. Programs were built for an environment where discovery was the primary bottleneck. Now that discovery is heavily automated, the bottleneck is remediation—a phase that traditional programs do not fund.

"The projects that underpin critical Internet infrastructure can't rely on volunteer labor to process AI-generated reports at scale," Hayes says. "The industry needs to figure out how to fund the fix, not just the find."


Original reporting by Jai Vijayan, a technology reporter with over 20 years of experience covering information security, data privacy, and data analytics.