Artificial Intelligence & Machine Learning,
Next-Generation Technologies & Secure Development
Bug Bounty Program Maximum Payout Increased From $20,000 to $100,000

OpenAI has launched a comprehensive cybersecurity initiative designed to bolster the resilience of its large language models by incentivizing the identification of critical vulnerabilities. This move is a response to increasing cybersecurity threats and the need for robust threat mitigation strategies.
Under the leadership of CEO Sam Altman, OpenAI has significantly raised the upper limit of its bug bounty program from $20,000 to an impressive $100,000. This enhancement, effective immediately, aims to attract cybersecurity researchers to report exceptional and differentiated critical findings. The program, which began last April, includes a special limited-time bonus incentive that doubles rewards for specific reports, offering up to $13,000 for vulnerabilities related to priority access control.
This promotional initiative, which commenced on March 26 and will continue through April 30, also revises the payout ranges for IDOR access control vulnerabilities. The minimum bounty has increased from $200 to $400, while the maximum reward has escalated from $6,500 to $13,000 for certain targeted vulnerabilities. These measures reflect OpenAI’s commitment to enhancing its security posture against potential exploits.
In addition to the bug bounty program, OpenAI has also expanded its Cybersecurity Grant Program, which has previously funded 28 research projects focusing on emerging threats such as prompt injection attacks, secure code generation, and autonomous defenses. The expanded program seeks proposals on new research areas, including software patch management, model privacy, threat detection and response, and the resilience of AI agents to sophisticated attack methodologies.
OpenAI is further collaborating with cybersecurity firm SpecterOps for a red teaming initiative aimed at simulating adversarial attacks within its corporate, cloud, and production environments. This proactive approach is critical for identifying vulnerabilities before they can be exploited by malicious actors, with continuous testing expected to yield valuable insights for fortifying AI systems against threats like prompt injection and unauthorized manipulation.
The organization is actively working with academic, governmental, and commercial entities to enhance AI capabilities in identifying and mitigating software vulnerabilities. As research outcomes are developed, OpenAI plans to share findings with relevant open-source communities, thus contributing to the broader cybersecurity landscape and its resilience.