OpenAI Launches New Grants for AI Cyber Research

OpenAI Launches New Grants for AI Cyber Research OpenAI Launches New Grants for AI Cyber Research
IMAGE CREDITS: GETTY IMAGES

OpenAI has stepped up its security efforts, rolling out major updates across its Cybersecurity Grant Program, bug bounty offerings, and internal AI defense strategies. Announced on March 26, the company aims to not only bolster the safety of its models but also encourage researchers to help tackle emerging cybersecurity threats in the age of AGI.

Originally launched two years ago, OpenAI’s Cybersecurity Grant Program has already supported 28 research projects. Now, it’s opening the door to even more ideas. The company is inviting fresh proposals across a wider range of focus areas—software patching, model privacy, incident detection and response, security infrastructure integration, and a fast-emerging area called agentic security, which deals with the unique risks associated with autonomous AI agents.

To help researchers hit the ground running, OpenAI will offer microgrants in the form of API credits. These are designed for quickly testing out novel cybersecurity concepts without the overhead of full-scale funding. The goal? Speed up experimentation and innovation at the grassroots level.

On the bug bounty front, OpenAI is making some noise. The maximum payout for top-tier, critical discoveries is now $100,000—five times higher than its previous limit. That change signals a serious shift in how OpenAI wants to attract top security talent. Launched in April 2023 on the Bugcrowd platform, OpenAI’s bounty program has so far rewarded 209 valid submissions. It’s not just about catching bugs—it’s about building trust and ensuring the integrity of OpenAI’s systems at scale.

Michael Skelton, VP of Operations at Bugcrowd, praised OpenAI’s early and ongoing commitment. “They launched with one of the strongest public bug bounty programs we’ve seen. Their proactive approach has driven sustained interest and high-quality contributions from the community.”

In a move to draw in even more researchers, OpenAI will roll out limited-time bounty bonuses. These promotions will reward participants who submit qualifying reports during specific windows, particularly in high-priority categories.

The push for more robust security also comes at a time when AI rivals have faced harsh scrutiny. “The bigger bounties show OpenAI means business,” said Stephen Kowski, field CTO at SlashNext Email+ Security. “Competitors like DeepSeek have been caught off guard by major security lapses. OpenAI is signaling it wants to prevent those same failures by incentivizing elite talent to find issues before bad actors do.”

But bug bounties are just one part of a broader, layered defense strategy OpenAI is now pursuing. As part of a new internal initiative, the company is doubling down on AI-powered defenses and bringing in outside help. One major partnership includes SpecterOps, a security firm specializing in continuous red teaming—an aggressive method for stress-testing systems against real-world threats.

Other moves include new tactics to block prompt injection attacks, expanded hiring of in-house security engineers, and upgrades to the core security infrastructure supporting OpenAI’s AGI efforts. All of this reflects the growing pressure on AI companies to stay ahead of sophisticated adversaries and mitigate risks before they spiral.

With these updates, OpenAI is positioning itself not just as a builder of next-gen AI, but also as a leader in securing it.

Share with others

Keep Up to Date with the Most Important News

By pressing the Subscribe button, you confirm that you have read and are agreeing to our Privacy Policy and Terms of Service

Follow us