A newly disclosed flaw in ChatGPT infrastructure, tracked as CVE-2024-27564, has already become an active attack vector as cybercriminals race to exploit it.
Security researchers at Veriti uncovered active exploitation of the server-side request forgery (SSRF) flaw, which allows attackers to redirect users to malicious websites directly from inside the ChatGPT platform. Within just one week, Veriti recorded more than 10,000 exploit attempts from a single malicious IP address.
Although the flaw carries only a medium CVSS severity score of 6.5, Veriti warns that the real-world risk is far more serious, especially as attackers waste no time leveraging it.
The vulnerability stems from a file named pictureproxy.php within ChatGPT’s infrastructure. By injecting a crafted URL into the parameter the script accepts, attackers can force the server to make unauthorized requests on their behalf.
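The underlying pattern is easy to picture. Below is a minimal Python sketch of an image-proxy endpoint with the same class of flaw: it fetches whatever URL the client supplies, without validation. The route name, parameter name, and framework here are illustrative assumptions; the actual pictureproxy.php source is not reproduced.

```python
# Minimal sketch of the SSRF pattern at issue (illustrative only; the real
# pictureproxy.php source is not reproduced here). The proxy fetches whatever
# URL the client supplies, with no validation at all.
from flask import Flask, Response, request
import requests

app = Flask(__name__)

@app.route("/pictureproxy")  # hypothetical route mirroring pictureproxy.php
def picture_proxy():
    url = request.args.get("url", "")        # attacker-controlled value
    upstream = requests.get(url, timeout=5)  # server fetches it blindly: SSRF
    return Response(upstream.content,
                    content_type=upstream.headers.get("Content-Type", ""))

# An attacker can now make the server request internal-only addresses, e.g.:
#   GET /pictureproxy?url=http://169.254.169.254/latest/meta-data/
# or point the proxy at a phishing page so victims receive attacker-controlled
# content that appears to come from a trusted domain.

if __name__ == "__main__":
    app.run()
```

The standard fix for this class of bug is to validate the supplied URL against an allowlist and refuse private, loopback, and link-local address ranges before fetching.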
Once triggered, the exploit enables threat actors to redirect users to phishing sites, steal sensitive information, or even probe internal systems for weaknesses.
Veriti’s analysis shows that U.S. financial institutions are the primary targets, but attacks have also hit organizations in Germany, Thailand, Indonesia, Colombia, and the UK. A demo video explaining the attack is now circulating on YouTube, raising fears about how quickly cybercriminals can weaponize this vulnerability.
So far, 33% of the attack attempts have targeted U.S. organizations, many of them in the financial services sector, where banks increasingly rely on AI-powered tools and APIs. That growing dependence widens their exposure to SSRF attacks like CVE-2024-27564.
“If successful, attackers could trigger unauthorized transactions, steal customer data, or expose financial institutions to regulatory fines and reputational damage,” Veriti warned.
Healthcare and government agencies have also landed in attackers’ crosshairs, proving that critical sectors are far from immune as generative AI systems become deeply integrated into business operations.
Veriti’s researchers didn’t mince words: “This vulnerability is already a real-world attack vector, proving that severity scores don’t always reflect actual risk. No weakness is too small for attackers to exploit.”
Their warning aligns with findings from a 2024 SentinelOne report, which highlighted the security risks of generative AI platforms. According to SentinelOne, AI systems like ChatGPT can leak sensitive data, expose user instructions, and serve as prime targets for cybercriminals.
CVE-2024-27564 now offers a concrete example of these AI-driven risks materializing at scale.
Urgent Action Required: How Companies Can Defend Against the Exploit
Veriti is urging organizations to act fast to protect their systems against this growing threat. Their recommendations include:
- Reviewing firewall, web application firewall (WAF), and intrusion prevention system (IPS) configurations to block known exploit attempts.
- Monitoring logs for suspicious activity linked to the malicious IP addresses identified in recent attacks (a minimal monitoring sketch follows this list).
- Prioritizing AI security as part of ongoing risk assessments to patch potential vulnerabilities early.
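As a rough illustration of the log-monitoring recommendation, the sketch below scans a web server access log for requests from known-bad IPs and for proxy-style requests whose url parameter targets internal address space. The log path, log format, and placeholder IP list are assumptions for illustration, not Veriti’s actual indicator list.

```python
# Hedged sketch: scan an access log for (a) hits from known-malicious IPs and
# (b) proxy-style requests whose url parameter points at internal addresses.
# The file path, log format, and IP set below are illustrative assumptions.
import ipaddress
import re
from urllib.parse import parse_qs, unquote, urlparse

MALICIOUS_IPS = {"203.0.113.10"}  # placeholder; substitute real indicators

# Common Log Format: client IP first, then the quoted request line.
LOG_LINE = re.compile(r'^(\S+) .* "(?:GET|POST) (\S+)')

def is_internal(host: str) -> bool:
    """True if the requested host is a private/loopback/link-local address."""
    try:
        return not ipaddress.ip_address(host).is_global
    except ValueError:
        return False  # hostname, not an IP literal; would need DNS resolution

def suspicious(line: str):
    m = LOG_LINE.match(line)
    if not m:
        return None
    client_ip, path = m.groups()
    if client_ip in MALICIOUS_IPS:
        return f"known-bad source {client_ip}"
    # Flag proxy requests whose url parameter targets internal addresses;
    # unquote() again to catch double-encoded values.
    qs = parse_qs(urlparse(path).query)
    for target in qs.get("url", []):
        host = urlparse(unquote(target)).hostname or ""
        if is_internal(host):
            return f"SSRF-style probe of {host} from {client_ip}"
    return None

if __name__ == "__main__":
    with open("/var/log/nginx/access.log") as log:  # assumed path
        for line in log:
            verdict = suspicious(line)
            if verdict:
                print(verdict, "|", line.rstrip())
```

In practice this logic belongs in a SIEM or WAF rule rather than a standalone script, but the two checks (known-bad sources, internal-address targets) are the core of either approach.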
As businesses rush to adopt AI tools like ChatGPT, cybercriminals are adapting just as quickly, finding new ways to exploit these platforms. The rapid rise of generative AI is opening fresh attack surfaces, making robust AI-specific cybersecurity strategies a necessity rather than an option.
The exploitation of CVE-2024-27564 is a wake-up call: no AI platform, however sophisticated, is immune to threats. Enterprises must now treat AI systems like any other critical infrastructure, deserving continuous monitoring, hardened defenses, and proactive risk management.