Cybercriminals are abusing Gamma, an AI-powered presentation platform, to carry out phishing campaigns that redirect users to fake Microsoft login portals. Researchers at Abnormal Security revealed that attackers are using the tool to create deceptive presentations that trick victims into revealing their login credentials.
The multi-stage campaign begins with phishing emails, often sent from compromised but legitimate email accounts to increase trust and reduce suspicion. These emails carry PDF attachments that function not as ordinary documents but as hyperlinks: clicking the attachment opens a Gamma-hosted presentation.
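To illustrate why such attachments make effective lures, the rough Python sketch below lists the external link annotations embedded in a PDF using the third-party pypdf library; the file name is a placeholder, and this is a simplified illustration rather than any vendor's detection logic.

```python
# Illustrative sketch: list external links embedded in a PDF attachment.
# Uses the third-party pypdf library; "suspicious.pdf" is a placeholder.
from pypdf import PdfReader

reader = PdfReader("suspicious.pdf")
for page_number, page in enumerate(reader.pages, start=1):
    annotations = page.get("/Annots")
    for annotation in (annotations.get_object() if annotations else []):
        action = annotation.get_object().get("/A")
        uri = action.get_object().get("/URI") if action else None
        if uri:
            # A "document" whose only payload is an outbound link is a red flag.
            print(f"page {page_number}: links to {uri}")
```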
The presentation includes a call-to-action button labeled “Review Secure Documents.” Once clicked, it takes users to a fake Microsoft splash page that mimics an official Cloudflare Turnstile verification step. This tactic adds a veneer of legitimacy and helps the campaign evade automated email scanners.
After completing the fake CAPTCHA, users are redirected to a phishing page disguised as a Microsoft SharePoint login portal. This fake page captures the victim’s credentials. If users enter incorrect information, the site returns a real-time error message, indicating that the attackers are likely using an adversary-in-the-middle (AiTM) approach to validate credentials against the legitimate service on the spot.
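As a rough defensive illustration of the spoofed-portal step, the sketch below fetches a page and checks whether its login form actually submits credentials to a Microsoft sign-in host. The requests and beautifulsoup4 dependencies, the allowlist, and the example URL are assumptions for illustration, not a description of Abnormal Security’s tooling.

```python
# Rough illustration: flag login forms that post credentials to
# non-Microsoft hosts. Uses third-party requests and beautifulsoup4;
# the allowlist is an assumption, not an exhaustive list.
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

LEGITIMATE_LOGIN_HOSTS = {"login.microsoftonline.com", "login.live.com"}

def check_login_forms(url: str) -> None:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for form in soup.find_all("form"):
        target = urljoin(url, form.get("action") or url)
        host = urlparse(target).hostname or ""
        if host not in LEGITIMATE_LOGIN_HOSTS:
            print(f"Suspicious: form on {url} submits to {host}")

check_login_forms("https://example.com/sharepoint-login")  # placeholder URL
```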
Gamma Attackers Leverage LOTS Strategy for Evasion
This phishing scheme reflects a growing cybersecurity trend known as “living-off-trusted-sites” (LOTS), in which attackers host malicious content on legitimate platforms and send lures from trusted infrastructure, allowing their messages to pass traditional email authentication checks such as SPF, DKIM, and DMARC.
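Because the lures are sent from genuine, if compromised, mailboxes, those authentication checks typically come back clean. The minimal sketch below, using only Python’s standard email module, prints the SPF, DKIM, and DMARC verdicts recorded in a saved message’s Authentication-Results header; the file path is a placeholder.

```python
# Minimal sketch: inspect the Authentication-Results header of a saved
# message. When the sender is a compromised but legitimate account, SPF,
# DKIM, and DMARC typically all report "pass", which is why LOTS campaigns
# slip past these checks. "message.eml" is a placeholder path.
from email import policy
from email.parser import BytesParser

with open("message.eml", "rb") as fh:
    msg = BytesParser(policy=policy.default).parse(fh)

for header in msg.get_all("Authentication-Results") or []:
    for part in str(header).split(";"):
        part = part.strip()
        if part.startswith(("spf=", "dkim=", "dmarc=")):
            print(part)
```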
“Rather than linking directly to a credential-harvesting site, the attackers guide users through several steps,” said Abnormal Security. These steps include Gamma-hosted content, a Cloudflare-protected splash page, and a spoofed Microsoft login portal.
This multi-stage approach makes it harder for static link analyzers to detect malicious intent, as no single link appears obviously harmful. It also reduces the likelihood of detection by endpoint and email security solutions.
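One partial countermeasure is to resolve the entire chain rather than judging the first URL in isolation. The sketch below assumes the third-party requests library and a placeholder URL, and only follows HTTP-level redirects; the click-through stages in this campaign (buttons, CAPTCHAs) would additionally require browser automation.

```python
# Illustrative sketch: follow a URL through every HTTP redirect and print
# each hop, since no single hop in a LOTS chain looks obviously malicious.
import requests

def trace_redirects(url: str) -> list[str]:
    response = requests.get(url, timeout=10, allow_redirects=True)
    return [hop.url for hop in response.history] + [response.url]

for hop in trace_redirects("https://example.com/shared-document"):  # placeholder
    print(hop)
```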
Microsoft’s latest Cyber Signals report reinforces the threat. The company highlights the increased use of AI to scale fraud operations. Tactics now include deepfake videos, cloned voices, fake websites, AI-written phishing messages, and even synthetic product reviews.
AI also helps attackers gather personal information from public sources to create targeted social engineering lures. Fake job listings, AI-generated e-commerce sites, and fabricated testimonials are just some of the tactics used to build trust and mislead users.
Storm-1811 Shifts Tactics with Teams Phishing and PowerShell Malware
One campaign mentioned in the report involves Storm-1811 (aka STAC5777), a cybercrime group exploiting Microsoft Quick Assist. The group poses as IT support via Microsoft Teams voice phishing (vishing). Once trust is gained, victims are manipulated into granting remote access, paving the way for ransomware deployment.
Researchers from ReliaQuest have observed new behavior suggesting Storm-1811 may be evolving. In recent incidents, the attackers used TypeLib COM hijacking for persistence and deployed a custom PowerShell backdoor to evade detection.
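TypeLib COM hijacking generally works by repointing a per-user type library registration at attacker-controlled code, so one hunting approach is to review HKCU TypeLib entries whose default values reference script monikers or remote URLs rather than local .dll or .tlb files. The Windows-only sketch below uses Python’s standard winreg module; the heuristics are illustrative assumptions, not ReliaQuest’s detection logic.

```python
# Windows-only sketch: recursively walk per-user TypeLib registrations and
# flag default values that reference script monikers or remote URLs instead
# of local .dll/.tlb files. Heuristics here are illustrative assumptions.
import winreg

ROOT = r"Software\Classes\TypeLib"

def scan(path: str = ROOT) -> None:
    try:
        key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, path)
    except OSError:
        return  # key missing or access denied
    with key:
        try:
            value, _ = winreg.QueryValueEx(key, "")  # default value
        except OSError:
            value = None
        if isinstance(value, str):
            lowered = value.lower()
            if lowered.startswith("script:") or lowered.startswith("http"):
                print(f"Possible TypeLib hijack at {path}: {value}")
        for i in range(winreg.QueryInfoKey(key)[0]):
            scan(f"{path}\\{winreg.EnumKey(key, i)}")

scan()
```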
The malware has been in development since January 2025, with early versions spread through malicious Bing ads. By March, the activity shifted toward executive-level employees, particularly those with female-sounding names, in industries such as finance and scientific research.
The phishing messages were timed to arrive between 2:00 p.m. and 3:00 p.m., exploiting a potential afternoon dip in employee vigilance. Researchers believe this suggests that Storm-1811 is refining its methods, that a splinter group has emerged, or that a new actor is copying the group’s techniques.
Whether or not the activity is linked to ransomware group Black Basta, the trend is clear: phishing campaigns using Microsoft Teams are increasing in frequency and sophistication. Attackers are adapting quickly, finding new ways to evade defenses and maintain access within organizations.