FBI Warns of AI Deepfake Attacks on US Leaders


The FBI has issued a stark warning about a malicious campaign using AI-generated deepfakes to impersonate senior U.S. government officials and trick victims into handing over sensitive information. The alert, aimed at current and former senior federal and state officials, outlines a sophisticated operation using smishing (SMS phishing) and vishing (voice phishing) tactics.

According to the FBI, threat actors are leveraging AI-generated voice clips and fraudulent text messages to impersonate trusted public figures or close personal contacts. These deceptive messages are designed to gain victims’ trust and lure them into clicking malicious links or switching to other platforms—where attackers attempt to steal login credentials or install malware.

Smishing and Vishing With a Deepfake Twist

In the reported incidents, attackers send text messages impersonating family members or associates, urging victims to continue the conversation on another platform. Once a victim engages, the attackers send AI-generated voice messages that sound eerily like public figures or familiar voices.

“The attackers use these methods to gain access to victims’ personal or professional accounts,” the FBI warned. “Once inside, they can impersonate the victim to target additional contacts or steal sensitive data.”

The stolen data and contact lists may then be used for further impersonation attacks, escalating the campaign across wider networks.

The FBI cautions: “If you receive a message claiming to be from a senior U.S. official, do not assume it is authentic.”

FBI’s Safety Recommendations

To protect against these evolving threats, the FBI shared a list of security precautions individuals—particularly former or current officials—should follow:

  • Verify identities: Before responding to a message or voice call, research the phone number, name, or organization. Be suspicious of urgent or unusual requests.
  • Inspect messages: Watch for small errors in emails, texts, and deepfake content—misspelled names, inconsistent grammar, unusual tone, or visual distortions.
  • Be cautious with links: Avoid clicking on unknown links or opening attachments from unverified senders.
  • Use multi-factor authentication (MFA): Strengthen online account protection to prevent unauthorized access.
  • Avoid sharing sensitive data: Never share private or financial details with unknown individuals or platforms.
  • Set up secret phrases: Create a family-safe word or passphrase to confirm the identity of anyone claiming to be a loved one in distress.
  • Report suspicious activity: If a message or call seems suspicious, contact the proper authorities immediately.
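The "inspect messages" and "be cautious with links" advice above can be sketched as a toy heuristic. This is purely illustrative: the keyword list, domain check, and function names are assumptions made for the example, not part of the FBI guidance, and no simple filter can replace verifying a sender through a known, trusted channel.

```python
import re

# Illustrative cues only; real smishing uses far more varied language.
URGENCY_CUES = {"urgent", "immediately", "right away", "act now", "asap"}
URL_PATTERN = re.compile(r"https?://(\S+)", re.IGNORECASE)

def flag_suspicious(message: str, trusted_domains: set[str]) -> list[str]:
    """Return reasons a text message looks suspicious (empty list if none)."""
    reasons = []
    lowered = message.lower()
    # Urgent or pressuring language is a common social-engineering signal.
    if any(cue in lowered for cue in URGENCY_CUES):
        reasons.append("urgent or pressuring language")
    # Flag any link whose domain is not on the caller's trusted list.
    for match in URL_PATTERN.finditer(message):
        domain = match.group(1).split("/")[0]
        if domain not in trusted_domains:
            reasons.append(f"link to unrecognized domain: {domain}")
    return reasons

# Example: a text combining urgency with an unfamiliar link.
msg = "URGENT: verify your account immediately at http://example-login.test/verify"
print(flag_suspicious(msg, trusted_domains={"example.gov"}))
```

A benign message with no urgency cues and no unknown links returns an empty list, while the example above is flagged on both counts.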

The use of deepfakes in cybercrime is on the rise. From voice cloning to AI-generated video impersonations, threat actors are exploiting advanced tools to increase the believability of phishing and social engineering attacks.

While this campaign appears targeted at high-profile individuals, anyone could fall victim to deepfake-enhanced scams. As AI tools become more accessible, personal and professional networks are increasingly vulnerable to trust-based deception tactics.

The FBI’s message is clear: stay skeptical, stay alert, and double-check before you respond.
