FBI Warns Public of Ongoing Scam Using Deepfake Audio to Impersonate Government Officials

The Federal Bureau of Investigation (FBI) has issued an alert about an ongoing malicious messaging campaign that uses AI-generated voice technology to impersonate senior U.S. government officials. The campaign aims to deceive recipients into clicking links that can compromise their devices.

Since April 2025, cybercriminals have been impersonating high-ranking U.S. officials, targeting mostly current and former federal and state government officials and their contacts. The advisory, from the FBI’s Internet Crime Complaint Center (IC3), urges recipients not to assume that messages claiming to come from a senior official are authentic without verifying them.

The attackers are leveraging deepfake technology, which generates highly convincing audio that mimics the voice and speech patterns of specific individuals. These AI-generated fakes are subtle enough that, without specialized analysis, recipients can find it exceedingly difficult to distinguish genuine communications from fabricated ones.

One tactic involves asking the target to move the conversation to a different messaging platform. The request is designed to build trust, so that the target clicks a malicious link that supposedly facilitates the switch. The advisory did not provide granular details about the campaign, but the use of deepfakes marks a notable escalation in the sophistication of phishing schemes.

The warning comes amid an uptick in fraud and espionage incidents involving deepfake audio and video. Last year, for instance, LastPass was targeted by a multipronged phishing scheme that combined emails, texts, and voice calls in an attempt to extract users’ master passwords. One element of that attack was a deepfake audio call impersonating CEO Karim Toubba in a bid to manipulate a LastPass employee.

In a related incident, a robocall campaign targeting New Hampshire Democrats used a deepfake of then-President Joe Biden’s voice. That operation led to criminal charges against a Democratic consultant, and the telecommunications provider that transmitted the spoofed robocalls agreed to a $1 million civil penalty for failing to authenticate the caller as required by FCC rules.

The adversary tactics in these incidents map onto the MITRE ATT&CK framework: initial access through deceptive communications (phishing), persistence by establishing a foothold on targeted devices, and deeper compromise achieved through social engineering. Business owners should remain vigilant and informed about such threats, as the cybersecurity landscape continues to evolve with technologies like deepfakes.