An individual using artificial intelligence to impersonate US Secretary of State Marco Rubio successfully contacted several high-ranking officials last month, including three foreign ministers, a US governor, and a member of Congress, according to a State Department cable obtained by The Washington Post. The alarming incident highlights how AI-powered tools are enabling sophisticated deception campaigns that target senior government figures.
The impersonation began in mid-June, when someone created an account on Signal, a secure messaging app widely used across the Trump administration, with the display name “Marco.Rubio@state.gov.” Though not an official government email address, the display name lent a sense of legitimacy to the messages. The impostor used both text and AI-generated voice messages in an effort to lure officials into responding.
According to the State Department cable dated July 3, voicemails were left for at least two individuals, and others received invitations to continue the conversation on Signal. US authorities believe the impersonator was likely trying to gain access to sensitive information or protected accounts by exploiting the trust placed in high-profile public officials. So far, officials have not publicly identified the targeted individuals or confirmed whether any responded.
No confirmation on who’s behind it
US officials have not yet identified the perpetrator. The FBI declined to comment on the case, but a State Department spokesperson confirmed that a formal investigation is underway. The cable urged diplomatic personnel to report any future impersonation attempts to the Bureau of Diplomatic Security. Non-State Department individuals were advised to alert the FBI’s Internet Crime Complaint Center.
The episode comes on the heels of other high-level impersonation cases. In May, a hacker breached the phone of White House Chief of Staff Susie Wiles and contacted several senators and business leaders under her name. That incident prompted a White House investigation, but President Donald Trump downplayed the breach, calling Wiles “an amazing woman” who could “handle it.”
AI makes impersonation easier than ever
Experts say that the ease of using generative AI tools makes such impersonation attempts increasingly common and difficult to detect. Hany Farid, a professor at the University of California, Berkeley, and a specialist in digital forensics, noted that bad actors now need only a few seconds of public audio to convincingly replicate a person’s voice.
“You just need 15 to 20 seconds of audio of the person, which is easy in Marco Rubio’s case,” said Farid. “You upload it to any number of services, click a button that says ‘I have permission to use this person’s voice,’ and then you type what you want him to say.”
He added that leaving voicemails is especially effective, because there’s no real-time interaction to reveal inconsistencies.
Use of Signal under scrutiny
The incident is also raising questions about the Trump administration’s widespread use of Signal for official and personal communications. In March, then-National Security Adviser Michael Waltz accidentally added a journalist to a Signal group chat discussing classified US military operations in Yemen — a mistake that contributed to his removal from office. Despite that scandal, many government officials continue to use the app due to its end-to-end encryption and reliability.
Farid criticized this practice, saying, “This is precisely why you shouldn’t use Signal or other insecure channels for official government business.”
A growing global concern
The US is not alone in facing this threat. In June, Ukrainian authorities said Russian intelligence operatives were impersonating Ukraine’s own security service in attempts to recruit civilians for sabotage missions. The same month, Canadian officials warned of AI-driven impersonation scams targeting senior government leaders to steal data or install malware.
The FBI issued a warning in May noting an “ongoing malicious text and voice messaging campaign” aimed at senior US officials. The campaign, the agency said, appeared designed to extract sensitive information or money, and frequently relied on AI-generated messages.
“If you receive a message claiming to be from a senior US official,” the FBI warned, “do not assume it is authentic.”
As the tools used to impersonate public figures become more accessible, officials are urging a shift in how secure communications are conducted — with more robust verification protocols and less reliance on informal channels.