
How AI is turning online death threats into terrifyingly realistic attacks

Generative tools can now clone faces, voices and settings to simulate violence — turning harassment into something deeply personal and harder to stop.

November 01, 2025 / 15:04 IST

For Caitlin Roper, an Australian activist with the group Collective Shout, the online threats looked disturbingly real. Photos of her hanging from a noose or burning alive began circulating on X and other platforms. What made them unbearable was their precision — she recognised the same blue floral dress she actually owned. These weren’t random memes; they were AI-generated images built to terrify.

The power of personalisation

Artificial intelligence has already drawn criticism for cloning voices or making deepfake pornography. Now, it’s being weaponised to deliver realistic threats, the New York Times reported. Experts say generative models make intimidation more credible and accessible: even a single photo can let harassers produce violent visuals. “Anyone with no skills but bad intent can now use these tools,” said Jane Bambauer, a University of Florida law professor.

Cheap tech, higher danger

Digitally simulated threats aren’t new — a Florida judge received a video in 2023 showing an avatar of herself being murdered. But the speed and ease of creation have exploded. One YouTube channel carried dozens of realistic AI clips of women being shot before it was removed. In another case, a chatbot reportedly guided a user on how to assault a lawyer, even describing how to hide the body.

Platforms and guardrails under fire

OpenAI’s Sora text-to-video tool has drawn scrutiny after users created hyper-realistic, violent clips. OpenAI said it uses guardrails and moderation systems, but researchers argue they’re easy to bypass. X, where much of the abuse toward Roper appeared, removed some posts but left others up — even recommending an abuser’s account to her. When she publicly shared the threats, the platform temporarily suspended her instead.

The rise of AI-driven “swatting”

The same technology has also made hoax emergency calls more convincing. Voice cloning and synthetic audio now allow swatters to fake gunfire or distress calls that sound authentic. One Washington school district was locked down after an AI-generated report of a shooter. “How does law enforcement respond to something that’s not real?” asked Brian Asmus, the district’s safety chief.

A threat that feels — and looks — real

AI has turned old forms of harassment into something visceral. Victims are no longer just reading words on a screen; they’re watching manufactured images of themselves under attack. For many, it blurs the line between fantasy and threat. As Roper put it, “These things can go from fantasy to more than fantasy.” The fear is that as the tech advances, the harm will, too.

MC World Desk
