For Caitlin Roper, an Australian activist with the group Collective Shout, the online threats looked disturbingly real. Photos of her hanging from a noose or burning alive began circulating on X and other platforms. What made them unbearable was their precision: in them she wore a blue floral dress she actually owned. These weren't random memes; they were AI-generated images built to terrify.
The power of personalisation
Artificial intelligence has already drawn criticism for cloning voices and making deepfake pornography. Now it is being weaponised to deliver realistic threats, the New York Times reported. Experts say generative models make intimidation more credible and more accessible: even a single photo can let harassers produce violent visuals. "Anyone with no skills but bad intent can now use these tools," said Jane Bambauer, a University of Florida law professor.
Cheap tech, higher danger
Digitally simulated threats aren't new: a Florida judge received a video in 2023 showing an avatar of herself being murdered. But the speed and ease of creation have exploded. One YouTube channel carried dozens of realistic AI clips of women being shot before it was removed. In another case, a chatbot reportedly guided a user on how to assault a lawyer, even describing how to hide the body.
Platforms and guardrails under fire
OpenAI's Sora text-to-video tool has drawn scrutiny after users created hyper-realistic, violent clips. OpenAI said it uses guardrails and moderation systems, but researchers argue they are easy to bypass. X, where much of the abuse toward Roper appeared, removed some posts but left others up, and even recommended an abuser's account to her. When she publicly shared the threats, the platform temporarily suspended her instead.
The rise of AI-driven "swatting"
The same technology has also made hoax emergency calls more convincing. Voice cloning and synthetic audio now allow swatters to fake gunfire or distress calls that sound authentic. One Washington school district was locked down after an AI-generated report of a shooter. "How does law enforcement respond to something that's not real?" asked Brian Asmus, the district's safety chief.
A threat that feels, and looks, real
AI has turned old forms of harassment into something visceral. Victims are no longer just reading words on a screen; they are watching manufactured images of themselves under attack. For many, it blurs the line between fantasy and threat. As Roper put it, "These things can go from fantasy to more than fantasy." The fear is that as the technology advances, the harm will, too.