In its November 12, 2025 edition, Pakistan's English-language daily Dawn inadvertently published a stray sentence at the bottom of a business article on auto sales. It read: “If you want, I can also create an even snappier ‘front-page style’ version with punchy one-line stats and a bold, infographic-ready layout perfect for maximum reader impact. Do you want me to do that next?”
The sentence appears to have been part of the conversational output of ChatGPT or another AI system: the kind of follow-up question these tools often append, asking whether the user would like a revised version of the draft. That such a prompt survived the editorial process and appeared in print in a high-profile outlet drew widespread criticism.
Reactions on social media were swift and largely derisive. Journalist Omar Quraishi tweeted: “Dawn business pages desk should have at least edited the last para out!” Meanwhile, former federal minister Shireen Mazari remarked: “At least the last paragraph should have been deleted so at least a veneer of credibility was retained!”
In response, Dawn appended an online correction notice saying: “This newspaper report was originally edited using AI, which is in violation of Dawn’s current AI policy… The report also carried some junk, which has now been edited out. The matter is being investigated. The violation of AI policy is regretted.”
Why the mistake matters
The mistake illuminates several disturbing trends: AI tools such as ChatGPT are being used in newsroom processes, even at large outlets, sometimes to write or edit drafts; editorial oversight may be failing, since a human should have caught the errant text before publication; and, more broadly, credibility is at stake: if a trusted paper publishes an obvious AI prompt, how can readers be confident the rest of the piece was properly verified?
The broader background
Other industries have faced similar problems, such as AI-produced reports containing spurious legal citations or fabricated medical summaries. According to experts, the incident at Dawn is part of a larger erosion of professional standards when AI becomes a “black-box” writer or editor.
For journalism, the stakes are especially high. Trust in media is fragile, and the use of AI raises new risks: hallucinations, bias, context errors, and the appearance of inattention. For Dawn, founded in 1941 and long considered Pakistan's newspaper of record, the irony is sharp: the mistake signals that even legacy outlets are vulnerable in the race to adopt generative tools.
What to watch
Moving forward, the case may prompt media houses to clarify how they use AI: whether tools merely support writers, or whether draft text goes directly to print without review. Readers, too, may become more sceptical and start asking basic questions: Has this piece been fact-checked? Did AI play a role? Who edited it?
In the age of AI, mistakes like these are more than embarrassing; they are warnings. And for newsrooms, the lesson is clear: human judgment still matters.