Sam Altman, OpenAI CEO and longtime Reddit shareholder, sparked debate this week by admitting he can't tell whether many social media posts are written by humans or bots. Posting on X, Altman described reading Reddit threads about OpenAI's Codex, a rival to Anthropic's Claude Code, and realising that even genuine fan chatter now feels suspiciously artificial.
Altman argued that several factors blur the line: humans adopting “LLM-speak,” hyper-online communities amplifying trends in unison, platforms optimising for engagement, creator monetisation, and possible astroturfing campaigns. His candid conclusion: “AI Twitter/AI Reddit feels very fake in a way it really didn’t a year or two ago.”
The irony, of course, is that OpenAI’s own models were trained to sound human — and trained on Reddit, where Altman sat on the board until 2022. That overlap raises an uncomfortable question: have LLMs become so convincing that even their inventor struggles to separate them from actual users?
There's little evidence that pro-OpenAI Reddit posts are bot-driven, though Altman notes the company itself has been a target of astroturfing. Still, the concern lands in a broader context: security firm Imperva reported that more than half of 2024's internet traffic was non-human, fuelled by bots and automated agents. X's own AI chatbot, Grok, estimates there are "hundreds of millions" of bots on the platform.
Some speculate Altman’s musings could be a soft pitch for OpenAI’s rumoured social media project, reported in April to be in early development. Whether that’s true or not, the bigger paradox remains: even a bot-free platform might not feel much different. Studies show that when bots interact only with each other, they still form cliques and echo chambers — just like us.