
Australia is sharpening its regulatory focus on artificial intelligence — and this time, app stores are firmly in the crosshairs.
The country’s eSafety regulator has warned that AI apps that allow minors to access adult material, extreme violence, or content promoting self-harm and eating disorders could soon be blocked. If platforms fail to comply by March 9, penalties could reach $35 million.
That could put the spotlight squarely on Apple and its App Store, which hosts a growing number of AI-driven chatbots and generative tools.
Not just social media anymore
Apple has already updated its age-assurance tools in Australia to comply with new rules affecting social media platforms. But regulators are now widening the scope to include artificial intelligence services — particularly chatbots that may expose under-18s to harmful or inappropriate material.
The eSafety office has made clear that it is willing to use its full enforcement powers against “gatekeeper services,” a phrase that includes app marketplaces and search engines. In effect, if individual AI developers do not implement age checks, the burden may shift to the storefronts distributing them.
A patchy record on compliance
Out of 50 AI platforms surveyed by regulators, only nine had fully implemented age-verification systems.
Some have opted for broader safety nets. Popular AI services such as ChatGPT, Replika and Claude have introduced blanket content filters in Australia. Character.AI has restricted open-ended chats to users aged 18 and above, while HammerAI has blocked Australian access entirely.
Others appear far less prepared. According to the regulator’s findings, several AI apps offer no meaningful age filtering at all — and in some cases, not even a clear reporting channel for breaches.
Even rating classifications are uneven. Character.AI carries an 18+ label, Elon Musk’s Grok is rated 16+, while the ChatGPT app is rated 13+. That inconsistency may soon become a regulatory headache.
Concerns over excessive chatbot use
Beyond content exposure, regulators are also worried about usage patterns. The eSafety commissioner noted cases where children as young as 10 were reportedly spending up to six hours a day interacting with AI chatbots.
Officials argue that some AI companies deploy emotional manipulation techniques and anthropomorphic design to deepen engagement — effectively encouraging prolonged use among younger users.
The broader message is clear: AI is no longer being treated as an experimental technology operating in regulatory grey zones. In Australia at least, it is being folded into the same accountability framework that now governs social media.