9 January, 2026
Australia’s eSafety Commissioner targets Musk’s AI image generation

Australia’s eSafety Commissioner, Julie Inman Grant, has opened an investigation into the social media platform X, formerly Twitter, over a troubling rise in sexually explicit images generated by its artificial intelligence tool, Grok. The move follows a strong warning from UK Prime Minister Keir Starmer, who has threatened to ban the platform in the UK if the issue is not addressed.

Inman Grant announced that her agency is prepared to use its existing regulatory powers to “investigate and take appropriate action” against AI products lacking adequate safeguards. The decision comes amid mounting evidence that X’s technology is being exploited to generate child abuse material.

“I’m deeply concerned about the increasing use of generative AI to sexualize or exploit people, particularly where children are involved,” Inman Grant said on LinkedIn. Reports indicate that incidents involving sexualized images produced by Grok have doubled since late last year, affecting both adults and children. Inman Grant stressed that companies must build the capacity to prevent misuse into their products: “We’ve now entered an age where companies must ensure generative AI products have appropriate safeguards and guardrails built in across every stage of the product lifecycle.”

The situation has escalated to the point where the UK’s Ofcom is investigating allegations that Grok has been used to generate online child sexual abuse material. Starmer condemned the images as “unlawful and intolerable,” insisting that X must take immediate action. “This is disgraceful, it’s disgusting, and it’s not to be tolerated. X has got to get a grip of this,” he said.

In Australia, the eSafety Commission reported receiving multiple complaints in recent weeks regarding Grok’s use in generating exploitative imagery. The agency has reached out to X to inquire about the safeguards the platform is implementing to mitigate these risks. A spokesperson for the Albanese government described the creation of generative AI sexualized content without consent as “abhorrent,” affirming the government’s commitment to restricting access to nudification tools within Australia.

New regulations set to take effect on March 9, 2026, will require AI services to limit children’s access to explicit content, violent material, and themes related to self-harm and suicide. The measures follow eSafety’s enforcement action against a UK-based service that allowed users to create manipulated images of real people, including students at Australian schools. The sites, which had been drawing approximately 100,000 Australian visitors a month, were withdrawn from the country in November 2025 after the operator received an official warning.

The eSafety Commission has taken an increasingly assertive approach to online safety, including the rollout of a world-first social media ban for users under 16. Inman Grant is now facing potential contempt charges from the US Congress: Congressman Jim Jordan, who chairs the House Judiciary Committee, has accused Australia’s Online Safety Act of threatening the free speech of American citizens and has demanded that Inman Grant testify about her previous attempts to compel social media platforms to remove graphic content.

In an earlier confrontation, Inman Grant sought to compel X to remove graphic footage of a stabbing attack in Sydney, arguing that global removal was necessary because Australians could still access the content via VPNs. That case was ultimately dropped.

The European Commission is also scrutinizing X and has ordered the platform to retain all documents related to Grok as it assesses the company’s compliance with EU rules. The Italian regulator is working with Ireland’s Data Protection Commission, which leads oversight of X in the European Union, and has signaled that further action may follow.

As international scrutiny intensifies, so does the debate over how far social media platforms must go to manage AI technologies and protect users, especially minors.