22 October, 2025
Australian eSafety Commissioner targets AI chatbot providers

Australia’s eSafety Commissioner has taken action against four leading AI companion chatbot providers, requiring them to explain how they are safeguarding children from online harms. The companies—Character Technologies, Inc. (character.ai), Glimpse.AI (Nomi), Chai Research Corp (Chai), and Chub AI Inc. (Chub.ai)—received legal notices under the Online Safety Act. The notices demand detailed responses on the companies’ compliance with the Government’s Basic Online Safety Expectations Determination.

The eSafety Commissioner, Julie Inman Grant, emphasized the importance of these measures in protecting minors from encountering harmful content, including sexually explicit conversations, images, and discussions related to suicide and self-harm. “AI companions, powered by generative AI, simulate personal relationships through human-like conversations and are often marketed for friendship, emotional support, or even romantic companionship,” Inman Grant stated.

However, she pointed out the potential dangers these chatbots pose, particularly to young users. Concerns have been raised that some of these AI programs can engage in inappropriate conversations and may inadvertently encourage harmful behaviors. Inman Grant insisted, “We are asking them about what measures they have in place to protect children from these very serious harms.”

Character.ai, one of the most popular AI companions, had nearly 160,000 monthly active users in Australia as of June 2023. This significant user base underscores the urgent need for robust safety protocols. Inman Grant stated, “These companies must demonstrate how they are designing their services to prevent harm, not just respond to it. If you fail to protect children or comply with Australian law, we will act.”

Failure to comply with a reporting notice could lead to serious consequences, including court proceedings and financial penalties of up to $825,000 per day. The notices align with newly registered industry-drafted codes aimed at shielding children from age-inappropriate content. These codes will also apply to the expanding array of AI chatbots, which previously had limited obligations regarding user safety.

Inman Grant expressed her commitment to ensuring that young Australians are not left vulnerable to the risks posed by powerful technologies entering the market without adequate safeguards. “I do not want Australian children and young people serving as casualties of powerful technologies thrust onto the market without guardrails and without regard for their safety and wellbeing,” she asserted.

The newly established codes and standards are legally enforceable, with breaches leading to civil penalties that could reach up to $49.5 million. The proactive stance taken by the eSafety Commissioner underscores the growing recognition of the need for regulation in the rapidly evolving landscape of AI technologies, particularly those that engage with minors.