15 December, 2025
Public trust in AI chatbot therapists surges, says Curtin study

A study from Curtin University indicates a significant change in public trust towards AI-driven “chatbot therapists,” particularly during the rapid rise of generative artificial intelligence in 2023. The research highlights a growing acceptance of these AI tools for mental health support, which is influencing the development of a new wellbeing chatbot named Monti.

Data collected before and after the launch of ChatGPT revealed that users increasingly preferred generative AI chatbots over earlier, rules-based models. The conversational style and apparent understanding of the newer systems resonated with users, making previous iterations seem repetitive and less engaging. This shift is driving the redesign of Monti, which is being co-developed with consumers to facilitate safe and meaningful emotional exploration.

Research lead Professor Warren Mansell, from the Curtin School of Population Health, emphasized that 2023 marked a pivotal moment in how the public perceives AI-supported wellbeing tools. “As generative AI entered everyday life, people began to view chatbot ‘therapists’ less as gimmicks and more as potentially credible tools for self-reflection,” he stated.

The ongoing demand for mental health support continues to outpace available resources, prompting the need for responsible AI solutions. Professor Mansell noted that well-designed AI tools can help bridge this gap, provided they are constructed with care, evidence, and humility.

User interviews conducted during the study revealed that individuals appreciated a curious questioning style that allowed them to explore personal goals and challenges. This aligns with the foundation of the research, known as perceptual control theory (PCT), which informs Monti’s development. The chatbot’s guiding principle, “Notice More, Explore Further, Think Wiser,” encapsulates its aim to inspire curiosity and clarity without replacing human interaction.

The research team underscored the importance of responsible innovation in AI, advocating for evidence-based design, transparency, safety monitoring, and a deep understanding of user needs. These principles are integral to Monti’s next stage of development, with plans to introduce the tool to Australian universities by mid-2026.

The findings suggest that, when designed thoughtfully, AI chatbots can play a meaningful role in mental health, empowering individuals to reflect on their concerns and seek human assistance when necessary. The study has been published in JMIR Formative Research under the title, “A Rule-Based Conversational Agent for Mental Health and Well-Being in Young People: Formative Case Series During the Rise of Generative AI.”

With this research paving the way for future work, the potential of AI chatbots in the mental health sector appears promising, offering new avenues for support in a landscape where demand continues to outstrip resources.