1 January 2026
AI Fuels Rising Threat of Online Child Exploitation

The investigation into a Sydney IT worker has revealed alarming trends in the use of artificial intelligence (AI) for online child exploitation. When police raided the home of Aaron Pennesi in Matraville, they discovered not only stolen personal information from colleagues but also 54 disturbing images of child abuse, all generated through AI technology. This case marks a significant moment in New South Wales, as it is believed to be the first instance of an individual being sentenced for possessing AI-created child abuse material.

In May 2023, Pennesi received a two-year community corrections order after pleading guilty to charges of possessing child abuse material and unauthorized modification of data. The findings from this case highlight a troubling new frontier for law enforcement grappling with the implications of rapidly evolving technologies.

Alarming Growth of AI in Child Exploitation

The scale of AI’s involvement in child sexual exploitation is staggering. According to the US-based National Center for Missing & Exploited Children (NCMEC), its tipline received more than 440,000 reports of generative AI being used in child sexual exploitation between January and June 2025, a dramatic increase from just 6,835 reports in the same period the previous year. Offenders increasingly combine real images of children with AI to create abusive content, often to blackmail victims.

In a survey conducted by the Australian Institute of Criminology (AIC), one in ten Australian adolescents reported experiencing sextortion; two in five of those cases involved digitally manipulated images. Sarah Napier, research manager at the AIC, noted that offenders on darknet forums share methods for generating child sexual abuse material (CSAM) using AI, often altering non-sexual images of children to produce harmful content.

Australia’s eSafety Commissioner reported a 218 percent year-on-year increase in AI-generated child sexual abuse material. This hyper-realistic content poses significant challenges for law enforcement agencies, which must distinguish real victims from AI-generated images. Napier emphasized that digitally manipulated material can inflict particularly severe psychological harm on adolescent victims.

Legal Repercussions and Emerging Trends

In July 2024, a Victorian man received a 13-month prison sentence for offenses related to child abuse images, including the creation of 793 AI-generated images. He admitted to using an AI program to produce this material, illustrating the growing trend of leveraging technology for exploitation. Earlier that year, a Tasmanian man was sentenced to two years in prison for possessing and distributing child abuse material, including AI-generated content. His case was initiated following a tip-off from the NCMEC, leading to a raid that uncovered hundreds of files.

The emergence of “nudifying” apps has further complicated the landscape. Young people are using these applications to manipulate images of peers for bullying purposes, while criminals exploit them for extortion. The Internet Watch Foundation reported that some AI-generated material is visually indistinguishable from that created with real victims, complicating efforts to protect children.

One particularly disturbing case involved a child named “Olivia,” who was abused from the age of three until her rescue at eight. Even after her rescue, her likeness was used to create AI-generated abuse material, demonstrating the long-lasting impact of such exploitation.

Advocates are calling for urgent action to combat the rise of AI-generated child abuse material. They urge AI companies to implement safeguards against the creation of such content, while also pushing search engine providers to deplatform harmful applications. App stores are being asked to remove nudifying apps that are increasingly used for malicious purposes.

Napier stressed the need for social media companies to enhance their monitoring and reporting mechanisms for AI-generated child sexual abuse material. “Tech platforms must introduce measures that prevent the production and sharing of AI-generated CSAM,” she stated, emphasizing the importance of user education and tighter security protocols.

As technology continues to advance, the challenges surrounding AI and child exploitation are expected to grow. Authorities and advocates are faced with the critical task of addressing these issues to protect vulnerable individuals from the devastating impact of online predators.