An investigation by online safety advocate Caitlin Roper of Collective Shout has found that AI-powered "nudifying" apps are being used to exploit Australian girls as young as 10, prompting calls for tighter regulation. Her findings highlight the alarming ease with which ordinary images can be manipulated into violent, pornographic content.
Roper found that even images captured without a person's consent, such as photos taken in public places, can be transformed into degrading, sexualized depictions. "No one is safe from these apps," she said, warning that even people with no digital footprint can become victims.
Authorities report that these apps are being used to create disturbing scenarios, often depicting young women in graphic and humiliating situations. Roper described the generated content as "child sexual abuse material," with images showing women who appear terrified or in distress. "It's completely unregulated," she said, stressing the urgency of the situation.
A new law enacted in South Australia aims to combat the problem by criminalizing the creation of wholly AI-generated deepfake images, with offenders facing fines of up to $20,000 or up to four years in prison. Roper argues, however, that the legislation does not go far enough, as the apps remain easily accessible to anyone with an internet connection.
Roper's review of 20 nudifying apps revealed the scale of the problem: she generated an AI image and was able to digitally strip it of clothing within seconds. "Users can select clothing removal, poses, and specific sexual acts, creating highly personalized images," she said.
Many of these platforms also encourage users to share their creations in public galleries filled with violent sexual content, often generated from prompts that dehumanize women. Roper noted that most of the apps work only on images of women, with little to no functionality for male images, pointing to a disturbing pattern of men exploiting women and girls.
The Australian legal landscape is evolving. In September 2023, New South Wales passed legislation making it an offense to produce or share AI-generated intimate images of identifiable individuals without their consent, with penalties of up to three years' imprisonment. Campaigners, however, are calling for laws that specifically target the creation of such content, not just its distribution.
Dr. Asher Flynn from the Australian Research Council Centre for the Elimination of Violence Against Women echoed these sentiments, emphasizing the need for enhanced legal frameworks that hold digital platforms accountable. “Creation itself must be recognized as a standalone offense,” she stated.
Roper has called for global cooperation on the issue, suggesting the United Nations could coordinate a ban on these apps, backed by geo-blocking and restrictions on social media advertising. "We do have some laws in Australia, but they're not working as intended," she said.
If you or someone you know is affected by online exploitation, support services are available through the Australian Centre to Counter Child Exploitation. For immediate danger, contact emergency services or local authorities.
The findings underscore the need for urgent action to protect vulnerable people from the dangers posed by unregulated AI technology.