Australia Cracks Down on ‘Nudify’ Sites Fueling AI-Generated Child Abuse

Updated by Faith Barbara N Ruhinda at 0749 EAT on Thursday 27 November 2025

Australia has moved to block access to several websites that used artificial intelligence to create child sexual exploitation material, according to the nation’s online safety regulator.
The eSafety Commissioner, Julie Inman Grant, said Thursday that three “nudify” platforms withdrew from Australia after being formally warned.

Inman Grant’s office said the sites had been drawing about 100,000 visits a month from Australians and were linked to high-profile cases of AI-generated child sexual abuse imagery involving school students.

She said the “nudify” services — which use AI to make images of real people appear naked — have had a “devastating” impact in schools.

“We took enforcement action in September because the provider failed to introduce safeguards to stop its services being used to create child sexual exploitation material, and was even marketing features such as undressing ‘any girl,’ ‘schoolgirl’ image generation and a ‘sex mode,’” Inman Grant said.

The action followed a formal warning to the UK-based company behind the sites, which faced potential civil penalties of up to 49.5 million Australian dollars ($32.2m) if it did not add protections against image-based abuse.

Inman Grant said Hugging Face, a platform that hosts AI models, had also taken steps to comply with Australian law, including updating its terms of service to require account holders to take steps to reduce the risk of their models being misused.

Australia has positioned itself at the forefront of global efforts to combat online harm against children, introducing a social media ban for under-16s and targeting apps used for stalking and the creation of deepfake imagery, Al Jazeera reported.

Concerns have risen over the use of AI to generate non-consensual sexually explicit images, as increasingly sophisticated platforms allow users to produce photo-realistic material with minimal effort.

A survey conducted last year by the US-based advocacy group Thorn found that 10 percent of respondents aged 13–20 knew someone who had been targeted with deepfake nude imagery, while 6 percent said they had been directly victimised.
