
Is AI threatening Aotearoa’s safety when it comes to sexual harm?

10 September, 2025

Interview by Alex Fox, adapted by Soeun Kim

University of Canterbury researcher Dr Cassandra Mudgway says New Zealand’s AI regulation needs to be strengthened, warning that the country is falling behind its international counterparts and is at high risk of AI-related harm.

As AI has become more prevalent in society, so have concerns about its use in relation to consent and sexual violation.

Overseas, many jurisdictions, including member states of the European Union (EU) and China, are developing laws to protect against sexual violations facilitated by AI.

The EU passed the world’s first comprehensive AI legislation in August 2024, strongly restricting AI features such as deepfakes and facial recognition.

Here in Aotearoa, ACT MP Laura McClure is pushing her Deepfake Digital Harm and Exploitation Bill, which aims to amend the Crimes Act and the Harmful Digital Communications Act so that synthetic imagery, such as deepfakes, is treated in the same way as non-consensual intimate recordings.

Currently, the bill has the formal support only of Te Pāti Māori. As a member’s bill, it will need to be drawn from the ballot, the parliamentary ‘biscuit tin’, before it can be debated.

Senior Law Lecturer at the University of Canterbury, Dr Cassandra Mudgway, told 95bFM’s The Wire that the government’s lack of attention to the rise in AI-facilitated sexual violations is making women easier targets for sexual abuse.

“We're seeing [deepfakes] in amongst all the other online harms that we see being directed at women, particularly women public figures. So that's around harassment, threatening messages, that kind of stuff.”

One of her major concerns is deepfakes, where AI fabricates or ‘nudifies’ images of people without their consent.

As deepfakes have become increasingly prolific online, she says they have critically affected women who are often unaware their images are being used for these purposes.

“A lot of these apps, particularly AI-trained apps, are trained almost entirely on female bodies, which I think really does illustrate the gendered nature of sexualised deepfakes.”

Mudgway also criticises the way AI disproportionately discriminates against marginalised groups such as women and children.

Beyond sexual harm, she highlights that AI systems are trained on massive data sets that reflect real-world biases, reinforcing existing gender-based discrimination.

“A piece of research from the UK showed that AI tools being used in some English councils were downplaying women's physical and mental health issues, creating gender bias in the care decisions that would have been made based on these AI-generated summaries.”

Mudgway wants New Zealand to implement policies similar to those adopted overseas, ones that address specifics such as human rights, transparency, and accountability, while taking into account the cultural context of Aotearoa.

“Countries like Canada and Australia are developing their own AI frameworks, even the UK, which is usually much more cautious; they've set up a dedicated AI safety institute.

“... regulation is about making sure our values of fairness, transparency, safety, apply to both online and offline uses and development of tech.”

Listen to the full interview