Grok, X/Twitter, and AI deepfakes: An explainer by an NZ expert

Around the world, governments are grappling with what to do about perverts using AI to create child sexual abuse material and pornographic deepfakes. Grok is the most infamous chatbot to do this. Its owner, the deeply repugnant Elon Musk, is unconcerned by these violations across his site X - formerly Twitter.

Indonesia and Malaysia have temporarily blocked Grok. The UK media regulator, Ofcom, launched a probe into the social media platform, which could result in a ban. Across Twitter (I will not call it X), sexualised images of children and women have been created by Grok without their permission. Even the mother of one of Elon Musk's children has been violated by his hideous creation.

So what is Aotearoa doing about it? Turns out not much. I spoke to Dr Cassandra Mudgway (follow her great page on Insta), who is an expert on criminal law, gender and sexuality and the law, and international human rights. She's a Senior Lecturer in Law at the University of Canterbury.

I wanted to know what exactly is going on and what we can do about it. I am so grateful for her time explaining all of this.


Emily: Why should we be worried about Grok specifically? 

Dr Mudgway: Grok is both a standalone app and an app integrated into X, formerly known as Twitter.

We should be worried about how Grok operates for three reasons:

1. Like Facebook, X is used by everyday citizens, but it's also a primary social media platform for politicians and public institutions. Because X is (unfortunately) central to both social and political life, what happens on X matters to us all.

2. X is also a platform increasingly known for high levels of harassment, misogyny and disinformation, and low levels of moderation.

3. Unlike other AI systems, Grok has been deliberately designed with fewer content safeguards.

Over the past year, while not permitting full 'nudification' - which is manipulating an image so the subject looks naked - Grok has been generating non-consensual, sexualised deepfakes of women and girls.

In May 2025, people could generate deepfakes by asking Grok to “undress” others. This produced images of women in their underwear, swimwear or in sexualised positions, like being covered in a substance resembling semen. While changes to Grok were promised and appeared to be working, an update to the Grok app in August allowed users to create topless images by simply using the “spicy” option (no specific prompts were required). 

The latest explosion in deepfakes on X should not be considered an isolated blip in design. It is part of the design.

The latest trend on X is to prompt Grok with “put her in a bikini” or “undress her”. This trend followed another major update to Grok in December that enabled an “edit image” function on photos posted to X. It spiralled quickly over December-January, and an analysis by The Guardian revealed that by January 8, over 6,000 undressing prompts were being made per hour. 

Who do they target with these awful images?

The targets of Grok-generated deepfakes are most often women, girls, children and babies. Even dead women are not off the table, with users prompting Grok to generate sexualised deepfakes of Renee Nicole Good, a woman murdered by an ICE agent in Minnesota. Research conducted by Dr Sanjana Hattotuwa also suggests an intersection with racial sexual degradation, for example, the targeting of women wearing the hijab.

Some of this material is legal in NZ. Some of it may be very illegal under New Zealand law, as it fits definitions of child sexual abuse or exploitation material (CSAM).

(Note from Emily: while "child porn" is a common term, the correct term is Child Sexual Abuse Material, or CSAM. We use this term instead of "child pornography" because pornography is legal, can be perfectly ethical, and is produced and consumed by consenting adults - a definition that excludes children.)

While Musk has since turned off this functionality for most users, it remains available to "blue check" users who pay for a premium account. 

On 11 January, Elon Musk boasted that Grok was number one in Aotearoa. How on earth did Grok get to number one here? 

This has not been independently verified. When I looked at the top productivity apps on the Google Play store on January 12, Grok was sitting at number 2. Regardless, Musk made a point of singling out New Zealand.

Why do you think he did that?

Since the public exposure of what was happening on X, many countries have decided to investigate both Grok and X, including the UK, France, Malaysia, the European Union, and Australia. Indonesia has outright blocked X.

New Zealand has not done this, even though some of the CSAM being generated by Grok would fall within the scope of such an investigation. Additionally, we are now one of the few OECD countries without robust online safety regulations or specific AI regulations. That is a benefit to AI developers like xAI (controlled by Elon Musk) and social media platforms like X (also controlled by Musk).

It is plausible that Grok may have had a surge in popularity in NZ. The novelty of an AI chatbot seen as more “edgy” than others, along with news coverage, might draw new users. It could also be the regulatory silence having an effect - there is no local authority clearly discouraging use. And, unfortunately, there’s also the draw of creating sexualised deepfakes of women and girls for free. The ease with which Grok can be used to generate sexualised abuse is part of a wider pattern of misogyny in New Zealand’s online spaces, where women’s dignity is consistently subordinated to platform engagement.

This is so depressing. What’s at risk if Aotearoa continues its light touch on the regulation of AI?

If Aotearoa maintains a light-touch approach to AI regulation, in the absence of a proper online safety framework, the risk is that AI-enabled harm becomes entrenched, with no coherent system of prevention or accountability.

Existing legal tools, like the Harmful Digital Communications Act 2015 and privacy law, are fragmented, reactive, and not well-suited to addressing AI-enabled harms. As a result, many women, young people, and marginalised communities are left to navigate complex remedies after harm has occurred, while offshore platforms face little meaningful constraint.

Over time, regulatory inaction risks positioning New Zealand as a permissive environment for high-risk technologies, undermining public trust and weakening commitments to human rights.

Cool. Cool. Cool. What can be done about this? 

New Zealand needs to move beyond ad hoc, post-harm responses and toward a coherent approach to governing AI-related risk. This requires recognising that harms arise not only from individual misuse but from how AI systems are designed, deployed, and integrated into online platforms.

Regulation should focus on setting clear minimum rules for platforms, directly addressing AI-generated harms like deepfakes and image-based sexual abuse, and making sure government agencies know who is responsible for what.

Crucially, responsibility and consequences must be directed at the companies producing and enabling this material, rather than resting on the victims.

What can we do to agitate for change here? 

Without a dedicated online safety framework, public pressure really becomes a key driver of reform. 

The Deepfake Digital Harm and Exploitation Bill - which is a member's bill - will be introduced to Parliament sometime this year. It would expressly criminalise the creation, possession and sharing of non-consensual sexualised deepfakes, going some way toward closing a gap in our legal framework.

So contact your local MP and encourage them to support this Bill through to the Select Committee. 

The bad news is that criminalisation does not create the kind of change that can tackle something like Grok. 

In the short term, people should vote with their eyeballs and their money. Stop using X. Tell politicians and public institutions to stop using X. 

I agree. It's horrifying how much some of our worst politicians love X - and how many of our better ones are still posting there.

Political leaders and public institutions also need to be held to account for the platforms they legitimise. When they continue to use and endorse platforms that enable gender-based violence, they help normalise that harm.

Supporting women’s safety cannot sit alongside active participation in spaces that profit from misogyny. Pressuring political actors to withdraw from X is an effective way of helping to prevent, and refusing to endorse, gender-based violence.


Here is a link to a petition calling on the New Zealand Government to properly regulate AI, and prevent this kind of harm from escalating.

If you're concerned about what you've read today, please sign the petition now.
