Scrolling into trouble: New Zealand takes aim at deepfakes, algorithms, and platform design

New Zealand's online safety landscape is about to change dramatically. And if your business operates a platform, develops AI tools, or advertises online, you need to pay attention now.
The Education and Workforce Committee (Committee) has released its final report (Report) on its inquiry into the harm young New Zealanders encounter online, and the roles that Government, business, and society should play in addressing those harms. The Report proposes a sweeping overhaul: a new independent regulator with real enforcement power, increased civil liability for platform design choices, a social media ban for under-16-year-olds, algorithmic transparency obligations, and an imminent ban on nudify apps - and the Government has just 60 days to respond.
The Committee's conclusion, drawn from over 400 submissions, is unambiguous: harm to young people online is serious, widespread, and actively made worse by platform design. The current legislative framework has not kept up with technological developments. That failure is now the Government's starting point, and the 12 recommendations in the Report show just how far reform could go.
Below, we outline the key recommendations in the Report and what they could mean in practice for your organisation.
The Committee’s 12 recommendations at a glance
- Review and update the legislative framework for online safety, including the Films, Videos, and Publications Classification Act 1993 and the Harmful Digital Communications Act 2015.
- Increase platforms' civil and regulatory liability for hosted content and platform design.
- Establish an independent national regulator for online safety.
- Restrict social media access for under-16-year-olds.
- Ban "nudify" apps and prohibit non-consensual deepfake sexual imagery.
- Regulate deepfake technology.
- Regulate algorithmic recommendation systems.
- Mandate algorithm transparency, including access to platform data.
- Restrict online advertising of alcohol, tobacco, and gambling to under-18s.
- Invest in public online safety education for parents, caregivers, and young people.
- Promote New Zealand-based research on online harm.
- Consider further matters - including youth consultation, evasion prevention, and updated penalties - and refer them to the new regulator.
Five areas that may matter most for your business
- Expanded obligations and liability: The Committee recommends a fundamental restructuring of New Zealand's online safety framework to address what it sees as a fragmented and outdated regulatory landscape. This could include a duty of care on online platforms to proactively protect young people from potential harms.
The Committee calls for a comprehensive liability regime for harm caused by hosted content and design features - including algorithms and infinite scroll features. It points to the EU's Digital Services Act as a model for a graduated liability framework, combining preventative obligations with escalating civil liabilities where platforms fail to meet those obligations, including fines of up to 6% of global turnover.
The Committee also proposes removal of existing safe harbour provisions (which currently shield platforms from liability for content they are unaware of). The practical question for platform providers will be whether their platform's design, moderation practices, and risk management frameworks would withstand scrutiny under a regime of this kind.
- A new regulator: To oversee and enforce this expanded regime, the Committee recommends establishing a new independent regulator with modern enforcement tools and a flexible mandate, including the power to make or advise on secondary and tertiary legislation. The Committee has also urged the Government to establish an interim regulator within an existing agency as soon as possible.
- Social media restrictions for under-16s: The Prime Minister has committed to legislation before the end of 2026, and a member's bill is already in the ballot (see our previous article here). The Committee strongly supports the proposal to ban social media for under-16s, but:
- considers that appropriately moderated forums supporting education, civic participation, or access to services should not be caught; and
- acknowledges that challenges may arise in practice, including privacy risks from age verification, the inevitable use of VPNs to bypass the restrictions, and the difficulty of defining "social media" consistently across platform types.
- Ban nudify apps and regulate deepfake technology: The Committee draws a clear line between two related but distinct interventions.
First, it calls for an immediate ban on all "nudify" apps (apps that use AI to produce deepfake nude images) and the prohibition of non-consensual deepfake sexual imagery. The UK and Australia have already signalled equivalent bans, and New Zealand's own Deepfake Digital Harm and Exploitation Bill (introduced in October 2025) proposes to criminalise the creation, possession, publication, and sale of sexually explicit deepfakes.
Second, the Committee recognises that deepfake technology more broadly has legitimate uses and recommends the Government explore proportionate regulation. The goal is not to ban deepfake tools outright, but to ensure they cannot be deployed without safeguards. Businesses developing or deploying AI-generated content tools should be assessing now where their products sit on this spectrum and what safeguards they have in place.
- Algorithmic regulation and transparency: The Committee urges the Government to closely scrutinise the algorithms that shape what users see online. On the regulatory side, options range from prohibiting personalised recommendation systems on young people's accounts to requiring platforms to offer all users the ability to opt out of algorithmic feeds entirely. On the transparency side, the Committee recommends empowering a regulator to compel disclosure of how platform algorithms function and to require platforms to provide accredited researchers with access to algorithm design and operational data, mirroring obligations under the EU's Digital Services Act.
The implications are significant: proprietary algorithmic systems could soon be subject to both regulatory scrutiny and third-party research access in New Zealand. For any business relying on algorithmic recommendation or personalisation, this is a prompt to understand your own systems, document how they work, and consider how you would respond to a disclosure request.
The political picture
Many of the Committee's recommendations received unanimous cross-party support, which strengthens the likelihood of legislative action. However, there are notable points of divergence.
ACT New Zealand, which initiated the inquiry, expressed differing views on seven of the twelve recommendations - including the proposed regulator, age restrictions, the nudify app ban, deepfake regulation, algorithmic regulation and transparency, and further matters - arguing that the Committee has proposed sweeping measures that amount to government overreach without adequately defining the problem. The Green Party diverged on the age restriction proposal, citing privacy risks from age verification, the likelihood of driving youth to unregulated platforms, and the importance of online spaces for marginalised communities.
These political dynamics will shape both the pace and the final design of any regulation, but regardless of the eventual shape of reform, the message is clear: change is coming.
What happens next - and what to do now
The Government has 60 days (until 3 June 2026) to respond to the Committee's recommendations. That response will confirm which proposals it intends to progress and on what timeline. But given the cross-party support behind many of the recommendations and the Prime Minister's public commitment to age-restriction legislation this year, the direction of travel is clear.
Whether you operate a platform, develop AI tools, or advertise online, now is the time to:
- assess how your products, services, or advertising practices intersect with the proposed obligations - particularly in relation to young users;
- review any algorithmic recommendation, content moderation, or age-verification systems you rely on;
- ensure your board and senior leadership have visibility of the emerging online safety framework and its potential impact;
- consider the privacy implications of any new compliance measures, including age-verification requirements; and
- engage in public consultation processes as they arise - those who contribute at the formative stage have the greatest ability to shape the final regulatory settings.
The window to influence how this regime takes shape is open now, and it won't stay open indefinitely. Businesses that engage early in the regulatory process will be far better positioned to secure workable outcomes.
Get in touch
Our team has been closely tracking New Zealand's evolving online safety landscape and advising clients on the regulatory, commercial, and governance implications. If you'd like to discuss what these developments mean for your organisation, please get in touch with one of our contacts below.
Related reading: Access Denied: New Zealand's proposed social media ban for under-16s
Special thanks to Priya Prakash and Pippa Saunders for their assistance in writing this article.