The global AI regulatory landscape continues to evolve. A year on from our first review of global AI regulations, we provide an update on the latest AI regulatory developments in New Zealand and a snapshot of the approaches to regulating AI taken by some of our key trading partners.

That snapshot reflects a wide spectrum of views on how governments should best manage advances in AI. At one extreme, the US is rolling back existing regulations in favour of a free-market approach, while at the other, the EU continues to implement the extensive requirements of its AI Act on a phased basis.

What’s new for NZ?

The New Zealand approach - using existing legislation and focusing on policies and principles rather than binding rules - is in keeping with the approach taken by most of the jurisdictions we survey below. 


The New Zealand government has remained steadfast in its decision to regulate AI in a “light touch, proportionate” way, amending existing legislation and introducing new principle-driven frameworks rather than enacting an overarching AI statute.

One legislative measure utilised for AI regulation is the Privacy Act 2020. In August, the Office of the Privacy Commissioner (OPC) published its Biometrics Code of Practice (Biometrics Code), together with associated guidance materials, to regulate the collection and use of biometric information. You can read more about the Biometrics Code in our article here.

The adoption of the Biometrics Code came swiftly after the OPC released its findings into the Foodstuffs North Island (FSNI) facial recognition technology (FRT) trial in selected New Zealand supermarkets. The OPC found that the FSNI FRT trial complied with the Privacy Act due to the robust privacy safeguards implemented by FSNI during the trial. While it does not “green light” the use of FRT in New Zealand, the OPC’s report includes useful guidance addressing how FRT could be implemented lawfully in New Zealand. You can read more about the OPC’s findings in our recent article here.

A members’ bill, known as the Deepfake Digital Harm and Exploitation Bill, was introduced to Parliament earlier this year. The Bill aims to amend the Crimes Act 1961 and the Harmful Digital Communications Act 2015 by bringing digitally altered or synthesised images within the definition of an intimate visual recording. The proposed changes are intended to address the proliferation of deepfake pornography and extend legal protections to those whose likeness is used without consent.

In late June, New Zealand’s Government Chief Digital Officer (GCDO) appointed an AI Expert Advisory Panel to guide the responsible use of artificial intelligence within the public service. The Government has also released guidelines which set out expectations for how agencies should use generative AI (see here), to support responsible uptake and use of generative AI technologies across the public sector.

In early July, the Government released New Zealand’s first AI Strategy, which aims to accelerate AI adoption and innovation in the private sector while managing risks responsibly. Alongside it, MBIE released its Responsible AI Guidance for Businesses, which provides practical tips to help organisations apply the strategy in practice. The AI Strategy reiterates the Government’s previously stated desire to avoid AI-specific legislative reform, highlighting reliance on existing legislation and regulations covering privacy, consumer protection and human rights to manage risks. It also confirms the importance of international collaboration and compatibility, adopting the OECD AI principles as the guiding ethical framework for responsible AI. It is hoped that the AI Strategy will help lift AI adoption rates across businesses by providing reassurance of a supportive policy framework, a commitment to removing unintended barriers to AI use, and clear regulatory guidance.

Concerns are mounting over the government’s “light touch, proportionate” approach to AI regulation. In an open letter, more than twenty AI experts have urged the government to take strong action, calling for a bipartisan effort to produce risk-management-based AI regulation and to establish a national AI oversight body. The letter highlights New Zealanders’ low level of trust in AI - ranking third to last of 47 nations in KPMG's global AI trust study - and points to other global indicators that reflect widespread public concern. The government has yet to respond publicly.

The Privacy Commissioner has joined 18 other regulators worldwide in signing a joint statement on trustworthy AI data governance. The statement recognises AI’s potential, but also its risks, emphasising that privacy-by-design principles should be built into AI systems, and that AI should be developed in accordance with data protection and privacy rules. Signatories committed to clarifying lawful data processing, setting standards, sharing safety practices and reducing legal uncertainty to support innovation. 

European Union

The EU AI Act took effect on 1 August 2024, although its provisions are subject to staged implementation. Since 2 February 2025, AI systems categorised as presenting an ‘unacceptable risk’ have been banned; these are systems that pose a clear threat to people’s safety, livelihoods and rights.

The General-Purpose AI (GPAI) Code of Practice was published by the EU’s AI Office on 10 July 2025 (here). Though not mandatory, GPAI model providers can adhere to the Code of Practice to demonstrate compliance with their obligations under the AI Act until the European Standards come into effect.

The next substantive tranche of requirements, which relate to notified bodies, general-purpose AI models, governance, confidentiality and penalties, took effect on 2 August 2025.

The EU AI Act was expected to be fully implemented by 2 August 2026. However, delays to the implementation programme are emerging, perhaps demonstrating how difficult comprehensive AI regulation is to manage in practice. In July, the CEOs of 46 large tech companies wrote to the Commission requesting a two-year pause in the rollout of the Act, citing unclear, overlapping and increasingly complex rules that disrupt their ability to do business in Europe. The European Commission has rejected the moratorium, but intends to introduce a digital simplification package in December to help ensure industry understands and can apply the rules. The EU has also faced intense pressure from the US Government and Big Tech over the Act, and over the drafting of the GPAI Code of Practice in particular.

United Kingdom

In late 2024, the UK Government released an update to its 2023 AI Regulation White Paper (here), which reinforced that the UK Government does not intend to enact an overarching AI statute in the near future. Rather, it will continue to iterate on sector-specific, principles-based frameworks underpinned by five core principles: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and redress.

However, in March 2025, a private members’ Artificial Intelligence (Regulation) Bill was introduced to the House of Lords. If enacted, the Bill would introduce several governance structures, effectively codifying the principles outlined in the Government’s White Paper. It would also create a centralised AI Authority to oversee AI development and ensure compliance with new legal requirements. As a private members’ bill in the House of Lords, the Bill does not have the backing of the UK Government and faces an uncertain future.

Meanwhile, the four key regulators under the Digital Regulation Cooperation Forum (DRCF) continue to publish guidance documents and frameworks to advance AI regulation in the UK. For example, the Information Commissioner’s Office (ICO) earlier this month launched a new AI and biometrics strategy outlining in further detail its plans to regulate high-risk applications, including the use of AI in automated decision-making, facial recognition, and the training of generative AI models.

On 11 June 2025, the UK Government’s controversial Data (Use and Access) Bill was passed and is set to become law. The Bill introduces reforms designed to modernise and streamline legislation as part of an effort to bolster data use and governance across key industries in the UK. One of its more controversial aspects is that it will allow technology companies to use creative content for AI development without first getting permission from the rightsholder, unless the rightsholder has actively opted out.

United States

In January 2025, President Trump issued an Executive Order on Removing Barriers to American Leadership in AI, which rescinded former President Biden’s Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence. Trump’s Executive Order calls for all federal departments and agencies to review any “barriers” put in place during Biden’s term, to better position the US as a global leader in AI innovation.

Following this, in late May, the U.S. House of Representatives narrowly passed a bill proposing a ten-year moratorium preventing individual states from enforcing laws that regulate AI. The bill was somewhat controversial: while technology companies welcomed the approach, state policymakers were concerned about the implications for the responsible and ethical development of AI in the country. However, early this month, the U.S. Senate voted convincingly, by 99 to 1, to strike the moratorium from the bill (though this may not be the last we hear of it).

In late September, California became the first state to mandate public, standardised safety disclosures from developers of advanced AI models, under the newly enacted Transparency in Frontier Artificial Intelligence Act.

China

China is set to introduce new ‘Labelling Rules’, which will make it mandatory for AI-generated content to be labelled as such, both implicitly (that is, within the metadata) and explicitly on the content itself, whether text, audio, images or the like.

Japan

Japan took a significant step in its AI regulatory regime in May, passing the “Act on the Promotion of Research, Development and Utilisation of Artificial Intelligence-Related Technologies”, which came into force on 4 June 2025.

The Act establishes guiding principles and policy mandates, setting a framework for future laws and policies on AI rather than imposing specific requirements or strict penalties for non-compliance. For the most part, the Act relies on existing legal frameworks and business cooperation to regulate AI. However, it introduces some new powers that will enable the Japanese government to (amongst other things) conduct investigations and disclose the names of entities involved in harmful AI use (such as facilitating crimes, leaking personal information or infringing copyright). The Act is likely to be supplemented by enhanced measures to address issues relating to deepfakes, particularly deepfake pornography.

Singapore

Singapore does not have an overarching AI statute, but is instead taking a sectoral approach with individual ministries, authorities and commissions publishing guidelines and regulations.

In May 2024, the Singapore Government released the Model AI Governance Framework for Generative AI, otherwise known as the “Model Gen-AI Framework”. The Framework is designed to offer guidance to key stakeholders on addressing the risks of generative AI while supporting innovation and responsible development. Alongside this, the Government launched a Generative AI Evaluation Sandbox to facilitate experimentation based on real-world use cases.

The non-binding Framework reflects Singapore’s preference for a principles-based, multi-stakeholder approach to AI governance. The Framework will continue to evolve as technology and policy discussions progress.

Australia

In September 2024, the Federal Government consulted on proposed legislative measures to mandate ten “guardrails” for high-risk AI systems. The public consultation closed late last year, but no legislative proposals have yet emerged.

In the meantime, discussions on how current legislative frameworks can regulate the evolving AI landscape continue. Australia is actively developing a risk-based regulatory framework with voluntary ethics principles, while also relying on existing laws to address AI-related risks. Various regulators in Australia, most notably those forming part of the Digital Platform Regulators Forum, have been releasing their own papers and guidelines relating to AI.

In addition, discussions on regulating AI in particular industries continue. For example, Australia’s Therapeutic Goods Administration (TGA) has been considering whether its current legislative framework needs to be updated to address the challenges that AI presents. This has been prompted by growing concern that AI products are being used for health and therapeutic purposes yet may fall outside the current therapeutic goods regulatory framework. The TGA’s review, released earlier in September, confirms that current medical device regulations are broadly suitable, but that targeted reforms may be needed to address key gaps (including for AI tools such as medical scribes and digital mental health apps).

We have also seen the Australian Government release a set of AI and cyber risk model clauses, designed to assist public sector agencies in procuring digital products involving AI.

Concluding remarks

The EU continues to lead the regulatory charge with its stringent measures under the AI Act, although the expected delays to its implementation programme perhaps demonstrate how difficult comprehensive AI regulation is to manage in practice.

Globally, we are seeing a trend toward less stringent regulation, with many jurisdictions opting to utilise existing legislation, flexible regulatory mechanisms (such as guidance, codes of practice, and AI principles), and Government policy statements rather than dedicated AI-specific regulations. Where regulation is being introduced, it tends to be targeted towards specific industries (eg healthcare) or specific types of AI harms (eg deepfake pornography).

Regardless of whether specific AI regulation is introduced or existing laws apply, it is clear that the adoption and growing sophistication of AI will continue to create challenges for lawmakers in fostering the development of AI and harnessing the clear benefits its adoption can bring, while also ensuring that there are appropriate safeguards and protections against misuse and potential harm. At the same time, developers, deployers and users of AI solutions and tools need to be aware of, and comply with, potentially applicable laws, and also be mindful of harms that could arise through the use of AI, including those that may not be immediately obvious.

Get in touch

Please reach out to one of our experts for further information.

Special thanks to Bridget de Latour for her assistance in writing this article.

