2025 marked a turning point in the AI landscape. Organisations worldwide moved from experimenting with AI to embedding it deeply into their core business functions. Regulatory developments accelerated in parallel: major regulatory milestones were reached in the EU, significant court decisions were released overseas, and New Zealand saw important developments in its privacy and biometrics rules.

As governments work to keep pace with fast-moving technology, 2026 is shaping up to be a defining year for global AI regulation. We break down the key themes to watch, and what they mean for New Zealand businesses.

Key takeaways

  • AI regulation will intensify globally, with emerging legal and ethical issues continuing to challenge regulators as AI becomes more deeply integrated into society.
  • New Zealand’s regulatory approach will likely be influenced by jurisprudence emerging from several high-profile cases due to be heard in the US and UK courts later this year, and by the EU Artificial Intelligence Act’s rules for high-risk AI systems, which take effect in August.
  • While global rules and norms continue to be established, we expect public figures will increasingly turn to trade marks and other traditional legal tools in an effort to protect their personality rights from misuse by AI.
  • AI will continue to raise complex governance challenges for organisations. However, investment in responsible AI governance is fast becoming a strategic differentiator, enabling organisations to navigate complex regulations, strengthen trust, and accelerate safe adoption.

Protection of personality rights will take centre stage 

AI makes it easier than ever to replicate voices, faces and other personal attributes, and that capability is increasingly being exploited in scams and in unapproved commercial uses of people’s likenesses. A recent high-profile (and extremely concerning) example is the Grok AI chatbot, which has come under scrutiny worldwide for producing sexualised deepfakes of people (predominantly girls and women) without their consent and distributing them via the X platform. The chatbot continued generating sexualised images even after being explicitly told the subjects did not consent. Several legal investigations have followed.

We expect that public figures will continue to look for ways to protect their personas against AI-generated deepfakes and other unauthorised exploitation.

A notable example is actor Matthew McConaughey, who recently secured eight trade mark registrations in the US covering elements of his image and brand, including his well-known “alright, alright, alright” catchphrase. Amongst his accepted marks were multiple motion marks, a non-conventional form of trade mark that protects a specific movement or sequence of movements. One such motion mark is a video sequence of McConaughey standing in front of a porch, smiling. This strategy aims to create a legal perimeter around his likeness and provide grounds to challenge AI-generated imitations.

This approach is novel, and its effectiveness is yet to be tested in litigation. The outcome of any such test cases may determine whether trade marks can serve as a viable tool for safeguarding personal image rights against AI misuse. Nevertheless, we expect other public figures to follow McConaughey’s example.

In New Zealand, people can register trade marks for their name, still images of their likeness, video recordings of themselves, and sound recordings of their voice. Examples include word trade mark registrations for LORDE, EDMUND HILLARY, and JONAH LOMU. There are no local precedents yet involving AI-related trade mark infringement of a person’s likeness, but that is not to say high-profile individuals in New Zealand won’t try a McConaughey-type approach soon.

Protection can also be sought in New Zealand under privacy law, copyright law, the torts of misrepresentation and defamation, the misleading conduct provisions of the Fair Trading Act 1986, and (if enacted into law) the Deepfake Digital Harm and Exploitation Bill (discussed in our earlier article here). Criminal enforcement is also available against fraudulent uses of deepfakes.

It will be interesting to see what measures online platforms take to address deepfakes, including under their terms of use and handling of complaints.

Overseas litigation will help define global AI rules 

A wave of high-profile lawsuits overseas is set to shape the rules governing AI training data and model outputs.

Key cases to watch include:

  • Getty Images v Stability AI (UK High Court, 2025)
    The court held that training an AI model on copyrighted images does not make the model itself an infringing copy. Getty was partially successful in establishing that the watermarks reproduced in images generated by Stability’s AI model infringed Getty’s trade marks. The court emphasised the need for real-world evidence of infringing outputs when assessing whether a trade mark has been infringed, setting an important threshold for “output liability”.

    Getty has reportedly been granted permission to appeal the decision. However, the UK case is only one part of Getty’s broader multi-jurisdictional litigation strategy, and Getty has indicated in media reports that it will take the findings of fact from the UK ruling forward to its US case.
  • New York Times v OpenAI (US)
    This is a major upcoming test of whether training AI models on journalistic text violates copyright law, and whether AI-generated outputs can amount to derivative works. The decision may reshape global licensing expectations for AI training data.
  • Disney v Midjourney (US)
    This case focuses on AI outputs that allegedly replicate Disney’s character designs and visual styles. It is expected to yield a landmark ruling on look-alike content and model liability.

While these cases sit outside New Zealand, their influence could be substantial. New Zealand courts often look to overseas jurisprudence in emerging areas of law, and many local organisations rely on international AI tools that may be affected by these rulings.

The EU AI Act: A make-or-break year

The EU Artificial Intelligence Act (EU AI Act) entered into force on 1 August 2024 and is being phased in progressively. From 2 August 2026, the bulk of its obligations are expected to apply, including:

  • High-risk AI systems
    Obligations relating to quality management, technical documentation, data governance, human oversight, logging, post-market monitoring and serious incident reporting.
  • AI systems interacting with individuals
    Rules for chatbots, emotion recognition and biometric categorisation, and mandatory labelling of AI-generated or manipulated content.
  • Deployer obligations
    Requirements for human oversight, ensuring representative input data (where the deployer controls it), operational monitoring, log retention, suspension of use if risks arise, and (in some cases) fundamental rights impact assessments.

The EU AI Act has extraterritorial reach. New Zealand organisations may be captured if:

  • their AI systems are made available to EU users, or
  • their AI outputs are used in the EU.

The EU AI Act is shaping up to become the global regulatory benchmark. While few countries are expected to replicate it in full, we expect many of those that adopt AI-specific laws to opt for targeted regulation of high-risk use cases (particularly in healthcare, finance, critical infrastructure and public services).

New Zealand’s approach to AI regulation contrasts with that of the EU. The EU is regulating AI through a comprehensive statute, whereas New Zealand currently takes a “light-touch, proportionate, risk-based approach” that focuses on amending or supplementing existing laws where necessary to address novel AI risks. However, with this being an election year, we will be watching closely to see whether any political parties signal a change to this approach, particularly in light of the growing number of AI professionals calling for New Zealand to regulate AI.

AI is challenging traditional governance models

AI governance is becoming more complex and resource intensive. Organisations now face:

  • Fragmented regulatory environments, requiring legal, risk, IT and product teams to collaborate more closely than ever.
  • Rising expectations of boards, which are increasingly expected to understand AI risks and opportunities, maintain ongoing AI literacy, and oversee AI strategy.
  • Pressure to assess vendor and supply chain AI, with transparency, testing and contractual safeguards becoming standard.
  • A shortage of skilled AI governance professionals, increasing the need for internal upskilling and capability building.

Despite these challenges, strong governance is emerging as a competitive advantage. Organisations that invest early in responsible AI frameworks can accelerate adoption, build customer trust and support cross-border expansion.

Looking ahead

As we move deeper into 2026, AI regulation will continue to evolve rapidly. New Zealand organisations should stay alert to international developments, strengthen internal governance, and ensure they have visibility over how AI is used across their business and supply chain. 

Special thanks to Alex Lyne for her assistance in writing this article.
