The European Union’s Artificial Intelligence Act (AI Act) is set to become the world’s first comprehensive legislation governing the development, deployment, and use of AI systems.

In this article, we look at how the Act will work, its potential impact on New Zealand businesses, and what you need to do to prepare for when the Act comes into effect.

What you need to know

Crucially, the AI Act:

  • is widely expected to be approved in the first half of this year and will enter into force 20 days after its official publication, with its obligations applying in phases over the following two to three years
  • has extra-territorial scope, applying to NZ entities whose AI systems are placed on the market in the EU or whose use of AI systems affects people located in the EU
  • introduces a risk-based classification system that includes an outright prohibition on certain AI systems considered to pose an unacceptable risk
  • has obligations largely targeting ‘high-risk’ AI systems and general-purpose AI systems (including generative AI systems like ChatGPT)
  • imposes significant penalties for non-compliance.

What is regulated?

The AI Act regulates “AI Systems”, which it defines as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. This definition is broad and aligns with the Organisation for Economic Co-operation and Development (OECD)’s 2019 Recommendation on Artificial Intelligence.

The AI Act also regulates general-purpose AI (GPAI) models, which it defines as “an AI model that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications”. GPAI models are often referred to as “foundation models” because they can serve as pre-trained bases upon which other, more specialised AI systems are built. OpenAI’s ChatGPT and Google’s Gemini are examples of systems built on GPAI models. Specific regulation of GPAI models was deemed necessary by the EU given the significant risks they pose when they are highly capable and widely used.

Who is regulated?

In terms of who will be captured, the AI Act imposes obligations at all levels of the AI supply chain, ie on providers, deployers, importers, distributors, and product manufacturers of AI systems. The AI Act has extra-territorial scope, applying to entities that place their AI systems on the EU market, that have a place of establishment in the EU, or whose AI system’s output is used in the EU.

What are its key requirements?

The AI Act will introduce a risk-based classification system that includes an outright prohibition on certain AI systems considered to pose an unacceptable risk. Most of the AI Act’s obligations are targeted at ‘high-risk’ AI systems.

1.    Prohibited AI Systems

AI systems that pose an unacceptable risk or a clear threat to individuals’ safety, health, or rights are prohibited under the AI Act. This category includes, for example, AI systems that use social scoring techniques, real-time biometric identification in publicly accessible spaces for law enforcement, and predictive policing techniques.

2.    High-Risk AI Systems

There are two broad categories of high-risk AI systems:

  • AI systems that are used as a product, or as the safety component of a product, that is already subject to the EU’s product safety legislation (eg medical devices, vehicle safety, toys, marine equipment, and certain machinery).
  • AI systems that are specifically designated in the AI Act as high-risk, such as AI systems used in biometric identification, critical infrastructure, education and vocational training, essential public and private services, and law enforcement. A provider whose AI system would otherwise be considered high-risk can apply for an exemption if it can establish that the system does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons.

There are strict obligations on providers of high-risk AI systems, including requirements around risk assessments, data quality, documentation and record-keeping, transparency, human oversight and intervention, and security. Providers must also test their AI systems for compliance with the AI Act before placing them on the EU market and register them in a publicly accessible database.

There are also obligations on deployers (users) of high-risk AI systems, including requirements to use the system in accordance with the provider’s instructions for use, ensure human oversight to the extent possible, monitor the input data and operation of the system, and retain certain records (including automated logs for at least six months).

3.    Limited Risk AI Systems

Limited risk AI systems must comply with transparency obligations (eg informing users that they are interacting with AI when the context is not already clear) to allow users to make their own informed decisions about the AI system and assess whether to continue using it.

Providers of AI systems (including GPAI systems) that generate synthetic audio, image, video, or text content, or that create ‘deepfakes’ through the manipulation of audio, images, or video, are required to ensure that the relevant outputs are labelled in a machine-readable format and detectable as artificially generated or manipulated.
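By way of illustration, the sketch below shows one way a machine-readable label might be attached to a generated image, using Pillow’s PNG text chunks. This is a minimal, hypothetical example only: the AI Act does not prescribe a labelling format, the key and generator names below are invented, and real deployments would likely adopt an interoperable provenance standard such as C2PA.

```python
# Illustrative only: attach a machine-readable "AI generated" label to a
# PNG using Pillow's text chunks. The key names here are ad hoc, not a
# format required by the AI Act.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (256, 256))  # stand-in for generated content
metadata = PngInfo()
metadata.add_text("ai-generated", "true")
metadata.add_text("generator", "example-gpai-model")  # hypothetical name

image.save("output.png", pnginfo=metadata)

# A downstream tool can read the label back from the saved file:
reloaded = Image.open("output.png")
print(reloaded.text.get("ai-generated"))  # -> "true"
```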

4.    Minimal Risk AI Systems

Minimal risk AI systems are those that do not fall into any of the above categories. They will not be regulated under the AI Act, although businesses are encouraged to comply with its requirements voluntarily.

How are GPAI models regulated?

The AI Act establishes a separate two-tiered structure for regulating GPAI models, with base-level obligations for all GPAI models and additional obligations for those GPAI models that pose a systemic risk.

All providers of GPAI models will have to comply with transparency obligations such as:

  • drawing up technical documentation of the GPAI model (eg its training and testing processes);
  • making available information and documentation to downstream system providers intending to integrate the GPAI model into their AI system;
  • making publicly available a detailed summary of the content used to train the GPAI model; and
  • establishing a policy to comply with EU copyright laws.

Providers of free and open licence GPAI models only need to comply with the last two obligations above, unless the GPAI model poses systemic risk.

GPAI models that pose a systemic risk have additional obligations, including performing model evaluations, assessing and mitigating possible systemic risks, tracking and reporting serious incidents to the European Commission, and ensuring adequate cybersecurity protections are in place. A GPAI model is considered to pose a systemic risk if it has high impact capabilities or is identified as such by the Commission. A GPAI model is presumed to have high impact capabilities if the cumulative amount of compute used for its training, measured in floating point operations (FLOPs), is greater than 10^25. For context, it is estimated that ChatGPT’s underlying model was trained using approximately 10^24 FLOPs.
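To make the threshold concrete, the sketch below estimates training compute using the widely cited heuristic that training FLOPs ≈ 6 × parameters × training tokens. The parameter and token counts are illustrative assumptions, not figures for any actual model.

```python
# Rough check of whether a model's training compute crosses the AI Act's
# 10^25 FLOP "systemic risk" threshold, using the common heuristic that
# training compute ~ 6 x parameters x training tokens.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute in floating point operations."""
    return 6 * n_parameters * n_training_tokens

# Illustrative figures only: a hypothetical 70B-parameter model
# trained on 2 trillion tokens.
flops = estimate_training_flops(n_parameters=70e9, n_training_tokens=2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")  # ~8.40e+23
print("Presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False
```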

What are the penalties for non-compliance?

The penalties for non-compliance with the AI Act are significant. The maximum penalty for non-compliance with the AI Act’s provisions on prohibited AI practices is a fine of up to 7% of the offending company’s annual global turnover or EUR 35 million, whichever is higher.
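A minimal sketch of how the ‘whichever is higher’ cap works in practice (the turnover figures are purely illustrative):

```python
# Maximum fine for prohibited-practice breaches under the AI Act:
# the higher of EUR 35 million or 7% of annual global turnover.

def max_prohibited_practice_fine(annual_global_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * annual_global_turnover_eur)

# A company with EUR 2 billion in turnover faces up to EUR 140 million;
# a smaller company is still exposed to the EUR 35 million floor.
print(f"EUR {max_prohibited_practice_fine(2_000_000_000):,.0f}")  # EUR 140,000,000
print(f"EUR {max_prohibited_practice_fine(100_000_000):,.0f}")    # EUR 35,000,000
```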

When does it come into effect?

The pre-final text was unanimously endorsed by all 27 EU member state ambassadors on 2 February 2024. The formal adoption vote is provisionally scheduled for 10-11 April 2024, and no major changes to the current draft are expected. The AI Act will enter into force 20 days after publication in the EU Official Journal.

Following publication, the AI Act’s provisions will come into effect in stages (see the worked example after this list), such that:

  • the ban on prohibited practices will apply six months after entry into force;
  • the obligations on GPAI models will start after 12 months (or 24 months if the GPAI model is already on the market);
  • the obligations for high-risk AI systems will apply after 24 months (or 36 months where AI forms a safety feature of a regulated product); and
  • other provisions will be effective after 24 months.
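As a worked example, the sketch below calculates indicative compliance dates from an assumed entry-into-force date. The date used is an assumption for illustration only; the actual date will depend on when the Act is published in the Official Journal.

```python
# Indicative compliance dates under the AI Act's staged timetable,
# calculated from an assumed entry-into-force date.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the date `months` months after d (day-of-month 1 keeps this safe)."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, d.day)

entry_into_force = date(2024, 7, 1)  # assumption for illustration only

# GPAI models already on the market get 24 months rather than 12.
milestones = {
    "Ban on prohibited practices": 6,
    "GPAI model obligations (new models)": 12,
    "High-risk AI systems and other provisions": 24,
    "High-risk AI as a safety feature of a regulated product": 36,
}

for label, months in milestones.items():
    print(f"{add_months(entry_into_force, months).isoformat()}  {label}")
```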

How does the AI Act impact New Zealand?

While the AI Act is primarily aimed at regulating the development, deployment, and use of AI systems in the EU, it has extra-territorial effect, meaning it may apply to New Zealand businesses that provide AI systems to EU customers. Additionally, global businesses may decide that their local subsidiaries will voluntarily comply with the AI Act as a matter of policy, treating it as the ‘gold standard’ for AI governance.

As the first comprehensive AI legislation in the world, the AI Act is likely to set a precedent for global AI governance and influence the development of AI regulations in other jurisdictions, including New Zealand. The use and supply of AI systems in New Zealand are regulated through existing laws (for example, privacy, data protection, copyright, human rights, consumer protection, and harmful digital communications laws), and to date no comprehensive AI legislation has been tabled. However, we are seeing an uptick in non-binding guidance on the development, deployment, and use of AI being issued by regulators and industry groups (for example, the Office of the Privacy Commissioner’s guidance on privacy expectations around AI use) to help fill perceived gaps in our legislative approach.

New Zealand legislators will no doubt be closely monitoring the regulatory approaches of the EU and our other key trading partners when considering how to further regulate AI here. For example, the Australian Federal government has indicated it may look to implement legislation controlling the use of AI in high-risk settings, and China’s State Council recently announced that a comprehensive AI law is on its legislative agenda.

What should businesses do to get ready?

Businesses that are developing and deploying AI would be well advised to start preparing for the AI Act, including by:

  • Auditing the use, development, and supply of AI systems within your business and its supply chains.
  • Mapping and documenting relevant processes (eg databases, training, cybersecurity).
  • Considering the risk category your business’s current and/or proposed AI systems are likely to fall under for the purposes of the AI Act, and familiarising yourself with the applicable requirements.
  • Conducting privacy impact and algorithmic impact assessments before AI systems are implemented or developed, and putting in place procedures for ongoing risk identification.
  • Reviewing existing contractual arrangements with third party providers and suppliers.
  • Taking note of the staggered compliance deadlines for enforcement under the AI Act.

Get in touch

If you have any questions regarding the EU’s AI Act or how it may impact your business, please get in touch with one of our experts.

Special thanks to Harrison Brown and Priya Prakash for their assistance in writing this article.
