What is Agentic AI?

Agentic AI refers to artificial intelligence systems that can independently pursue objectives, make decisions, and take actions without constant human intervention. Unlike traditional generative AI (such as ChatGPT or image generators), which responds passively to prompts, agentic AI acts more like a proactive digital assistant - capable of planning, executing tasks, and interacting with other systems to achieve a specified goal.
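For readers who want a concrete mental model, the sketch below shows the plan-act loop that distinguishes an agent from a passive prompt-response model. It is a minimal illustration only: the Agent class and its plan_next_step and execute methods are invented for this article, and real agent frameworks differ considerably in the details.

```python
# A minimal, illustrative sketch of an agent's plan-act loop.
# Every name here (Agent, plan_next_step, execute) is hypothetical.

from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    history: list = field(default_factory=list)

    def plan_next_step(self):
        # In a real system, a language model would propose the next action
        # based on the goal and the history so far; here it is left abstract.
        ...

    def execute(self, action):
        # In a real system, this would call an external tool or service
        # (search, a booking platform, a payment gateway, and so on).
        ...

    def run(self, max_steps: int = 10):
        # The defining feature: the agent loops autonomously, acting and
        # observing until it judges the goal complete, rather than returning
        # a single response to a single prompt.
        for _ in range(max_steps):
            action = self.plan_next_step()
            if action is None:  # the agent judges the goal complete
                break
            self.history.append((action, self.execute(action)))
```

The point of the sketch is the loop in run: an agent keeps choosing and executing actions until it decides the goal is met, whereas a traditional generative model returns one response and stops.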

Senior Associate Michelle Dunlop spoke to the NBR this week about the legal implications of using agentic AI in the retail marketplace. Read the full interview here [paywall].

Examples of agentic AI systems - ranging from conceptual to already operational - include:

  • self-driving vehicles;
  • automated software development;
  • AI-powered customer service agents; and
  • automated robotics in industrial settings.

Recent developments in online commerce highlight the growing use of agentic AI. For instance, a user could instruct an AI agent to “book me a trip to Paris next weekend,” and the agent would not only identify options but go on to make reservations and confirm bookings autonomously.
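To make that scenario concrete, here is a hypothetical sketch of the booking flow. The search_flights and book functions are stand-ins invented for illustration - no real travel API is being called - and the hard-coded options are chosen to foreshadow the failure modes discussed below.

```python
# A hypothetical sketch of the "book me a trip to Paris" flow. Everything
# here is invented for illustration; search_flights and book stand in for
# real travel APIs.

def search_flights(destination: str) -> list[dict]:
    # Stand-in for a flight-search API.
    return [
        {"route": "AKL -> CDG", "price": 2_400.0},  # Paris, France
        {"route": "AKL -> PRX", "price": 1_900.0},  # Paris, Texas
    ]

def book(option: dict) -> str:
    # Stand-in for the step where the agent commits the user's money.
    return f"Booked {option['route']} for ${option['price']:,.2f}"

def book_trip(destination: str, authorised_budget: float) -> str:
    options = search_flights(destination)
    # A naive agent optimises on price alone - which is exactly how a user
    # can end up in Paris, Texas rather than Paris, France.
    choice = min(options, key=lambda o: o["price"])
    if choice["price"] > authorised_budget:
        raise PermissionError("Booking would exceed the authorised spend")
    return book(choice)

print(book_trip("Paris", authorised_budget=5_000.0))
```

Note that even the budget guard does not save this user: the cheaper flight is within budget, so the agent happily books the wrong Paris. That gap between what the code permits and what the user intended is where the legal questions below begin.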

While the potential convenience and productivity gains are substantial, these systems also raise significant legal and regulatory questions - particularly in relation to liability, accountability and compliance.

Liability and accountability

One of the central legal challenges agentic AI presents is determining liability when things go wrong. In the “book me a trip to Paris” scenario, consider what happens if the AI agent:

  • misinterprets the prompt and books a flight to Paris, Texas instead of Paris, France;
  • spends $10,000 on flights when the user had authorised only $5,000;
  • makes decisions contrary to user preferences, such as prioritising price over location when selecting accommodation (so you end up on the outskirts of Paris); or
  • books a hotel that doesn’t exist due to hallucinated data.

In each of these cases, the question arises as to who is responsible - the AI developer, the deploying business, or the end user?

While the answer will always be fact-specific, contract law provides a logical starting point in our scenario. Legally binding agreements could be formed with third parties such as airlines, hotels, and travel agents based on the AI’s interactions with their respective online booking platforms. Under New Zealand’s existing laws, provided the usual criteria for forming a valid contract are met, transactions entered into over the internet with ‘click to accept’ or similar acceptance mechanisms are generally enforceable. However, our laws assume that a legal person (an individual or corporation) has authorised the transaction, not an autonomous system acting independently. Further, while the contractual formation criteria may appear to be satisfied, in each of the scenarios above the user’s intent has been misrepresented and the transaction does not align with their expectations.

The concept of agency is clearly relevant here. Under our agency law, agents (human or corporate) can generally bind their principals when acting within their authority. But our laws were not designed to accommodate non-human agents acting autonomously. Courts and the legislature may in the future need to consider novel questions such as whether implied authority can be inferred in autonomous decision-making, and whether the user should bear the risks of their agentic AI’s actions.

Determining who is at fault can be difficult in an agentic AI context, given the complex supply chains involved in the development and deployment of such AI tools. Even where the fault of a developer or deployer can be established, the extent of their liability is likely to hinge on the applicable contractual terms. Many AI developers, for example, exclude or limit liability for the accuracy and performance of their AI systems. Whether such exclusions are enforceable might depend on considerations such as consumer protection laws and whether the disclaimers are fair and transparent.

Some theorists have queried whether highly advanced agentic AI could itself be held liable. At present, AI agents are not recognised as legal persons under New Zealand law, and in our view, this is unlikely to change in the near term. Nonetheless, international discussions around granting AI a limited legal status - similar to a corporation - are ongoing, particularly in jurisdictions grappling with advanced AI deployments.

Privacy and intellectual property implications

Agentic AI systems typically require access to large volumes of personal information to function effectively. In our scenario, the AI might need the user’s passport data and financial details to book the trip. Moreover, for the AI to make better-informed decisions, a user may need to give it even greater access to their personal information (eg past purchases or travel preferences).

Other privacy concerns include:

  • whether informed consent from the user is obtained before data is used;
  • whether personal information is being repurposed for AI training without the individual’s knowledge; and
  • how individuals can exercise their rights under the Privacy Act when decisions are made autonomously.

On the intellectual property front, complexities arise around ownership of AI-generated works. If an agentic AI system creates something novel - whether a design, invention, or piece of content - who owns the rights? Current IP frameworks do not accommodate non-human inventors or creators, as evidenced by global rulings in the DABUS cases, where AI was denied inventor status for patent purposes in most jurisdictions.

Additionally, there is the ongoing risk of copyright infringement when AI systems are trained on protected datasets or generate outputs that closely mirror copyrighted works.

A case for regulatory reform?

New Zealand currently regulates AI through a patchwork of general legal frameworks, such as the Privacy Act 2020, Fair Trading Act 1986, Consumer Guarantees Act 1993, and Human Rights Act 1993. There is no AI-specific statute, and the Government currently has no plans to introduce one.

As agentic AI becomes more prevalent, questions are being raised about whether existing laws are sufficient to manage the unique risks posed by autonomous systems. While wholesale legislative reform may not be imminent, we expect sector-specific regulation to emerge in high-risk areas, including healthcare (where incorrect AI decisions could lead to death or injury), financial services (where AI could cause significant economic harm) and critical infrastructure (where malicious or malfunctioning AI could cause widespread disruption).

This sectoral focus aligns with international trends. The European Union’s AI Act, for example, classifies AI systems by risk level and imposes strict rules on high-risk and prohibited applications, particularly where human rights or public safety are at stake. AI systems that pose unacceptable risks (such as social scoring or biometric surveillance) are prohibited, whereas high-risk systems (such as those used in biometric identification, critical infrastructure or law enforcement) are subject to stricter obligations, including risk assessments, transparency, and human oversight.

Key considerations for businesses exploring agentic AI

Given the fast-evolving nature of agentic AI, best practices are still emerging. However, businesses considering its adoption should take the following into account:

  • Purpose and scope: Clearly define the problem the AI is intended to solve. Avoid adopting technology for its own sake.
  • Due diligence and risk assessment: Evaluate the AI system for safety, reliability, bias, and legal compliance - including IP and privacy issues. Consider pilot testing in a controlled environment.
  • Governance and oversight: Implement clear policies on acceptable use. Maintain a “human-in-the-loop” model for high-impact decisions and ensure a manual override mechanism is in place (a minimal sketch of such a gate follows this list).
  • Contractual safeguards: Review and negotiate terms with AI vendors to ensure appropriate warranties, indemnities, and liability clauses are in place.
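As a rough illustration of the governance point above, the sketch below gates high-impact actions behind explicit human approval. The dollar threshold, the input-based approval channel, and all function names are assumptions made for illustration, not a recommendation of any particular implementation.

```python
# A minimal sketch of a human-in-the-loop gate: the agent acts autonomously
# on low-impact actions but routes anything above a risk threshold to a
# person. The threshold and approval channel are illustrative assumptions.

APPROVAL_THRESHOLD_NZD = 1_000.0

def requires_human_approval(action: dict) -> bool:
    # Here "high-impact" simply means spending over the threshold; real
    # deployments would apply richer risk criteria.
    return action.get("cost", 0.0) > APPROVAL_THRESHOLD_NZD

def execute_with_oversight(action: dict) -> str:
    if requires_human_approval(action):
        answer = input(f"Approve {action['description']} "
                       f"(${action['cost']:,.2f})? [y/N] ")
        if answer.strip().lower() != "y":
            return "Action blocked by human reviewer"  # the manual override
    return f"Executed: {action['description']}"

print(execute_with_oversight({"description": "book hotel", "cost": 2_300.0}))
```

The design choice worth noting is that the override sits outside the agent’s own logic: approval is enforced by the surrounding system, so a misbehaving agent cannot simply talk its way past it.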

Get in touch

If you would like to discuss any aspect of this article, or need any assistance, please contact one of our experts.

Special thanks to James Burnett for his assistance in writing this article.
