Automation under review: A Court of Appeal warning to decision-makers - and implications for the use of AI

Nick Chapman spoke to NBR about this case; see his interview here [paywall], published on 11 March.
In brief
A recent Court of Appeal decision has put government agencies on notice: automated and AI-driven decisions are not beyond judicial review. Yorston v Attorney-General [2026] NZCA 15 concerned errors in an automated convictions history report (CHR). The Court rejected the submission that the errors were beyond the scope of judicial review because they were created by an automated system. Instead, it found that the production of CHRs is an exercise of executive power and is therefore open to judicial review. The Court was clear that the Ministry of Justice retained full control of the automated case management system and bore ultimate responsibility for its accuracy.
While the appeal ultimately failed because the underlying issues were moot, the ruling makes clear that automation does not insulate government action from judicial scrutiny. It also signals closer judicial attention to e-outsourcing, automation and AI, particularly where the technology sits beyond the direct supervision of the decision-maker, underscoring the need for careful contractual arrangements with any third-party providers.
Background
The appellant sought judicial review of the Ministry’s production of his CHR and criminal and traffic history report, alleging that they were inaccurate and had been relied upon to his detriment in later proceedings and in job and tender applications.
The CHR is generated by Ministry software drawing on information from the courts’ case management system. In this case, the Ministry accepted that the system contained an error which conflated conviction dates with sentencing dates, resulting in inaccurate reporting of the number of the appellant’s convictions. That error was subsequently corrected, including by removing the conviction date column entirely from the appellant’s CHR.
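The judgment does not set out the underlying code, but the kind of conflation described can be illustrated with a simple sketch. The Python below is purely hypothetical: the record fields, the event model and the de-duplication logic are assumptions for illustration, not a description of the Ministry's actual system.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class CourtEvent:
    charge_id: str    # one charge can generate several events
    event_type: str   # "conviction" or "sentencing" (hypothetical schema)
    event_date: date

history = [
    CourtEvent("CHG-1", "conviction", date(2020, 3, 2)),
    CourtEvent("CHG-1", "sentencing", date(2020, 4, 17)),  # same charge, later date
    CourtEvent("CHG-2", "conviction", date(2021, 6, 9)),
    CourtEvent("CHG-2", "sentencing", date(2021, 6, 9)),   # sentenced same day
]

def count_convictions_buggy(events):
    # Bug: keys report rows on (charge, date), so a charge whose conviction
    # and sentencing fall on different dates is counted twice.
    rows = {(e.charge_id, e.event_date) for e in events}
    return len(rows)

def count_convictions_fixed(events):
    # Fix: count each charge once, using only conviction events.
    return len({e.charge_id for e in events if e.event_type == "conviction"})

print(count_convictions_buggy(history))  # 3 -- CHG-1 is double-counted
print(count_convictions_fixed(history))  # 2
```

A defect of this shape lives entirely in the reporting logic rather than the source records, which is one reason the Court's focus on who controls and is responsible for the system matters.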
Justiciability of automated systems
The High Court had found that populating a CHR through software was not a “statutory power of decision” under the Judicial Review Procedure Act 2016 and was also not an exercise of public power as it was “wholly administrative in nature” and “essentially mechanical”.
Although the correction of the errors disposed of the appeal (the underlying issues being moot), the Court of Appeal disagreed with the High Court's reasoning. The Judges questioned the High Court's characterisation of CHR production as non‑reviewable, observing that:
- CHRs are generated by computer systems adopted, managed and operated under the authority of the Ministry of Justice;
- The underlying case management system is used daily by police, courts and judges, including for sentencing; and
- The system operates as part of the executive powers of government, rather than as a purely clerical function.
Against that background, the Court expressed reservations about treating such processes as merely administrative. Drawing on authority emphasising that all exercises of public power are, in principle, reviewable, the Court found that the production of CHRs was the result of the exercise of executive powers of Government and “must be amenable to [judicial] review”. It suggested that the real issue is not whether review is available at all, but the scope and limits of that review.
The Court did not finally determine the point, as it was unnecessary to do so on a moot appeal, but left the issue open for future cases.
Interaction with the Privacy Act 2020
The Court reaffirmed that the Privacy Act 2020 provides a separate and comprehensive regime for correcting inaccurate personal information, including complaint processes through the Privacy Commissioner and proceedings before the Human Rights Review Tribunal.
Judicial review remains available only in limited circumstances and is distinct from challenges to decisions made on the basis of incorrect information, which may be addressed through privacy law mechanisms.
Why this matters for government use of AI and automated tools
Although the case concerned case management software rather than artificial intelligence, the Court of Appeal’s reasoning has clear implications for the increasing use of AI and automated decision‑support systems across government.
In particular:
- Automation does not equal immunity: Labelling a process as “mechanical” or system‑generated will not necessarily place it beyond judicial scrutiny. Where automated systems operate under statutory authority and materially affect rights or interests, they may still involve an exercise of public power.
- Accountability follows system design: Errors arising from software architecture, data handling, or system logic remain the responsibility of the public agency that adopts and deploys the system.
- Reviewability may shift to system governance: Future challenges are likely to focus less on individual outputs and more on how automated systems are designed, validated, monitored and corrected.
For agencies deploying AI‑enabled tools, this decision reinforces the importance of robust auditability and error‑correction pathways, particularly where outputs are relied on in regulatory, enforcement or adjudicative contexts.
What agencies should keep in mind
Government agencies using AI or automated decision‑making tools should consider:
- Mapping automated systems to statutory functions: Identify where AI or automated tools are used to support, inform or generate outputs relied on in statutory or executive decision‑making, even if no human discretion is exercised at the output stage.
- Treating system outputs as exercises of public power: Assume that automated outputs may be reviewable where they materially affect rights, interests or legal processes, and design governance accordingly.
- Embedding accuracy and correction mechanisms by design: Ensure systems include clear processes for detecting errors, correcting outputs, and explaining changes - particularly where personal or legally significant data is involved.
- Maintaining auditability and explainability: Keep records of system logic, data sources, updates and known limitations so agencies can explain how an output was produced if challenged (a minimal sketch of one possible audit record follows this list).
- Aligning AI governance with Privacy Act obligations: Ensure AI systems support, rather than undermine, obligations to keep information accurate, up to date and not misleading, and integrate Privacy Act correction pathways into operational workflows.
- Regularly reviewing and testing automated tools: Conduct ongoing validation and assurance to identify systemic errors before they affect downstream decisions or proceedings.
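By way of illustration only, the auditability point might translate into something like the following minimal sketch, in which every generated report is logged alongside the software version, data sources and a content hash so the output can be reconstructed and explained later. All names and fields here are assumptions for the sketch, not a reference to any agency's actual design.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(system_version, data_sources, inputs, output):
    """Build one audit entry for a single automated output.

    Captures enough context (version, sources, a hash of inputs and output)
    to explain later how the output was produced and whether it has changed.
    """
    canonical = json.dumps({"inputs": inputs, "output": output}, sort_keys=True)
    return {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "system_version": system_version,   # which release of the software ran
        "data_sources": data_sources,       # where the underlying records came from
        "content_sha256": hashlib.sha256(canonical.encode()).hexdigest(),
        "output": output,
    }

# Hypothetical usage: log the generation of a conviction-count output.
record = audit_record(
    system_version="report-generator 4.2.1",
    data_sources=["case-management-system"],
    inputs={"subject_id": "ABC123"},
    output={"conviction_count": 2},
)
print(json.dumps(record, indent=2))
```

The design choice worth noting is that the record is written at the moment of generation, not reconstructed after a challenge: an agency that can point to a contemporaneous entry of this kind is far better placed to explain a disputed output.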
Get in touch
If you would like to discuss any of the above, please get in touch with one of our experts.
Special thanks to Claire Boniface for her assistance in writing this article.