AI under scrutiny: accuracy, accountability and risk

Artificial intelligence tools are now embedded in everyday professional life. Used carefully, they can improve efficiency and support drafting and research. Used uncritically, they present legal, ethical and reputational risks for organisations and their leadership.
A series of recent decisions and high‑profile incidents across the courts, public sector and professional services industry illustrates the dangers of relying on AI outputs without verifying their accuracy or appropriateness. We explore some of the lessons that can be learned from these decisions and incidents.
Key takeaways
- Accountability for accuracy cannot be delegated to AI. Courts and regulators are clear that organisations and individuals remain fully responsible for the accuracy and integrity of material produced with AI, including legal submissions, reports and advice. Unverified AI output can carry serious legal consequences.
- Poor AI controls expose organisations to legal, privacy and reputational risk. Recent incidents across the justice system, public sector and professional services demonstrate how AI misuse can result in fabricated authorities, misuse of confidential information, and loss of trust from regulators, courts and the public.
- Strong AI governance is a board‑level issue. These cases reinforce the need for clear policies, staff training, and robust review processes to ensure AI is used as an assistive tool, and not as a substitute for professional judgement or quality assurance.
AI use and the courts
The Supreme Court recently considered the uncritical use of AI-generated material in court submissions in Jones v Family Court at Whangārei[1]. The self‑represented applicant filed submissions citing a number of legal authorities that did not support the propositions for which they were cited. While the case names were real, the citations were inaccurate and the submissions attributed to those cases principles of law they did not contain.
The Supreme Court made it clear that those filing submissions must ensure their accuracy. Reliance on false citations in serious cases may amount to obstruction of justice under the Crimes Act 1961 or contempt of court under the Contempt of Court Act 2019. While the comments arose in the context of a self‑represented litigant, the Court emphasised that misuse of AI in legal proceedings has serious implications for the administration of justice and public confidence in the justice system. This is unsurprising given that the quality of judicial decision-making depends (in part) on ensuring that sound and accurate authorities are put before the courts.
Similar concerns arose in the Employment Relations Authority decision QTR v BXD[2], where the Authority found that an employee had used a generative AI platform to assist with preparing responses in the Authority’s investigation process. The Authority identified multiple issues, including citations of hallucinated legal cases, incorrectly cited authorities, and the uploading of confidential and personal workplace information to an AI platform.
The Authority issued a reminder that information provided by generative AI ought to be checked before being relied on in tribunal and court proceedings, referring to the Guidelines for use of generative artificial intelligence in Courts and Tribunals as guidance for future proceedings.
AI use and the public sector
The Department of Corrections recently identified a small number of instances where staff used AI tools in ways that did not align with its internal policy: staff accessed the free version of Microsoft Copilot to assist with drafting formal reports. Corrections’ AI policy expressly prohibited the use of Copilot for drafting, structuring or generating reports, or for undertaking assessments containing personal information, and technical controls were in place to help enforce this policy.
Once the issues were identified, Corrections swiftly responded, undertaking a privacy impact assessment and making it clear to staff that misuse of AI is unacceptable.
While no personal information was ultimately disclosed, the incident highlights how easily AI tools can be misused where policy boundaries are not clearly understood, reinforced or monitored. Corrections had appropriate policies and technical controls in place; the challenge was ensuring staff understood where the line sat between appropriate and inappropriate use of the technology. This case illustrates a reality many public sector organisations face in trying to harness the productivity benefits that AI tools offer whilst ensuring staff understand the tools’ limitations. Our previous article on the legal risks associated with the use of AI in public decision-making is available here.
The lesson is not “don’t use AI” but that using AI does not absolve you of accountability for its outputs. It is important that any use of AI involves appropriate human input and oversight, and that you have a responsible AI governance programme in place that aligns with any legal requirements and enables you to respond quickly where misalignment is identified.
A recent Crown Law Long-term Insights Briefing highlights growing concern from Crown Law that generative AI and deep‑fake technology are increasingly capable of producing convincing but false evidence, posing a direct challenge to the integrity of court proceedings. Senior legal advisers have warned Parliament that existing evidential rules may struggle to keep pace, with courts facing difficult questions about authenticity, admissibility and the reliability of digital evidence. While not yet widespread in New Zealand, officials see early warning signs that AI‑driven evidential challenges will become more common as the technology advances.
AI use in professional services
In October 2025, Deloitte admitted that a report prepared for the Australian federal government contained multiple AI‑generated errors, including fabricated academic references and a misattributed judicial quote. Deloitte agreed to partially refund the AUD 440,000 consultancy fee as a result.
Despite Deloitte’s assurance that the substantive findings were unaffected, the incident attracted significant public scrutiny and prompted questions about whether stricter requirements should apply to AI use in government and professional services engagements.
The Deloitte incident is a clear governance warning for professional advisers: the use of generative AI does not dilute accountability for accuracy, quality or assurance. Despite human review, AI‑generated hallucinations made their way into a high‑value government report, resulting in financial remediation, reputational damage and heightened scrutiny of the firm’s controls.
For boards, the case underscores the need to ask advisers how AI is used, what verification processes sit around it, and where responsibility ultimately rests, particularly for high‑stakes, client‑facing work.
Concluding remarks
Taken together, these incidents point to several clear lessons. AI can be a powerful assistive tool, but uncritical reliance carries real risks. The message from courts and public agencies is consistent: use AI carefully, understand its limitations, and always check the output.
Get in touch
Please get in touch with our contacts if you have any questions about this article.
Special thanks to Bridget de Lautour for her assistance in writing this article.