AI use across the public sector is increasing at a remarkable pace. The Department of Conservation uses it to assess land-use applications… ACC uses it for real-time call centre support… MBIE has talked about utilising it for procurement decisions… the Government is urging Pharmac and Medsafe to use AI to determine which medicines to approve… parliamentary select committees and local authorities are embracing it to review submissions made by members of the public on key issues…

The list goes on. A survey by the Government Chief Digital Officer found that 70 government organisations in New Zealand are now using AI in some form.

The use of AI in public decision-making brings both opportunities and risks. Troublingly, though, the law on how AI can be used by public decision-makers is unclear and untested. That uncertainty – coupled with the rapid uptake of AI, its role in important decisions, and low public trust – creates a high likelihood that decisions made with the help of AI will be challenged.

Last week Litigation Partner Nick Chapman spoke to RNZ's Kathryn Ryan on Nine to Noon about how this uncertainty is likely to lead to complex and high-stakes litigation, where the courts are asked to rule on what AI use in the public sector is permissible, and what is a step too far. Listen to the full interview here.

Opportunities for AI use by public decision-makers

We’ve previously written about the Government’s intention to take a “light touch, proportionate and risk-based” approach to regulating AI.

Consistent with that, Ministers and the Government generally have been quick to direct government departments and public agencies to make better use of AI in their day-to-day activities. That’s unsurprising given the possible benefits on offer when AI is used to make decisions:

  • Efficiency and scale: The public sector is increasingly being asked to ‘do more, with less’. AI offers a possible solution to that conundrum, particularly as regulatory environments become more complex. Faster processing times and the capacity to handle large volumes of information allow the public sector to move at speed.
  • Transparency: AI systems can provide auditable trails, making decision-making processes (potentially) more open to scrutiny. Of course, the degree of transparency is not a straightforward issue. The courts have previously been critical of public authorities employing software where commercial sensitivity concerns meant they were unable to properly assess whether the software infringed relevant rights.[1]
  • Fairer decision-making… maybe: Any human decision-maker carries unconscious bias, and the use of AI may be able to guard against that. However, AI decision-making depends on the data it is provided with. If that data contains errors or bias, those flaws are likely to be perpetuated.
  • Consistency: AI may help to standardise decisions across similar cases, reducing variability and producing fairer outcomes as a result. 

The likelihood of challenge, and the possible grounds for it

New Zealanders have low trust in AI. A recent KPMG report found that less than half of New Zealanders surveyed believe that the benefits of AI outweigh the risks – which was the lowest ranking of the 50 countries KPMG surveyed.[2]

Low trust means that public decisions made using AI are likely to be challenged by disaffected parties. The unsuccessful bidder in a tender process… patients whose medicine isn’t publicly funded… a commercial party that has a licensing or approvals application turned down… public submitters who feel their views have not been sufficiently taken into consideration. The list is endless.

The risks of AI use across the public sector also act as possible grounds for judicial review:  

  • Material errors: AI systems can produce incorrect or misleading outputs (‘hallucinations’). Additionally, decisions are often entrusted to specific individuals within the public sector because of their qualifications, expertise and experience, which may provide a degree of judgement that the relevant AI lacks.
  • Bias and discrimination: As noted above, AI may perpetuate errors or prejudices embedded in historical data, leading to discriminatory outcomes.
  • Procedural fairness: Concerns arise where individuals have a right to be heard in a decision-making process, and AI use undermines that right. By way of a simple example, a party that prepares a 30-page submission may expect that submission will be read in full by a human and not simply summarised by a computer program.
  • Improper delegation: Legislation and regulations give decision-making powers to particular individuals across government. Over-reliance on AI may lead to questions about who (or what) is actually making the relevant decision, and whether that’s consistent with the overarching power. 
  • Legislative ambiguity around the use of computer systems: A number of statutes specifically allow particular government departments and public entities to utilise “automated electronic systems” for decision-making.[3] These allowances may give rise to more questions than they answer. For example: have departments and entities covered by such legislation followed the correct process? What about departments and entities that don’t have an express allowance in their legislation? Are they entitled to use automated electronic systems in decision-making, or do they require parliamentary approval before doing so?

AI is already re-shaping public decision-making, but what the law allows and prohibits remains untested. How that balance is struck is likely to be a question for the courts – and it’s only a matter of time before they’re asked to answer it.

If you’d like to talk to one of our experts about the possible impacts of AI in public decision-making and what it could mean for you or your organisation, please get in touch.

[1] For example, in R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058 the use of facial recognition software was successfully challenged in circumstances where the relevant police force could not properly test and analyse the underlying data of the software it was using.

[2] KPMG, ‘Trust, attitudes and use of artificial intelligence: A global study 2025’.

[3] See, for example, the Social Security Act 2018, Customs and Excise Act 2018, and the Summary Proceedings Act 1957.
