AI-assisted hiring: where businesses win – and where they get caught out

Recruitment is being rapidly reshaped by automation. Candidates are using AI to draft applications, and employers are using AI to shortlist talent. For businesses, AI can conduct high-volume screening at speed, leading to faster hiring and more consistent recruitment processes. But AI also introduces legal and reputational risks if it is treated as a substitute for careful human assessment.
AI has slipped into every stage of hiring
AI is being used to scan and rank CVs against role criteria, automate candidate communications and scheduling, assess recorded video interviews, and in some cases predict a candidate’s “fit” based on historic hiring data. For example, Spark New Zealand has described using an AI “smart interviewer” tool (via Sapia.ai/PredictiveHire) in contact-centre recruitment to streamline early-stage screening and shortlisting at scale.
For HR teams, the benefits are clear: faster triage, fewer admin headaches, and the same rules applied to everyone. With New Zealand’s unemployment rate currently sitting around 5.4%, we are hearing from employers that their applicant pools are large and, at times, overwhelming. This makes the pressure to automate early-stage screening even more understandable.
Supporters say this consistency reduces unconscious bias. Sometimes it can. But consistency is not the same thing as fairness, especially when the “rules” are learned from yesterday’s hiring decisions. If the AI tool you are using is trained on your organisation’s past hiring patterns, and those hiring decisions tended to favour particular schools, career paths, genders or gap-free CVs, the model will replicate those preferences at scale. What looks objective can reward people who know how to speak the system’s language, while screening out candidates whose strengths don’t fit neat categories.
In some cases, this type of screening becomes unlawful discrimination. If candidates are screened out due to a protected characteristic (such as sex, family status or race), an employer may ultimately be held liable for the actions of their discriminatory AI tool. Note that AI tools used for recruitment purposes are classified as “high risk” under the European Union’s AI Act, which is widely regarded as setting the benchmark for AI regulation. The AI Act imposes a range of substantive operational and compliance obligations on employers who use AI tools for recruitment, including requirements of transparency about use, human oversight and continuous monitoring.
Candidates are using AI too
Candidates are increasingly using AI tools to draft CVs and cover letters, optimising for applicant tracking systems and keyword screening. Some even try to game the system (such as by embedding hidden keywords in white text) to influence automated screening tools. AI can help people present their experience more clearly, but it also means applications arrive polished, uniform and harder to tell apart.
As candidates’ written materials are increasingly assisted by AI, and early screening is increasingly automated, organisations may need to adjust how they assess an individual’s genuine skills and potential. That could be a shift towards assessments that better reflect real work, such as introducing more practical tasks, work samples, structured reference checks and panel interviews.
AI interviews: useful, but only as a first filter
AI-led interviews are becoming increasingly common. Candidates respond to structured questions, often by video, and their answers are summarised or scored before a person watches the recording. For example, Qantas has described using Sapia.ai’s chat-based interview at the early stage of recruitment, where candidates answer the same structured questions and responses are scored by AI to support shortlisting.
Used well, structured and technology-enabled interviews can improve consistency. However, risks arise where scores or summaries are treated as determinative, or where the process screens out less conventional or more diverse candidates before a human has assessed nuance and transferable skills.
In our view, a blended process is a good compromise. Let automation handle high-volume triage, but keep human-led interviews (and, ideally, more than one interviewer) for assessing communication, empathy, adaptability and real-world problem solving. For higher-impact roles, we recommend preserving a meaningful human conversation early enough to test judgment, values and communication.
What does responsible AI hiring look like?
As a starting point, if you are using an AI tool you should be able to describe what the tool does and does not do, what inputs it relies on, what it is optimising for, and where human decision-makers intervene. That discipline is not just about legal compliance. It also supports better hiring decisions and clearer accountability.
In practice, that often means testing tools for bias and false negatives, validating them against the specific role (rather than generic “fit”), being clear with candidates about where AI is used, avoiding fully automated decision-making, and meeting privacy and employment-law obligations.
For New Zealand employers, the path forward is less about choosing between humans and machines, and more about combining them well to ensure that those candidates with individuality and diversity are not falling through the cracks in the algorithm.
Read the article in BusinessDesk published last Friday: AI-assisted hiring: where businesses win – and where they get caught out | BusinessDesk [paywall]
Special thanks to James Burnett for his assistance in writing this article.
