
Doing business with AI - a heads-up for business leaders

July 19, 2019

Contacts

Senior Associate Louise Taylor

Artificial intelligence | Digital & new technologies | Data protection (inc Privacy Bill and GDPR) | Fintech

Artificial intelligence (AI) is revolutionising business in New Zealand. Applications are already having a huge impact in areas such as fraud detection, customer support, cyber security and data analytics.

However, organisations looking at using or developing AI technology need to anticipate – and mitigate – the reputational risks involved before deploying it. We outline some of those risks below and provide tips on how to manage them.

Doing business with AI - what you need to know

  • Understand how reputational risks can arise when using or developing AI
  • Build trust in AI technology with customers, employees and other key stakeholders
  • Understand the risk of algorithmic bias
  • Keep up with global best practice for AI product design and data ethics to stay in step with public expectation

Build trust with key stakeholders

Customers and employees can react negatively when they feel organisations have failed to be transparent about what AI technology is being used and the impact it could have on them. Conversely, they are more likely to accept the use of AI technology when organisations do act transparently.

For example, facial recognition technology is transforming surveillance and identity checking, but it can also be highly contentious among those it is used on. An organisation using this technology would need to weigh the benefits against the potential impact of negative public perception, and take mitigating steps as appropriate. One such step is being transparent about its use.

Recent negative public reaction against a supermarket operator’s in-store anti-theft measures highlighted the danger to corporate reputation from a quiet roll out of facial recognition technology. Customers and commentators expressed privacy-related concerns, and continuing media coverage has focused on the operator’s lack of transparency about which of its stores are using the technology.

If your business is going to use AI technology for surveillance, also ensure that you are complying with your obligations under privacy laws and guidelines when collecting, using and disclosing the information (eg the Privacy Commissioner’s CCTV guidelines and Principles for Use of Data and Analytics). Also check if the European GDPR principles apply, and comply strictly with them if they do.

Take an ethical approach

Tech giants like Amazon have been in the spotlight recently after being challenged by employees, shareholders and others over the ethical implications of AI technology they’ve developed. While this media attention has contributed to growing unease about certain AI applications, it has also sparked conversations worldwide about the importance of using AI technology ethically.

In response, businesses are starting to consider more than pure profit when investing in AI technology. Corporate governance standards in New Zealand, Australia, the UK, Europe and the US also increasingly demand that companies consider the views of customers, employees and other stakeholders when making business decisions.

Business leaders should develop and promote policies regarding AI use that are aligned with best practice. If customers, employees or other stakeholders are raising red flags about an AI application, then listen to them, engage with their concerns and respond appropriately. Businesses that get this right stand to gain reputational and financial benefits.

Understand the risk of algorithmic bias

The risk of ‘algorithmic bias’ in AI technology is another issue under the media spotlight. Algorithms are essentially the operating instructions of AI technology. Through their design, the data fed into them, or other factors, algorithms can end up making errors that favour – or disadvantage – one group of users over another.

For example, Amazon’s facial recognition technology “Rekognition” has received negative media attention following claims that the software is making racially biased, and therefore inaccurate, decisions. Examples like this underline the need for businesses to understand the algorithms and data powering their AI applications, or risk damage to their reputation and bottom line.

Ensure compliance with privacy obligations

Stay informed about how your organisation is using or plans to integrate AI technology within its operations, and ensure that this use will comply with applicable privacy laws.

If your organisation is developing AI technology, it should implement privacy-by-design principles from the outset. This embeds privacy principles into the architecture and design of technology, and can help organisations meet their legal compliance obligations and minimise the risk of privacy breaches.

Although there is nothing specific in NZ’s Privacy Bill about privacy-by-design, it has long been considered industry best practice. It has also now been enshrined in the GDPR, which is already having a flow-on effect globally – now that the GDPR has raised the bar for privacy standards, consumers worldwide will come to expect these as standard. By adopting global best practice early, businesses give themselves the best chance to stay in step with public expectation.

See also our previous comments about ‘chatbot’ compliance in our July 2018 FYI: https://www.simpsongrierson.com/articles/2018/keeping-your-chatbot-in-check-legislation-governing-ai-use-in-public-interactions

 

Contributors: maddy.rowe@simpsongrierson.com