
Keeping your chatbot in check - legislation governing AI use in public interactions

July 26, 2018

Contacts

Partners: Karen Ngan, Jania Baigent
Senior Associates: Louise Taylor, Raymond Scott

Digital & new technologies | Data protection (inc Privacy Bill and GDPR) | Artificial intelligence

Customer service has become an increasingly automated activity. From point-of-sale kiosks and touchscreens to automated phone systems and online portals, automation now handles much of the routine contact, with human employees commonly reserved for complex interactions and for back-up in case an automated system suffers a fault.

Artificial Intelligence (AI) has now taken things up a notch by creating exciting options for organisations to automate complex interactions with customers, in ways that can mimic ‘natural’ human communication.

These options include AI-powered ‘chatbots’, algorithm-based software capable of engaging in fluid verbal and written communication in a conversation-like manner. While there are obvious advantages to using these AI applications, organisations looking to do so need to ensure they stay on the right side of the law.

Specifically, they need to make sure any interactions or communications generated by AI applications do not breach NZ intellectual property, privacy, consumer, advertising and anti-spam legislation. The programming and data input of an AI application will form an important part of this compliance process.

Intellectual property laws

Intellectual property law prohibits ‘passing off’ as a competitor or a famous individual. Passing off issues could be triggered if an AI bot pretends to be someone else, or if its avatar resembles a known individual. There could also be an infringement of rights if the AI bot uses copyright works (such as photographs) or registered trade marks (such as those covering people’s names) without the consent of the rights owners.

An AI bot may be programmed so that it can take on a different personality to suit a particular customer type in order to increase the likelihood of a sale. If an AI bot knows that it can sell more if it pretends to be, say, Prime Minister Jacinda Ardern, then doing so could damage Ms Ardern’s reputation and potentially mislead customers. This could cause not only intellectual property issues arising from the use of Ms Ardern’s name and/or likeness, but also consumer law issues (discussed below).

Organisations should ensure their AI applications are programmed so that they are only able to adopt a defined personality. Any avatars should be designed so that they are unique, or - if they have a likeness of a well-known individual - permission should first be obtained from that individual.
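By way of illustration only, a guardrail of this kind might be sketched as follows. The persona name, the blocked-identity list and the check itself are hypothetical placeholders, not a reference to any particular chatbot platform.

```python
# Hypothetical sketch: constrain a chatbot to one approved, clearly fictional
# persona and refuse requests to impersonate a real, identifiable person.

APPROVED_PERSONA = "Ask Alice"            # the single persona the bot may present as
BLOCKED_IDENTITIES = {                    # illustrative names the bot must never mimic
    "jacinda ardern",
    "prime minister",
}

def choose_persona(requested: str) -> str:
    """Return the persona the bot will use, regardless of what was requested."""
    if any(name in requested.lower() for name in BLOCKED_IDENTITIES):
        # Impersonating a real person risks passing off and misleading consumers.
        print(f"blocked impersonation request: {requested!r}")
    return APPROVED_PERSONA               # the bot only ever adopts the approved persona

print(choose_persona("Jacinda Ardern"))   # -> "Ask Alice"
```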

Privacy law restrictions on collecting personal information

It is important to ensure that any collection, use and disclosure of personal information through use of an AI application complies with relevant privacy laws. This might include:

  • Making the user aware that the AI bot is collecting their personal information, and how their information will be used and disclosed. This step is typically achieved by presenting a privacy notice and/or a Privacy Policy to the user (a simple sketch of such a gate appears after this list).

  • Ensuring an AI application only uses personal information for the purposes for which it was collected, and that it does not use it for any unlawful purposes (eg fraud, or to discriminate against a consumer when supplying goods and services).

  • Ensuring that personal information collected by an AI application is accurate before it is used. This may involve checking verbal transcripts or historical messages between the AI application and users to ensure that personal information is recorded accurately.

  • If AI applications may be collecting personal information from individuals in the EU, care must be taken to comply with the EU General Data Protection Regulation (GDPR).
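A minimal sketch of the privacy-notice and purpose-limitation steps in the list above might look like the following. The notice wording, purpose labels and function names are hypothetical placeholders.

```python
# Hypothetical sketch: present a privacy notice before collecting personal
# information, and only allow later use for the purposes it was collected for.

PRIVACY_NOTICE = (
    "This chat assistant collects your name and contact details to answer "
    "your enquiry. See our Privacy Policy for how we use and disclose them."
)
DECLARED_PURPOSES = {"answer_enquiry", "follow_up_contact"}   # purposes stated in the notice

def start_conversation(session: dict) -> str:
    """Show the privacy notice before any personal information is requested."""
    session["notice_shown"] = True
    return PRIVACY_NOTICE

def may_use_information(session: dict, purpose: str) -> bool:
    """Permit use of collected information only for a declared, lawful purpose."""
    return session.get("notice_shown", False) and purpose in DECLARED_PURPOSES

session = {}
print(start_conversation(session))
print(may_use_information(session, "marketing"))   # -> False: not a declared purpose
```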

Consumer legislation

Businesses using AI in interactions with consumers need to ensure they comply with relevant consumer protection laws, including the Fair Trading Act (FTA) and the Consumer Guarantees Act (CGA).

FTA

To avoid putting the business in breach of the FTA, an AI bot will need to avoid activities such as:

  • using an AI application to make calls, or to engage in other conduct, that harasses or coerces a consumer into purchasing goods or services;

  • misleading or deceiving consumers (eg as to the nature, characteristics or purpose of goods and services); and

  • making misrepresentations about goods or services.

Organisations should also make consumers aware upfront that they are, or will be, interacting with an AI application, to avoid any suggestion that consumers are being misled.

CGA

Generally, goods and services supplied to a consumer must comply with a range of guarantees, including that they are fit for purpose - and this covers any particular purpose that the consumer makes known to the supplier.

If an AI application indicates to a consumer that a product is suitable for a particular purpose, or a consumer makes known to a business (including while interacting with an AI bot) that the product is required for a particular purpose, there is a guarantee that the product will be fit for that purpose. The business will then be accountable to the consumer if the product is not in fact fit for that purpose.

Organisations will need to ensure that:

  • their AI applications are restricted to making permitted representations about the goods/services; and

  • any purposes that consumers make known to the AI application, either expressly or by implication, are conveyed back to the relevant sales or other team in the business (a simple sketch of both steps follows this list).
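A simple sketch of both of these steps might look like the following; the product, the approved claim and the queue structure are hypothetical placeholders.

```python
# Hypothetical sketch: limit the bot to pre-approved product claims, and pass
# any purpose a consumer makes known back to the sales team for follow-up.

APPROVED_CLAIMS = {                      # illustrative, signed-off representations only
    "rain_jacket": "Keeps you dry in light rain.",
}

def describe_product(product_id: str) -> str:
    """Repeat only claims that have been approved for this product."""
    return APPROVED_CLAIMS.get(
        product_id, "Let me connect you with our team for details."
    )

def record_stated_purpose(product_id: str, purpose: str, sales_queue: list) -> None:
    """Capture the consumer's stated purpose so the business can confirm fitness."""
    sales_queue.append({"product": product_id, "stated_purpose": purpose})

queue: list = []
record_stated_purpose("rain_jacket", "multi-day alpine tramping", queue)
print(describe_product("rain_jacket"))
print(queue)
```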

Advertising Standards Codes of Practice (Advertising Codes)

An AI bot should be programmed so that it does not communicate in a way that could fall foul of the Advertising Codes. The Advertising Codes broadly prohibit advertising that misleads or deceives consumers, or that is not conducted in a socially responsible manner. For example:

  • advertising targeting children must have the child’s best interest as the primary consideration (eg it must not promote inappropriate behaviour or lifestyles);

  • advertising of alcohol must promote responsible drinking, and must not encourage alcohol consumption by minors;

  • advertising of financial investments must not portray unrealistic or exaggerated outcomes; and

  • advertising of gambling must not promise that the participant will win, or encourage them to gamble beyond their means.

Compliance with the Advertising Codes is voluntary but, in practice, almost universal, as organisations generally choose to avoid the negative publicity that can arise from an investigation by the Advertising Standards Authority.

Anti-Spam Act

Any business using AI applications for marketing purposes will still need to comply with anti-spam law (in New Zealand, this is the Unsolicited Electronic Messages Act 2007). If it is intended that AI bots will be used to carry out mass marketing to promote goods or services by email, SMS or instant messages, it is important that there is a clear understanding of what is permitted under these anti-spam rules.

For example, messages sent by AI applications should only be sent to people who have consented to receive such messages. The messages should have an unsubscribe facility and accurate sender information, and AI applications receiving communications will need to be able to recognise and process unsubscribe requests.
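A minimal sketch of these consent and unsubscribe checks might look like the following; the addresses, sender details and opt-out keywords are hypothetical placeholders.

```python
# Hypothetical sketch: send marketing messages only to consented recipients,
# include sender details and an opt-out, and honour unsubscribe requests.

consented = {"alice@example.com"}        # illustrative opt-in list

def send_marketing_message(address: str, body: str) -> bool:
    """Send only where consent exists, with accurate sender details and an opt-out."""
    if address not in consented:
        return False
    message = f"{body}\n\nFrom: Example Co, Wellington NZ. Reply STOP to unsubscribe."
    print(f"sending to {address}: {message!r}")
    return True

def handle_inbound(address: str, text: str) -> None:
    """Recognise and process unsubscribe requests automatically."""
    if text.strip().upper() in {"STOP", "UNSUBSCRIBE"}:
        consented.discard(address)

send_marketing_message("alice@example.com", "New winter range now in store.")
handle_inbound("alice@example.com", "STOP")
print(send_marketing_message("alice@example.com", "Follow-up offer"))   # -> False
```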

A failure to comply with the anti-spam rules could lead to onerous penalties including fines of up to $500,000.

Conclusion

AI provides businesses with opportunities for new and innovative ways to engage with their customers, potentially in more efficient and cost-effective ways than can be achieved through human interaction. However, the fact that the interaction is undertaken by a bot, and not a human, does not mean that a business can avoid the laws that would otherwise apply to it.