What you need to know

  • Deepfakes are media that have been manipulated using artificial intelligence. They have many potential positive applications, but are also often created for deceptive purposes.

  • They are not specifically regulated in New Zealand, but a range of existing laws can apply to their creation, content and dissemination.

  • Around the world, countries are introducing regulations specifically targeted at minimising misuse of the technology and reducing the harmful impacts deepfakes may have.

  • Deepfakes give businesses a tool to engage with customers in innovative and creative ways, but care must be taken to ensure awareness of, and compliance with, the range of laws that can apply to their use.

  • Businesses also need to be aware of the real risks presented by deepfakes and take steps to mitigate them. This can include educating staff about the risks, exercising greater vigilance around payment authorisations and reviewing insurance cover.

While Photoshopping was all the rage back in the 90s and 00s, a new wave of digitally altered media is sweeping the world. Spotting a Photoshopped image may be hard, but spotting a deepfake is even harder.

Like some Photoshopped images, deepfakes can be, and have been, used for nefarious purposes including blackmail, pornography, deception, identity theft, fraud and disinformation. But deepfakes take this to the next level.

What are deepfakes?

Deepfakes are created through the digital manipulation of images, videos or audio clips with deep learning algorithms. The output is new media with false properties, such as a video showing a person doing or saying something they never actually did or said.

Although manipulated media is not a new phenomenon, the emergence of sophisticated software programs coupled with widespread availability has led to a proliferation of deepfakes across the Internet in recent years. While in many cases they are used for innocent purposes (eg face-swaps or music videos), there are also many examples of deepfakes being used for political and malicious purposes.

Prime Minister Jacinda Ardern has been the subject of several deepfake videos, including one in which she is seen blowing smoke rings and reacting to an illicit drug’s psychoactive effects. Many viewers took the video as genuine footage of Ardern using illicit drugs and questioned her fitness to serve as Prime Minister.

Overseas, we’ve seen deepfakes used to spread disinformation around the Russia-Ukraine conflict. For example, a video appeared on the website of a prominent Ukrainian media outlet in which President Volodymyr Zelenskyy appeared to tell his soldiers to lay down their arms and surrender to Russia’s invasion.

Deepfakes can also be dangerous in that they may allow people to dismiss otherwise legitimate videos as ‘fake’. There were rumours that Russian President Vladimir Putin was the subject of a deepfake, with the now-infamous ‘hand through the mic’ video. The rumours caused people to speculate on the president’s health and location, but the video was ultimately found to be legitimate.

These examples show how easily and quickly deepfakes can lead to an erosion of trust and illustrate the gravity of their potential consequences.

Implications for businesses

Deepfakes open up a realm of opportunities for new and innovative solutions, particularly for businesses in the media and entertainment industry. Examples range from deceased actors continuing to ‘play’ their characters long after they have passed away to famous personalities using deepfakes to deliver key messages (such as David Beckham appearing to speak nine languages to promote the end of malaria).

However, deepfakes also create risks for businesses. We’ve seen examples of deepfakes being used with the intention of spreading disinformation about companies to influence their stock prices (as in the case of the deepfaked ‘journalist’ Maisy Kinsley). Some of the other key risks deepfakes present to businesses include:

  • security risks - such as people using synthetic voice or video technology to impersonate employees or customers and carry out sophisticated phishing scams;
  • reputational harm; and
  • improper use of proprietary materials.

How do you spot a deepfake?

Spotting a deepfake is generally quite challenging. Paying close attention and looking for tell-tale signs that something is a bit ‘off’ can help. Checking the source is important too - the Jacinda Ardern video was made by a YouTube channel called ‘Genuine Fake’, which specialises in creating deepfake videos of public figures.

Several technologies have been developed to help spot deepfakes, such as Microsoft’s Video Authenticator, which can analyse a photo or video and provide a confidence score indicating whether the media has been artificially manipulated. However, not all of these technologies are publicly available, and deepfake techniques are evolving faster than most detection software can keep pace.
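For readers interested in how such tools are typically used in practice, the short sketch below shows how an image could be scored against a pretrained deepfake classifier. It is a minimal illustration only, not a reference to Video Authenticator or any particular product: the model identifier is a placeholder assumption, and the example assumes the open-source Python transformers library.

```python
# Illustrative sketch only: score an image with a pretrained image-classification
# model from the Hugging Face Hub. The model name below is a hypothetical
# placeholder - substitute a model trained to distinguish real from AI-generated media.
from transformers import pipeline

MODEL_ID = "example-org/deepfake-image-detector"  # hypothetical identifier

# Build a standard image-classification pipeline around the chosen model.
detector = pipeline("image-classification", model=MODEL_ID)

def score_image(path: str) -> dict:
    """Return a label-to-confidence mapping for a single image file."""
    results = detector(path)  # e.g. [{"label": "fake", "score": 0.97}, ...]
    return {r["label"]: round(r["score"], 3) for r in results}

if __name__ == "__main__":
    print(score_image("suspect_frame.jpg"))
```

In a business setting, a score like this would only be one signal among several; as noted above, detection models lag behind generation techniques, so a low ‘fake’ score should not be treated as proof of authenticity.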

Are deepfakes regulated in New Zealand?

Deepfakes are not specifically regulated under New Zealand law. But there is a range of existing laws that regulate to some extent their creation, content and dissemination. These include laws relating to:

  • privacy and personal information protection, including the Privacy Act 2020
  • electronic media, such as under the Films, Videos, and Publications Classification Act 1993 and the Harmful Digital Communications Act 2015
  • intellectual property protection, including passing off and the Copyright Act 1994
  • misleading and deceptive conduct under the Fair Trading Act 1986
  • criminal offences under the Crimes Act 1961

There is a sense that our current laws are not wholly suited to the task and do not go far enough to protect the public against the negative impacts of synthetic media. The current laws rely on harm having been caused and the victim taking action retrospectively, rather than, for example, placing responsibility on those who develop, use or deploy the technology to take steps to avoid or minimise the risk of its harmful effects.

A New Zealand Law Foundation-funded report recognised that the current legal framework is not adequate to deal with the challenges deepfakes bring, but did not advocate for specific regulation of deepfakes. Rather, it recommended that our existing regulatory frameworks be regularly reviewed and updated as new technologies emerge.

Regulating deepfakes without unduly stifling innovation and free speech would be a challenge. Having said that, there is a case for reviewing our current legal framework to identify where changes could better unify the laws and minimise harms before they arise.

Enforcement is also seen as a key challenge, particularly where nefarious actors are based outside of New Zealand.

What about overseas?

Some overseas jurisdictions are proposing law changes to combat the negative effects of deepfakes. These developments will no doubt influence the direction of any future New Zealand legislative developments. For example:

  • China is proposing regulations that will require creators to obtain consent before altering or using someone’s text, audio, images or videos for deepfake media. Anyone who does not follow the regulations will be fined, and repeat offenders may face criminal liability.
  • The European Union’s proposed Regulation of Artificial Intelligence includes several provisions designed to help combat the negative impacts of deepfakes, including specific transparency requirements for AI systems that generate or manipulate content.
  • Norway has recently introduced regulations requiring retouched or altered images and videos in advertising to carry a disclaimer stating that they have been edited.

Rather than waiting for the law to catch up, some tech companies are taking matters into their own hands and regulating deepfakes via the terms of use for their platforms. Twitter, for example, requires deepfake media shared on its platform to be clearly labelled, and any media that is shared in a deceptive manner and is likely to impact public safety or cause serious harm will be removed or flagged with a warning.

Where to from here?

No specific regulatory changes are on the horizon, and due to the ever-changing nature of the technology it would be difficult for deepfake-specific regulations to keep up. Nor would an outright ban on deepfake technologies seem appropriate: although they have a number of harmful uses, deepfakes also have a range of positive applications. We expect any regulatory changes would be introduced via incremental changes to our current laws.

For businesses, deepfakes have the potential to help them connect with customers and deliver services in innovative and engaging ways, but when using the technology it is important to ensure awareness of, and compliance with, the applicable laws.

It’s important too for businesses to be aware of the risks presented by deepfakes, and to take steps to mitigate those risks. This includes educating staff on the risks, encouraging vigilance, enhancing due diligence and compliance procedures (particularly around payment authorisations), investing in deepfake-detection technology and reviewing insurance arrangements for potential losses arising from deepfakes.

Get in touch

If you would like to discuss any of the above issues that may affect your business, please get in touch with one of our experts. 

Special thanks to Po Tsai and Hurya Ahmad for their assistance in writing this article.
