
Deepfake scams: how organisations can avoid falling victim to a ‘deepfake-out’

September 13, 2019

Contacts

Partner: Karen Ngan
Senior Associate: Louise Taylor

Data protection (incl Privacy Bill and GDPR) | Legal innovation & technology

Just as organisations are starting to get a handle on phishing, whaling and other forms of cybercrime, a sophisticated new criminal threat is emerging using ‘deepfake’ AI technology.

Criminals are reportedly using deepfake tools to defraud organisations by impersonating managers and demanding fund transfers. While there have been no reported cases of deepfake scams in New Zealand to date, organisations need to assume that it will only be a matter of time before these scams are attempted here.

In summary - what you need to know

  • Deepfake technology allows users to produce sophisticated fake images, video or voice recordings of people. They can be very difficult to detect and the technology is widely available

  • In March 2019, fraudsters used deepfake technology to impersonate a German chief executive on a phone call to convince the CEO of a UK subsidiary to make a large monetary transfer

  • Organisations need to be vigilant about the threat of deepfake scams and take steps to mitigate the financial risks – including updating training on digital security threats and checking cyber insurance policies to ensure they cover loss resulting from deepfake scams

Background: what is deepfake technology?

Deepfake is an AI-based technology that alters digital media to produce convincing fake images, video or voice recordings of people. The technology is evolving rapidly, thanks in part to investment by tech giants exploring legitimate use cases such as automated phone systems.

An increasing number of deepfake applications and tools are also now available online without charge, and often require minimal technical knowledge to use. While some of these tools simply become meme-generators for social media, criminals have also quickly realised the opportunity to use the technology for nefarious purposes.

How are deepfakes being used to scam organisations?

The rapid evolution of this technology means it is becoming increasingly difficult to detect whether images or sound are real. This makes it ideal for cybercriminals to exploit, because even organisations with highly secure systems and savvy employees may not be able to spot the use of deepfake technology for criminal purposes.

In March 2019, fraudsters used this technology to impersonate a German chief executive on a phone call to convince the CEO of a UK subsidiary to make a large monetary transfer. The payment was made, but the UK CEO became suspicious after receiving a further call from the German parent chief executive, requesting another urgent transfer.

What steps can organisations take to protect against deepfake scams?

Tech giants are already researching ways of countering malicious uses of deepfake technology. For example, Facebook, Microsoft and others have established the Deepfake Detection Challenge to fund research and prizes promoting the development of technology to detect images and videos altered by AI.

Until reliable tools are available to detect fake video and voice recordings, organisations need to be vigilant about the threat of deepfake scams and take steps to mitigate the financial risks:

  • Keep up-to-date with developments in this area. Deepfake is a rapidly evolving form of technology and so far we have only seen a fraction of its potential malicious exploits.

  • Training on security threats such as phishing scams and other suspicious communications should be expanded to include deepfake scams using voice (and potentially video). Employees and executives should be wary of taking any form of communication at face value, particularly where requests for funds or fund transfers are involved.

  • It is crucial that organisations have robust internal processes, particularly in relation to the transfer of funds. For example, an internal requirement for a secondary payment authorisation above a certain threshold may help organisations to highlight suspicious payment requests.

  • Organisations with cyber or similar insurance policies should check their policy cover and exclusions in order to assess the likelihood that their insurer would cover any loss resulting from deepfake scams – for example, cyber insurance policies may require an event such as systems being hacked, a cyber-attack or data loss before cover is triggered.
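To illustrate the secondary-authorisation control mentioned above, the sketch below shows a minimal dual-approval rule in Python. The threshold amount, the `PaymentRequest` structure and the approver logic are all illustrative assumptions, not a prescribed policy or a real payments API.

```python
# Hypothetical sketch of a secondary-authorisation rule for fund transfers.
# Threshold and roles are illustrative assumptions only.

from dataclasses import dataclass

DUAL_AUTH_THRESHOLD = 10_000  # e.g. NZD; set per your organisation's policy


@dataclass
class PaymentRequest:
    amount: float
    requested_by: str
    approvers: tuple  # people who independently verified the request


def is_authorised(request: PaymentRequest) -> bool:
    """Payments above the threshold need two independent approvers."""
    # The requester cannot count as an independent approver.
    independent = [a for a in request.approvers if a != request.requested_by]
    if request.amount >= DUAL_AUTH_THRESHOLD:
        return len(independent) >= 2
    return len(independent) >= 1


# An "urgent" large transfer requested by one person fails the check:
urgent = PaymentRequest(amount=240_000, requested_by="ceo", approvers=("ceo",))
print(is_authorised(urgent))  # False
```

The point of the design is that a convincing voice on the phone is never sufficient on its own: a large payment only proceeds once a second person has verified the request through a separate channel.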

Get in touch with our contacts for more information or to discuss minimising the threat of deepfake scams to your organisation. 

Contributor: matt.austin@simpsongrierson.com