In the UK, the Online Safety Bill 2023 (Bill) is expected to receive Royal Assent imminently. It aims to regulate certain internet services, such as Instagram, YouTube and TikTok, to prevent the spread of objectionable content.

The Bill is a vexed topic: some welcome it as finally addressing hate speech and harassment, while others see it as an infringement on freedom of expression. Meanwhile, New Zealand has been looking to reform its own online content laws, with a suggestion that it may look to the Bill for inspiration.

In this article, we highlight the key provisions of the Bill and recap the current status of online content regulation here in New Zealand.

Key takeaways

  1. The Bill requires tech companies in the UK to proactively screen for objectionable content and remove it, a stark change from only being required to act after being alerted to it.
  2. The Bill has triggered intense debate about the balance between freedom of expression and the government's responsibility to protect young people from harmful content.
  3. New Zealand is considering similar steps through its Safer Online Services and Media Platforms consultation which ended on 31 July 2023 (see our article here). We anticipate that the Department of Internal Affairs (DIA) will release its recommendations later this year. It is expected this will feed into detailed proposals for the next Government to consider.


What does the Bill do?

The purpose of the Bill is to regulate online content to reduce hate speech, harassment and other illicit or illegal material, such as terrorist propaganda. It aims to improve online user safety, with a particular focus on protection of children and the vulnerable from harmful content, including fraudulent advertising.

It applies to internet service providers and search engines and will regulate popular platforms such as YouTube, Facebook, Instagram and TikTok, as well as smaller services which allow users to encounter content generated, uploaded or shared by others.

The new law will require online platforms to proactively screen for objectionable material and to judge whether it is illegal, rather than requiring them only to act after being alerted to objectionable content. For example, content promoting self-harm, racism, misogyny or eating disorders will be restricted or given reduced visibility on users' screens.

In short: online platforms are expected to foresee risky content and actively mitigate it. Failure to comply will result in fines of up to £18 million or 10 percent of global revenue (whichever is higher). Executives at these companies may also face criminal action.

Controversy: the two sides to the Bill

The Bill was developed over a five-year period, triggering intense debate as to the balance between protecting freedom of expression and privacy on the one hand, and guarding vulnerable users from harmful content on the other.

On the one hand, children's charities and campaigners have labelled the Bill a key step towards protecting children from dangerous content. The UK Government has gone as far as to declare that the Bill will make “the UK the safest place in the world to be online”.

Conversely, tech companies, free speech activists and privacy groups have criticised the Bill, stating that it incentivises companies to remove content, curtailing freedom of expression for millions of users. They argue that requiring tech companies to decide what is acceptable content, and to censor what is not, threatens the essence of a free and open web.

At this stage, the detail of how online safety will be policed in the UK will be determined by OFCOM, the UK's communications regulator.

What does this mean for New Zealand?

While the Bill itself applies only to the UK, the nature of the internet means it will have flow-on effects elsewhere. Separately, the DIA is actively considering what steps can be taken to reduce harm online in New Zealand.

DIA has concluded that the current online safety regime in New Zealand is not fit for purpose. It notes that the evolution of digital media has significantly increased the potential for harm to New Zealanders from objectionable online material.

The DIA has proposed new codes of practice that will set out specific safety obligations for online platforms. These codes would be enforceable and approved by a newly established independent regulator. Media platforms and industry would need to proactively manage content through adopting the codes, which deliver on the safety objectives set out in legislation.

DIA is expected to complete its analysis of public feedback on its proposals later this year. High-level policy is expected to be proposed to Government soon after.

While the proposal by the DIA differs from the new UK Bill, there is an obvious global trend of placing a more significant burden on tech companies and social media platforms to proactively regulate the content they display, in order to reduce the risk of exposure to harmful content. DIA's earlier consultation also considered international positions, and it is likely it has considered the UK's approach.

Get in touch

If you would like to learn more about anything discussed in this article or how it could affect your business, please get in touch with one of our experts.

Many thanks to Amarind Eng and Achi Simhony (Solicitors) for their assistance preparing this article.
