Major parties staying silent on social media regulation

By Dr Jennifer Beckett
Lecturer, School of Culture and Communication, University of Melbourne

Tagged:

Election; Policy; Oceania; Australia

In the lead-up to the announcement of this Federal election, a white supremacist launched attacks on mosques in Christchurch, killing 51 people and injuring another 49.

Unlike other attacks of its nature in the past, the killer livestreamed the violence on Facebook.

Since then, there’s been much public discussion about how culpable social media platforms were in spreading the attacker’s message, and in incubating his attitudes.

In response, the Australian Parliament quickly passed the new Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019, which requires that internet service providers, content providers and hosts (that is, social media companies) “ensure the expeditious removal of [abhorrent violent material]” from their sites.

At the time, Shadow Attorney-General Mark Dreyfus acknowledged the Labor party had “serious concerns” about the policy, citing its “poor [drafting]” and fears that it “would not achieve its intended purpose”, though Labor ultimately did not stand in its way.

These concerns have been echoed by others, with particular focus on enforcement (for example, what exactly constitutes “expeditious removal”) and the manner in which fines for breaches are determined. These are concerns I tend to share.

But live streaming of “abhorrent violence” is not the only issue to consider this coming election when it comes to social media.

Dangers to the democratic process

For the first time, we are headed to the polls in an age of hyper-partisanship, easily shareable fake news (which we have already seen this election) and psychographic profiling. All pose demonstrable dangers to the democratic process, as the Cambridge Analytica scandal, Brexit, and alleged Russian tampering in US elections have shown.

In addition, issues such as cyberbullying, harassment and image-based abuse are high in the public consciousness.

Prime Minister Scott Morrison has promised to increase the maximum penalties for trolling and to introduce new offences related to the sexual abuse of children online.

However, there seems to be little understanding within the Coalition and across the political spectrum when it comes to:

  • the regulation of social media platforms
  • the rise and spread of fake news
  • the abuse and security of user data
  • the fomentation of hate speech and
  • how cyberbullying and harassment play out in the real world, and their impact

All of these issues contributed significantly to creating the environment in which a tragedy such as Christchurch could occur.

But the major parties, with the exception of the Greens, who are calling for an inquiry into and regulation of social media, have failed to make policy connections to better online governance, whether in relation to data handling or content regulation.

Protecting your data and combatting fake news

This is not the case in other areas of the world, particularly the European Union.

In the last year, new legislation has come into force to protect user data via the EU General Data Protection Regulation (GDPR), including the right to be forgotten. This regulation has already affected global internet practice, most visibly through the cookie collection notifications we now see on websites everywhere.

To combat the spread of fake news, particularly in the lead-up to the European elections, the EU has also requested monthly reports from internet service providers (ISPs), which will be made public.

This is part of its Code of Practice on Disinformation, a self-regulatory code of practice for service providers to which Facebook is a signatory.

Regulating speech and conduct online

In Germany, the ‘network enforcement law’ (NetzDG) applies similar force to content hosts.

Where the Australian legislation focuses only on visual violence, NetzDG extends to other “obviously illegal” content such as hate speech and defamation. And unlike the Australian legislation’s vague criterion of “expeditious” removal, NetzDG gives content hosts 24 hours to remove content after notification, or up to seven days for complex cases.

Hosts are also expected to provide transparency reports on the type of content they’re moderating and why.

While neither the GDPR nor NetzDG is a perfect measure, both were born from more nuanced conversations and more coherent policy positions than the Australian legislation was.

Creating good policy

That ISPs need regulating no longer seems up for debate; even Facebook CEO Mark Zuckerberg himself has called for regulation. But we should be wary of knee-jerk legislation that misses the wood for the trees.

We should also be wary of falling into the trap of regulating social media providers in ways they want to be regulated.

What we need from our politicians instead is good policy from which to create informed legislation.

This includes a commitment to international cooperation on global policy and proper resourcing as well as training for those who deal with the unintended consequences of platform mis-governance including police, social workers and educators.

All of this belongs to a public discourse the parties have not properly begun to engage in, and to policy we have yet to see.

Banner image: Close up view of Facebook Watch icon on the smartphone screen. Facebook Watch is a video on demand service operated by Facebook. Via Shutterstock.
