What we are calling for

The government has promised to introduce a new Online Harms Bill next year aimed at, amongst other things, tackling online abuse. Below are our six tests for an effective Online Harms Bill.

The Online Harms White Paper implied that the Bill will not contain an intersectional definition of online harms. This would be a grave mistake. An intersectional definition recognises that abuse often targets several characteristics at once; a broad, undifferentiated definition would inhibit both the identification of some of the worst forms of online abuse and action to end them.

We recommend that the Online Harms Bill comply with the Equality Act 2010 and ensure that users of social media are protected from all abuse, but especially abuse based on the protected characteristics: age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation.

Facebook, Twitter, and other social media companies have created platforms that facilitate the spread of hate speech and misinformation, and they must take responsibility for this. A legal mechanism should be created to ensure that social media companies owe a duty of care to their users.

This need not be unduly onerous. Most social media companies already have guidelines on abuse, bullying, and harassment; the problem is that these rules are only sporadically enforced. We are asking that they be robustly and consistently upheld.

The Online Harms White Paper proposes a new, independent regulator of social media companies. We welcome this but caution that the regulator must be properly resourced and empowered: well staffed, well funded, and given proper legal standing. It must also operate independently of government.

“Below the line” comment sections on online news stories are routinely filled with homophobic, racist, and sexist abuse, death threats, bullying, and harassment. The Online Harms Bill offers an opportunity to rid this very public corner of the internet of hate speech. Just as social media companies should owe a legal duty of care to protect the wellbeing of their users, so newspapers should become legally responsible for removing hate speech from the comment sections of their online stories.

Evidence shows that the majority of abuse and misinformation online comes from anonymous accounts. This needs to be addressed, but it must be done with sensitivity to the fact that whistleblowers and other potentially vulnerable individuals may need an anonymous account to protect their identity.

We believe there is an approach that could balance these competing demands. Social media users with a UK IP address should be required to register their full name and address when creating an account. At sign-up they should be offered the choice between a “verified” and an “unverified” account. Gaining verified status would require the user to upload a photo and/or another piece of personal identification. Verified users should then be given the option to exclude unverified users from their social feeds, as sketched below.
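To make that opt-in filter concrete, here is a minimal illustrative sketch in Python. All of the names (Account, Post, filtered_feed) are hypothetical and assume only the two account states described above; this is not any platform's actual API, merely one way the rule could work.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    verified: bool  # True once identity documents have been checked

@dataclass
class Post:
    author: Account
    text: str

def filtered_feed(posts: list[Post], hide_unverified: bool) -> list[Post]:
    """Return the viewer's feed, optionally hiding posts from
    unverified accounts (the opt-in filter proposed above)."""
    if not hide_unverified:
        return list(posts)
    return [post for post in posts if post.author.verified]

# A verified user who opts in sees only posts from verified accounts.
alice = Account("alice", verified=True)
anon = Account("anon123", verified=False)
feed = [Post(alice, "Hello"), Post(anon, "Abusive message")]
print([post.text for post in filtered_feed(feed, hide_unverified=True)])
# -> ['Hello']
```

The key design point is that the filter is applied by the viewer, not the platform: anonymous accounts can still post, but verified users who do not wish to hear from them need not do so.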

The Online Harms Bill should require social media companies to disclose, at the very least:

  • The number of complaint reviewers they employ
  • The training those reviewers undergo
  • The welfare support provided to reviewers, since they routinely view disturbing content
  • The number of complaints upheld
  • A summary of the rationale given to users for complaints that are not upheld