U.K. Government Considers Stronger Regulation of Tech Platforms After Online Disinformation Fuels Violent Disorder

The U.K. government is considering seeking stronger powers to regulate tech platforms after days of violent disorder across England and Northern Ireland that was fueled by the spread of online disinformation. Prime Minister Keir Starmer confirmed on Friday that there will be a review of the Online Safety Act (OSA), which Parliament passed in September 2023. The OSA places duties on platforms that carry user-to-user communications to remove illegal content and protect users from other harms, such as hate speech; non-compliance can result in penalties of up to 10% of global annual turnover. Starmer emphasized that those who incite hate online are already facing consequences, but he acknowledged the need to take a broader look at social media after the recent disorder.

The review of the OSA follows criticism from London Mayor Sadiq Khan, who called the legislation "not fit for purpose". Violent disturbances erupted across cities and towns in England and Northern Ireland following a knife attack that killed three young girls in Southport. False claims about the attacker, including that he was a Muslim asylum seeker, quickly spread online, amplified by far-right activists, and that disinformation has been linked to the civil unrest across the country.

A British woman has been arrested on suspicion of stirring up racial hatred after making false social media posts about the attacker's identity. While the government's immediate priority remains arrests related to the civil unrest, the question of how to address tech platforms and other digital tools used to spread disinformation remains open. The OSA is not yet fully operational, as the regulator, Ofcom, is still consulting on guidance. The legislation has also faced criticism for being poorly drafted and for failing to address the underlying business models of platforms that profit from driving engagement through outrage.

In 2022, while the legislation was still a bill, the previous Conservative government revised it to remove clauses focused on tackling "legal but harmful" speech, the category into which disinformation typically falls. The government claimed it was responding to concerns about the bill's impact on free speech, but former minister Damian Collins disputed that characterization, saying the removed provisions were intended to ensure platforms enforce their own terms and conditions, particularly where content risks inciting violence or hatred.

Mainstream social media platforms, such as Facebook and X (formerly Twitter), have terms and conditions that prohibit harmful content, but how consistently those standards are enforced is often unclear. Platforms have frequently relied on plausible deniability, taking down content only after it has been reported to them. A law that regulates the resources and processes platforms are expected to have in place, however, could force them to be more proactive in stopping the spread of toxic disinformation.

The European Union has already initiated what amounts to a test case against X, investigating the platform's approach to moderating disinformation since December. The EU has said that X's handling of harmful content related to the civil disturbances in the U.K. may be taken into account in that investigation. Once the OSA is fully operational in the U.K., it is expected to exert similar pressure on larger platforms' handling of disinformation. The Department for Science, Innovation and Technology has said that, under the current law, the largest platforms, which face the most extensive requirements under the Act, will be expected to consistently enforce their own terms of service, including any prohibitions on the spread of misinformation.
