Recent rioting and unrest in the UK have sparked calls for the Online Safety Act to be revisited. London Mayor Sadiq Khan has criticized the act as “not fit for purpose”, while Cabinet Office minister Nick Thomas-Symonds has suggested that the government may consider changing the law. The act, passed under the previous government, includes measures relevant to the recent riots, such as powers to fine social media companies.
Prime Minister Keir Starmer has been less vocal about the act, stating only that he would “look more broadly at social media after this disorder”. His spokesperson indicated that the act is not currently under active review.
During the recent riots, social media platforms were used to coordinate events across the country and served as channels through which misinformation and hateful rhetoric spread rapidly.
Enforced by the independent communications regulator Ofcom, the Online Safety Act focuses on regulating online speech to protect users from potential harms, including abuse, harassment, fraudulent activity, and hate offences. The act places greater responsibility on social media companies to ensure their platforms are safe, with fines of up to 10% of global annual revenue for providers whose platforms are deemed unsafe.
Beyond fines, Ofcom has the authority in the most serious cases to require advertisers and internet service providers to stop working with platforms that do not comply with the regulations. Although the act became law in October 2023, some of its provisions are not due to take effect until late 2024.
One key area the act addresses is the role of recommendation algorithms on social networking platforms. During the riots, these algorithms may have inadvertently amplified harmful content, including racist, hateful, and violent material. The act requires platforms to test the safety implications of their recommendation algorithms so as to minimize users' exposure to illegal content.
A major challenge in regulating online content is the reluctance of platform providers to act as “arbiters of truth”. The act addresses this by assigning enforcement to the independent regulator Ofcom, an arrangement intended to keep the oversight of online content and algorithms politically neutral.
Despite the measures outlined in the Online Safety Act, challenges persist, particularly around misinformation and disinformation. The spread of false information surrounding the recent riots underscores the need for stronger legislation in this area. Until the act is fully in force, there remains uncertainty about which online harms can be pursued under it.
Ultimately, the effectiveness of the Online Safety Act will be tested in future situations like the recent riots. It is crucial to monitor the implementation of the act and assess its impact on regulating online content and ensuring the safety of users.
In conclusion, the Online Safety Act represents a step towards addressing the challenges posed by online platforms in the UK. By revisiting and strengthening this legislation, the government can better protect users from harmful content and promote a safer online environment for all.