Bullying, abuse, a lack of accountability and the increasing use of the internet have all fed into a widely perceived need to regulate the internet to ensure the safety of users.
In October 2017, the government published its Internet Safety Strategy green paper. In the paper, the government announced its intention to create a voluntary social media code of practice for online media companies.
Following the green paper, the government published its response in May 2018, which included a draft version of the code. It specifically highlighted that social media providers would have a responsibility to identify and remove abusive users from their platforms, through a mix of human and machine moderation. Additionally, it explained that social media companies would be expected to resource their content moderation processes to match the scale of each platform's user base. In other words, processes should be changed so that, if harmful content appears online, users can be directed away from it.
This policy is on its way to being completed. However, the code of practice is still at draft stage, and until it is up and running, this pledge remains ‘in progress’. The government promised a white paper before the end of 2018 (which has not yet appeared) to “set out more definitive steps on online harms and safety”.
We’ll be watching for that next step in moving this manifesto pledge to ‘done’. Follow this policy for updates.