Social media: Duty of care expected to include ‘human moderation’ of harmful content

Written by Sam Trendall on 11 January 2022 in News

Minister says that regulator is likely to require firms to use both manual and automated checks


New regulations requiring social networks to protect users from harmful content will necessitate human moderation, as well as automated checks, a minister has indicated.

Online safety laws proposed by the government would require social media firms to ensure harmful content is identified and removed quickly. The failure to do so could result in fines potentially stretching to billions of pounds, to be imposed by Ofcom, which will assume the role of the UK’s online harms regulator.

As part of this regime, the communications watchdog is to recommend practices and procedures that online platforms should follow in order to meet their new duty-of-care obligations. These will include methods of monitoring and moderating content which, according to digital economy minister Chris Philp, are likely to feature a requirement for human inspection – not just automated checks.


“Under the draft Online Safety Bill, social media companies will have new duties to protect their users from harmful content such as online abuse,” he said, in answer to a written parliamentary question from Scottish National Party MP Kirsten Oswald. “Ofcom, as the independent regulator, will recommend proportionate systems and processes, including for content moderation, that social media companies should put in place to fulfil these duties. We anticipate that Ofcom will recommend a combination of human moderation and other systems, depending on what is effective and proportionate for in-scope services.”

The initial draft of the Online Safety Bill was published by the government in May last year. In December, a parliamentary committee released a report calling for the laws to go even further by including additional offences, such as so-called cyber-flashing, and by extending the bill’s scope to cover the potentially harmful impacts of algorithms.

The government will respond to the report and its recommendations in the coming weeks.

 

About the author

Sam Trendall is editor of PublicTechnology. He can be reached on sam.trendall@dodsgroup.com.
