Online safety: How police, public sector and tech firms have reached a data-sharing stalemate

With the Online Safety Bill now published, former police superintendent Iain Donnelly writes for PublicTechnology on the challenges that need to be overcome in order to ensure the law’s efficacy


This week, the long-awaited Online Safety Bill was announced in the Queen's Speech. The draft bill was also published, demonstrating to social media companies and online platforms that the UK government has finally run out of patience. The government has sent a clear message to the world that we will no longer tolerate the online sexual grooming of our children, revenge pornography or hate speech. It has also made clear that it is time to clamp down much harder on vicious cyberbullying, child abuse imagery, the encouragement of suicide and self-harm, online radicalisation and fake news.

The bill will also mandate a more rigorous approach to online age verification to prevent children from accessing unsuitable websites and hardcore pornography.

Inevitably, there are complaints from those who don't want to see the government interfering in internet freedoms, along with alarmist warnings of online censorship.

All I would say to that is: tell that to the parents of a child who took their own life after months of online bullying, or to a teenage boy sexually abused by an older man who threatened to send explicit images of him to all of his Instagram contacts.

From what I’ve read so far, I’m reassured that there will be sufficient safeguards in place to protect political and journalistic freedoms, free speech and whistleblowers. We now just need to get on and do this, focusing on the 95% of good reasons to act rather than the 5% of reasons to hold back.


Speaking as someone who spent a long career in law enforcement, seeing at first hand the significant human cost of online harms, I sincerely applaud Oliver Dowden, the Secretary of State for Digital, Culture, Media and Sport, for his commitment and leadership in this area of public life, and his staff at DCMS who are driving this work.

The UK government is leading the world in the battle to clean up the darkest corners of the internet, and regardless of your political leanings, this is something that every responsible person, and parents in particular, should feel very relieved about. Not only that, but UK technology companies are leading the world in developing solutions to identify and remove toxic content. There are currently around 80 companies in the UK focused on:

Identifying and removing known illegal content, for example child sexual exploitation or terrorist activity
Detecting potentially harmful content or behaviour and flagging to human moderators for action
Assuring the age of users and ensuring that phones, tablets and the apps on them are safe and appropriate for children
Filtering content perceived to be harmful across school, business or home internet systems
Identifying and mitigating disinformation
 
Having left policing in 2019, I’m now working as an advisor to technology companies in the public safety and crime investigation space. I have a slightly unusual skillset for an ex-law enforcement professional working in technology. I was previously the project manager for the National Data Analytics Solution, which was the first serious attempt by law enforcement to use artificial intelligence to gain a better understanding of serious crime, and I have also been a serious crime investigator, a senior investigating officer and a senior child abuse and safeguarding manager. I therefore have a very good understanding of the legal, ethical, practical and technical challenges lying ahead.
 
In this article, I thought that it might be useful to describe some of the current and future challenges for safety tech companies who are navigating a highly complex regulatory landscape where there is literally no historical precedent or learning from other countries.
 
Key challenges
Ofcom have many years of experience and credibility balancing the legitimate commercial and public service needs of broadcasters and the media with the equally important need to protect children and the wider community from misbehaviour and unacceptable content. This undoubtedly qualifies them to act as the most appropriate referees and enforcers in this space.
 
However, the challenges for Ofcom are formidably complex:
 
Multiple platforms using multiple technologies, hosted in multiple international jurisdictions with content dynamically changing from moment to moment. 
Platforms hosting content in countless different languages, and the moderation efforts by each ranging from nil through to reasonably mature. 
Content that is difficult or impossible for the average person to even understand. I remember well how we desperately tried to stop young gang members from killing each other after they uploaded drill music videos to YouTube, shot that day on the streets of Birmingham, glorifying the latest shooting or stabbing and goading rival gangs in language that was almost impossible to interpret. The comments section would reinforce these threats using a mix of street language and emojis that clearly meant something to them, but very little to us. 
It’s not just social media platforms; it’s online gaming, newsgroups, dating apps, review sites and online forums discussing everything from growing tulips to trainspotting, where middle-aged men now exchange insults over the relative merits of the Flying Scotsman and the Mallard steam train.
 
It would therefore be reasonable to use the analogy that, if Ofcom were in the business of law enforcement, they would be about to transfer from walking the beat in a medium-sized Cotswold town to starting a new job as the Sheriff of Dodge City. I genuinely wish them well, and in the same way that policing has to prioritise the issues that cause the greatest harm, they too will need to do exactly that.
 
However, in this article I want to focus on something that remains one of the biggest barriers to helping innovators develop the technologies that will be necessary to clean up the internet. It is also the single biggest barrier to innovation in large parts of the UK public sector.
 

Data sharing, or rather the lack of it, is seriously hampering the ability of technology companies to develop effective solutions to combat internet harms. However, before I describe that particular headache, it’s important to set out the unique context and the interdependencies at play here.
 
In all of this, we have governments starting to say: ‘We will not permit you to display content that is harmful to our citizens, and we have an enforcement arm that will fine or block non-compliant tech platforms.’ However, at the same time, those governments have no ability to mandate how the platforms moderate content, and all too little understanding of the technical challenges in doing so. The platforms themselves operate in a global market (not just a UK market), and their business model has historically placed content moderation fairly low on their list of priorities, regardless of what people like Nick Clegg might say.
 
The final players in this ecosystem are privately funded safety tech companies who have shrewdly – and correctly – predicted that time is running out for the big tech players. They’ve been developing cutting-edge solutions to combat online toxicity and some of them are streets ahead of Silicon Valley, who’ve been dragging their feet like sulky teenagers forced to visit their elderly aunt.
 
So why is data sharing such a big issue? Quite simply, it’s because the solution to making the internet a safer place for everyone is to harness the latest advances in artificial intelligence and machine learning.
 
Online harms are now so widespread, and the volumes of data are so vast, that human moderators simply cannot keep on top of the problem. Facebook alone employs an army of tens of thousands of moderators to try and monitor hundreds of millions of posts a day. There have been recent disclosures of dreadful working conditions, high levels of work-related stress and employees required to sign NDAs to prevent them speaking out.
 
One of my clients is Samurai Labs, who have developed an artificial intelligence solution that can be simultaneously deployed across multiple platforms and millions of online interactions to autonomously identify, challenge or remove toxic behaviour with a very high degree of accuracy. Their solution has been independently benchmarked against the best tools currently in use by the Silicon Valley giants and it’s more than twice as accurate, offering a step change in internet safety. But here’s the rub. Samurai Labs and the other safety tech companies working in this space need machine learning training data to develop and improve their technology.
 
Building every new use case requires a new set of training data, and generally speaking this data can only be obtained from one of two sources. It is either held by the tech giants themselves, who have no interest in helping others moderate their systems, or by public sector organisations, who have historically been terrible at sharing data with anyone, even with other public sector organisations doing exactly the same work.
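To make that dependency concrete, here is a deliberately simplified, purely illustrative sketch of how a basic toxicity classifier might be trained using the scikit-learn library. This is not how Samurai Labs or any named company actually works; the messages, labels and threshold are invented for illustration. The point is simply that no model of this kind can be built, or improved, without labelled examples for each new use case.

```python
# Purely illustrative sketch: a toy toxicity classifier.
# Real safety tech is far more sophisticated, but the dependency is the same:
# without labelled examples for each new use case, there is nothing to learn from.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled training data; in practice this would be thousands of
# moderator- or investigator-labelled messages per use case.
messages = [
    "hope you have a great day",                 # benign
    "thanks for your help yesterday",            # benign
    "nobody would miss you if you disappeared",  # toxic
    "you are worthless and everyone knows it",   # toxic
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = toxic

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# Score new content; anything above a chosen threshold would be flagged
# for a human moderator to review.
for text in ["have a lovely weekend", "you are worthless"]:
    toxicity = model.predict_proba([text])[0][1]
    print(f"{text!r}: estimated toxicity {toxicity:.2f}")
```

Without that labelled data, whoever holds it, the model above has nothing to learn from, which is exactly the stalemate described below.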
 
Challenges and opportunities
My own experience in policing was that it’s almost impossible to get police forces to share data sets with each other, even when they’re ‘partnered’ on the same analytics project. Everyone is terrified of being fined by the Information Commissioner’s Office and data protection professionals will give you any answer you want – as long as it’s ‘no’.
 
 
Anecdotally, I hear that one of the real reasons why public bodies refuse to share data is that they are actually embarrassed by terrible data quality, and they simply use GDPR as an excuse. I doubt that it’s any different in other parts of the UK public sector, meaning that innovation in all aspects of public life will suffer, and the often-repeated mantra that public services need to be more ‘data-driven’ will remain words only, rather than become a reality.
 
In the case of tech innovators, there is a need to try to identify and stop the sexual grooming of children by paedophiles online, and the only way they can do this effectively is to train their systems using historic child-grooming chat logs seized in police investigations. The owners of that data are the police, and thus I refer you to my previous point. Stalemate.
 
So what needs to happen?
 
There is currently lots of great work going on to identify challenges and opportunities around data sharing, all of which has been commissioned by the Department for Digital, Culture, Media and Sport. This work needs to be fast-tracked, and importantly, the learning from it should inform a joined-up response across the UK public sector.
 
Central government and the ICO need to encourage a presumption that anonymised data must be shared with tech innovators working in this space, unless there are very strong reasons not to do so. Call it ‘enlightened self-interest’.
 
The Data Standards Authority have now published national metadata standards that will apply to all data shared between public bodies. Whilst this might be seen by some as a further barrier to data sharing, it’s actually an enabler, providing a robust ‘chain of evidence’ to show where data came from, who owns it and the purpose for which it is being shared. Public bodies need to embrace these new standards to provide transparency, accountability and to enable technology providers to gain new and useful insights using machine learning.
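To illustrate the idea, a provenance record accompanying a shared dataset might capture fields along the lines sketched below. This is a hypothetical example only; the field names are invented for illustration and do not reflect the Data Standards Authority’s actual published schema.

```python
# Hypothetical example of the kind of "chain of evidence" metadata that could
# accompany a shared dataset. Field names are invented for illustration and do
# not reflect the Data Standards Authority's published standards.
shared_dataset_metadata = {
    "dataset_id": "example-anonymised-chat-logs-2021-04",
    "owner": "Example Police Force",                   # who holds the source data
    "source_system": "case management system export",  # where the data came from
    "date_extracted": "2021-04-30",
    "purpose_of_sharing": "training and evaluating a grooming-detection model",
    "legal_basis": "data sharing agreement (reference withheld)",
    "anonymisation_applied": ["direct identifiers removed", "usernames pseudonymised"],
    "retention_period_days": 365,
    "recipient": "Example Safety Tech Ltd",
}
```

A record like this is what turns data sharing from an act of faith into something auditable: anyone can see where the data came from, who owns it and why it was shared.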
 
The ICO have created a regulatory sandbox to work with tech innovators and create robust and compliant systems by design. This is a great start; however, realistically they will have limited capacity to support innovation, so the learning from each cohort needs to be made available to anyone working in this space.
 

Finally, UK safety tech companies need to collaborate and actively work together rather than against one another to achieve common minimum standards, consistency of output and remove barriers to technical deployment. They also need to build active relationships with expert bodies such as the National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online (REPHRAIN) and the Alan Turing Institute to ensure that commercial solutions are technically robust, ethical and working for the social good.
 
The battle to clean up the internet will be long, difficult and full of setbacks. The UK government, and DCMS in particular, is to be commended for grasping a particularly nasty, painful nettle, and every parent in the UK should be raising a glass to them and wishing them success. However, they need to start removing unnecessary barriers in this battle, and personally I would start with the huge headache that data sharing currently represents.
 
 
