Government agency looks to improve deepfake detection

Defence research unit plans project to help intelligence services use AI to ‘check and validate content at scale’ 


A government agency is to undertake a project, worth up to £350,000, intended to help improve the detection of deepfake videos and images.

The Defence Science and Technology Laboratory is looking to work with a specialist supplier on a programme of work to create “evaluation data sets” that can be used to test the effectiveness of artificial intelligence in detecting deepfake imagery, as well as performing other forms of “media authentication”.

The laboratory – an executive agency of the Ministry of Defence dedicated to military innovation research – wishes to explore the use of AI in identifying disinformation spread online by hostile states or other actors. This covers a range of falsified content, including images and written posts.

Such activity, which falls short of outright aggression, is referred to as “sub-threshold”.

Dstl said that seeking out deliberately spread disinformation on social media and online news outlets is an increasingly important part of the UK’s intelligence, surveillance, and reconnaissance operations. 

In a newly published contract notice, the unit said that it believes AI has the potential to enable intelligence and security analysts to “check and validate content at scale” across the “sub-threshold information environment”.

The agency wishes to explore this potential by testing the performance of current AI systems on representative data sets of images and written pieces, including content that has been created using the “anti-forensic techniques” employed by those who deliberately spread disinformation.

“To ensure that such AI techniques are trustworthy, we need to evaluate their performance using high-quality, unseen validation datasets,” the notice said. “Media authentication methods, including deepfake detection, often suffer from poor cross-data set generalisation. They can appear to perform well on the standard data sets on which they are trained, but then lose effectiveness when applied to ‘in the wild’ data. Before we can give any media authentication tools to analysts, we need to ensure that they will generalise well. To do this, we need large, bespoke data sets created using a variety of media synthesis methods including deepfakes, diffusion models, and text generation.”
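The generalisation check the notice describes can be illustrated with a short sketch. The Python below is a hypothetical illustration rather than any actual Dstl tooling: a detector is scored on the benchmark it was tuned against and again on an unseen validation set, with a large gap between the two figures signalling the cross-data-set problem the notice warns about. The functions score_image and load_labelled_set are placeholder assumptions standing in for a real detector and data loader.

```python
# Hypothetical sketch of a cross-data-set generalisation check for a
# media-authentication model. All names here are illustrative assumptions.

import random
from sklearn.metrics import roc_auc_score

def score_image(image) -> float:
    """Placeholder detector: probability that `image` is falsified."""
    return random.random()

def load_labelled_set(name: str):
    """Placeholder loader: yields (image, label) pairs, label 1 = falsified."""
    return [(object(), random.randint(0, 1)) for _ in range(200)]

def evaluate(dataset) -> float:
    """ROC-AUC of the detector over a labelled data set."""
    labels = [label for _, label in dataset]
    scores = [score_image(image) for image, _ in dataset]
    return roc_auc_score(labels, scores)

# Score on the standard benchmark the model was trained against...
in_domain = evaluate(load_labelled_set("standard_benchmark"))
# ...then on unseen "in the wild" data built with varied synthesis
# methods (deepfakes, diffusion models, anti-forensic post-processing).
in_the_wild = evaluate(load_labelled_set("unseen_validation"))

# A large drop from the first figure to the second is the poor
# cross-data-set generalisation the contract notice describes.
print(f"in-domain AUC: {in_domain:.3f}, in-the-wild AUC: {in_the_wild:.3f}")
```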

Dstl is seeking to work with a specialist supplier over the course of a three- to four-month contract worth up to £350,000. The chosen firm will be tasked with creating and collating material with which to test media authentication AI systems.

These should constitute “labelled, well-structured data sets of real and falsified media, across multiple modalities [and] should include deepfakes, GAN-generated imagery (generative adversarial networks), diffusion model outputs, image splicing, generated text, generated audio, and image-caption pairs”.
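As a rough illustration of what one record in such a “labelled, well-structured” multi-modal data set might contain, the sketch below defines a hypothetical schema; every field name is an assumption made for illustration, not anything specified in the contract notice.

```python
# Hypothetical record schema for a labelled, multi-modal evaluation
# data set of real and falsified media. Field names are illustrative.

from dataclasses import dataclass
from typing import Literal, Optional

@dataclass
class MediaRecord:
    path: str                                   # location of the media file
    modality: Literal["image", "text", "audio", "image_caption_pair"]
    label: Literal["real", "falsified"]
    # How falsified content was produced, e.g. "deepfake", "gan",
    # "diffusion", "splicing", "generated_text", "generated_audio".
    synthesis_method: Optional[str] = None
    # Whether anti-forensic post-processing (recompression, resizing,
    # added noise) was applied to frustrate detection.
    anti_forensic: bool = False

example = MediaRecord(
    path="data/images/000123.png",
    modality="image",
    label="falsified",
    synthesis_method="diffusion",
    anti_forensic=True,
)
```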

Bids for the contract are open until midnight on 17 November, with work scheduled to commence in the opening days of 2023.

Artificial intelligence is an increasingly important area of work for Dstl. Across a four-year period to the end of the 2024/25 year, the agency has set aside £100m to support research into the potential use of the technology in military and defence environments.


Sam Trendall
