Comprehensive coverage vs dimensions of uncertainty – the complicated world of ‘AI-enriched intelligence’


Artificial intelligence offers unprecedented opportunities for security professionals to analyse material, but such potential is accompanied by major challenges and unknowns, according to a report from the Alan Turing Institute

In the FAQ section of the website of the Security Service – otherwise known as MI5 – the agency provides some insight into the thinking behind the commonly espoused central tenet of the intelligence community: the NCND principle.

The principle – which refers to the idea that agents should ‘neither confirm nor deny’ anything – is “used to protect sensitive information and to prevent the damage to national security that would otherwise result from its disclosure”, the website says.

“Information about who or what we are investigating, and the tools and techniques we use to carry out our investigations, would be useful to the UK’s adversaries so we would not confirm details about our operations,” it adds. “Information about who works, or has worked, for MI5, or the identities of current and former covert human intelligence sources, also known as agents, would also be useful to those looking to do the country harm. It could also put the officers and agents concerned and their loved ones at personal risk, possibly endangering their lives.”

In the course of the answer, the agency does, however, confirm one thing: that it now operates an Instagram account.

The opaque nature of many AI systems makes it difficult to understand how AI-derived conclusions have been reached.

Turing Institute report

This, perhaps, speaks to the world of juxtapositions in which secret services now have to operate: an increasingly digital domain in which very little information – including mis- and disinformation – is kept secret.

This inherent friction seems likely to chafe a little harder still as we progress further into the age of automation and artificial intelligence.

But, then again: “Advances in AI bring new opportunities and hold exciting potential for both intelligence production and assessment, helping to surface new intelligence insights and boosting productivity,” according to the opening words of a recent report from the Alan Turing Institute.

The foreword adds: “AI is not new to GCHQ or the intelligence assessment community. But the accelerating pace of change is. In an increasingly contested and volatile world, we need to continue to exploit AI to identify threats and emerging risks, alongside our important contribution to ensuring AI safety and security.”

But the document’s opening words, jointly attributed to GCHQ director Anne Keast-Butler and Joint Intelligence Committee chair Madeleine Alessandri, also reflect the potential difficulties of new technology.

“Advances in AI bring some new challenges for intelligence production and assessment,” they write. “Questions of bias, robustness, and source validation apply just as much to AI systems as they do to the more traditional sources of insight.”

‘Far beyond the capacity of human analysts’
The report, which is based on research conducted by the Turing’s Centre for Emerging Technology and Security (CETaS), identifies seven key findings.

The first of these is that, whatever the surrounding challenges, “AI is a valuable analytical tool for all-source intelligence analysts”.


Such value derives from the fact that the technology “can process volumes of data far beyond the capacity of human analysts, identifying trends and anomalies that may otherwise go unnoticed”.

Indeed, if the intelligence services do not use AI, this creates a risk of “contravening the principle of comprehensive coverage in intelligence assessment”. Formal standards characterise this principle as a tenet that “assessments should be based on all sources of available and relevant information”.

“If key patterns and connections are missed, the failure to adopt AI tools could undermine the authority and value of all-source intelligence assessments to government,” the report says.

Although this means the deployment of machine-learning tools may become necessary, “the use of AI exacerbates dimensions of uncertainty inherent in intelligence assessment and decision-making processes”, according to the report’s second key finding.

“The outputs of AI systems are probabilistic calculations – not certainties – and are currently prone to inaccuracies when presented with incomplete or skewed data,” it adds. “The opaque nature of many AI systems also makes it difficult to understand how AI-derived conclusions have been reached.”
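The report does not describe any particular model, but a toy sketch in Python (every name and figure below is hypothetical, not taken from the report) illustrates the underlying point: a system’s output is a probability shaped by whatever data sits behind it, so incomplete or skewed data skews the estimate.

```python
# Illustrative only: a model's output is an estimate shaped by its data, not a certainty.
from collections import Counter

def toy_relevance_score(tokens, flagged_counts, total_docs):
    """Average the fraction of previously flagged documents containing each token --
    a stand-in for any probabilistic model's confidence output."""
    if not tokens:
        return 0.0
    return sum(flagged_counts.get(t, 0) / total_docs for t in tokens) / len(tokens)

# Hypothetical, skewed corpus: 'shipment' appears mostly in flagged documents.
flagged_counts = Counter({"shipment": 90, "meeting": 10})

score = toy_relevance_score(["shipment", "meeting"], flagged_counts, total_docs=100)
print(f"relevance estimate: {score:.2f}")  # 0.50 -- a probability, not a confirmed fact
```

A more sophisticated system still ultimately produces a figure of this kind; how it was arrived at, and how representative the underlying data was, are precisely the questions the report says must be communicated rather than hidden behind an apparently precise number.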

Given the possibility of such inaccuracies, there is a clear need for “careful design, continuous monitoring, and regular adjustment of AI systems” deployed in intelligence environments, the report notes in the third of its key findings.

The next of CETaS’s conclusions is that responsibility for technical evaluations of the efficacy of the technology should ultimately rest with the intelligence body using it.

“Intelligence analysts must take into account any limitations and uncertainties when producing their conclusions and judgements,” the report adds.

The fifth of the major findings outlined in the study is that, in order to make decisions informed by AI-influenced intelligence, leaders “currently require a high level of assurance relating to AI system performance and security”. The sixth, meanwhile, is that – in the absence of this kind of assurance – decision-makers currently have “greater confidence in the ability of AI to identify events and occurrences than the ability of AI to determine causality”.

If key patterns and connections are missed, the failure to adopt AI tools could undermine the authority and value of all-source intelligence assessments to government.

Turing Institute report

“Decision-makers were more prepared to trust AI-enriched intelligence insights when they were corroborated by non-AI, interpretable intelligence sources,” CETaS adds.

And, according to the final of the report’s core conclusions, significant numbers of intelligence chiefs still lack sufficient expertise in the technology, as “technical knowledge of AI systems varied greatly among decision-makers”.

The report adds: “Research participants repeatedly suggested that a baseline understanding of the fundamentals of AI, current capabilities, and corresponding assurance processes, would be necessary for decision-makers to make load-bearing decisions based on AI-enriched intelligence.”

A layered approach
In response to its core findings, the Turing study set out six recommendations that the AI institute believes would help “embed best practice when communicating AI-enriched intelligence to strategic decision-makers”.

The first of these is that the head of the intelligence analysis profession “should develop guidance for communicating uncertainty within AI-enriched intelligence in all-source assessment”.

Frontline analysts communicating AI-based intel to senior decision-makers, meanwhile, should take a “layered approach… [in which] assessments in a final intelligence product presented to decision-makers should always remain interpretable to non-technical audiences” – but extra technical detail should be made available, on request, for those with the expertise to grasp it.

Centralised education unit the Intelligence Assessment Academy “should complete a training needs analysis” to ascertain what training is required for both existing analysts and those joining the profession in the future.

As a priority, extra education “should be offered to national security decision-makers (and their staff) to build their trust in assessments informed by AI-enriched intelligence”. Such training should include “basic briefings on the fundamentals of AI and corresponding assurance processes”, the report recommends.

Immediately before participating in “high-stakes national security decision-making sessions” in which AI-informed intel is likely to “underpin load-bearing decisions”, leaders should be offered “short, optional expert briefings”.

“These sessions should brief decision-makers on key technical details and limitations, and ensure they are given advanced opportunity to consider confidence ratings,” the recommendations say.

The report’s final recommended measure is the creation of “a formal accreditation programme” to assess AI models deployed in analysing intelligence. The certification framework should aim “to ensure models meet minimum policy requirements of robustness, security, transparency, and [provide] a record of inherent bias and mitigation”. The assurance models used by developers of AI systems should also be scrutinised by such a programme, the report recommends.

Dr Alexander Babuta, director of CETaS, concludes: “Our research has found that AI is a critical tool for the intelligence analysis and assessment community. But it also introduces new dimensions of uncertainty, which must be effectively communicated to those making high-stakes decisions based on AI-enriched insights.”

In short, AI seems likely to have as profound an impact on the intelligence services as it will have on any other sector. But, for the time being, security chiefs can neither confirm nor deny exactly what that impact might be.

Hear more from the Alan Turing Institute at our PublicTechnology Live conference, taking place in London on 21 May. Dr Jonathan Bright leads the institute’s work on researching the use of AI in public services, and will be appearing as part of the event’s closing discussion asking: should the public sector believe the AI hype?

The event, which is completely free to attend for public sector employees, also features an opening panel with five permanent secretaries discussing digital transformation, as well as presentations and interactive sessions featuring leaders from the likes of the Scottish Government, London Borough of Redbridge, and the Central Digital and Data Office. Find out more or register here.

Sam Trendall
