DCMS agency recommends industry kitemark for AI systems
The Centre for Data Ethics and Innovation has called for a competitive market for assurance providers
Credit: Mike MacKenzie/vpnsrus/CC BY 2.0
The UK should develop a competitive market for organisations to offer a kitemark for artificial intelligence systems, both to help people and organisations trust the technology and for economic benefit, according to a report published by the Centre for Data Ethics and Innovation.
The roadmap document, which aims to outline what will be required to develop an AI assurance industry, said that third-party auditors could assess, test and verify work by a software developer, then provide independent information on its trustworthiness to users. It compared this to work such as financial audits, food safety checks and air safety regulations, which allow people to feel confident about processes they cannot check themselves.
A business innovation survey carried out by the CDEI, part of the Department for Digital, Culture, Media and Sport, has found that more than a fifth of organisations that plan to introduce AI see regulation and their legal responsibilities as a barrier to its introduction.
The document said that people will be reluctant to accept AI-based products and services, or share data that is needed to make them work, without trust.
“A similar approach to auditing or 'kitemarking' in other sectors will be needed to enable businesses, consumers and regulators to know whether the AI systems are effective, trustworthy and legal,” wrote Chris Philp, minister for technology and the digital economy, in his foreword. “Building on the UK’s strengths in the professional services and technology sectors, AI assurance will also become a significant economic activity in its own right, with the potential for the UK to be a global leader in a new multibillion-pound industry.”
The CDEI said that regulators play an important role in encouraging responsible innovation, citing the Financial Conduct Authority’s establishment of open banking rules to support innovative fintech services and the Medicines and Healthcare products Regulatory Agency’s approach to regulating medical devices, which has allowed the development and use of emerging technologies.
Regulators are already working on AI, the document added, with the MHRA updating its regulations to cover software and AI in medical devices, the Information Commissioner’s Office developing an AI auditing framework and, with the Alan Turing Institute, publishing guidance on how AI is used to make decisions.
The CDEI said it will support its parent department in setting up an AI standards hub, convene an AI assurance accreditation forum and partner with organisations including the Centre for Connected and Autonomous Vehicles and the Recruitment and Employment Confederation. It also published an initial version of an AI assurance guide to help professionals in the field.