‘Reset’ AI Security Institute agrees MoU with US start-up and cuts bias and free-speech remit


Former AI Safety Institute will undergo major changes, while the wider government will work with Silicon Valley firm Anthropic, an AI research company and developer of virtual assistant Claude

A revamp of the former AI Safety Institute will bring with it a new name and a new partnership with a US-based start-up, as well as a narrowing of its remit to drop work related to bias and free speech.

The organisation, which is based in the Department for Science, Innovation and Technology, will now be known as the AI Security Institute – retaining, at least, the ‘AISI’ initialism by which it is known.

According to DSIT: “This new name will reflect its focus on serious AI risks with security implications, such as how the technology can be used to develop chemical and biological weapons, how it can be used to carry out cyberattacks, and enable crimes such as fraud and child sexual abuse.”

As part of its updated duties, the institute will work more closely with other parts of government, including engaging the Ministry of Defence’s innovation research unit, the Defence Science and Technology Laboratory. This joint work will focus on considering “the risks posed by frontier AI”, according to DSIT.

A newly created “criminal misuse team” in AISI will also collaborate with the Home Office “to conduct research on a range of crime and security issues which threaten to harm British citizens”.

The institute’s areas of focus will no longer include the potential impact of AI on bias and freedom of speech.

DSIT secretary of state Peter Kyle said: “The changes I’m announcing today represent the logical next step in how we approach responsible AI development – helping us to unleash AI and grow the economy as part of our Plan for Change.

“The work of the AI Security Institute won’t change, but this renewed focus will ensure our citizens – and those of our allies – are protected from those who would look to use AI against our institutions, democratic values, and way of life.

“The main job of any government is ensuring its citizens are safe and protected, and I’m confident the expertise our Institute will be able to bring to bear will ensure the UK is in a stronger position than ever to tackle the threat of those who would look to use this technology against us.”

Alongside the revamped AISI is a new memorandum of understanding agreement between the UK government and Anthropic – an AI and research outfit established in San Francisco in 2021 as a public benefit corporation. The firm created AI assistant Claude.

The company will work with the government’s newly created Sovereign AI Unit, the creation of which was set out in the recent response to the AI Opportunities Action Plan. That document recommended that Whitehall establish a new body to work directly with AI firms, both via partnerships and via investments.

Engagement between the unit and Anthropic “will include sharing insights on how AI can transform public services and improve the lives of citizens, as well as using this transformative technology to drive new scientific breakthroughs”.

Dario Amodei, chief executive of Anthropic, said: “AI has the potential to transform how governments serve their citizens. We look forward to exploring how Anthropic’s AI assistant Claude could help UK government agencies enhance public services, with the goal of discovering new ways to make vital information and services more efficient and accessible to UK residents. We will continue to work closely with the UK AI Security Institute to research and evaluate AI capabilities in order to ensure secure deployment.”

The government currently works with OpenAI – the creator of ChatGPT – to support the creation of its new chatbot tool: GOV.UK Chat. The technology is currently in a beta phase, in which citizens can use the automated tool to seek answers to business-related questions.

Following on from the Anthropic partnership, DSIT indicated that government will “look to secure further agreements with leading AI companies”.

Sam Trendall