Public-sector AI code of conduct published

Written by Sam Trendall on 21 February 2018 in News

Nesta creates 10-strong list of principles it believes should define how government uses artificial intelligence and algorithms

A newly published code of conduct for public-sector use of artificial intelligence has urged the government to be as open as possible about the way in which algorithms are created and how they inform decision making.

Innovation charity Nesta has published a draft “Code of Standards for Public Sector Algorithmic Decision Making”. The code, which contains 10 core principles, was written by the organisation’s director of government innovation Eddie Copeland.

In a blog post, he says a “considerable amount of work has already been done to encourage or require good practice in the use of data and the analytics techniques applied to it”. Copeland singles out the government’s Data Science Ethical Framework as an example of this work.

But greater efforts are still needed, he believes, particularly from government and the wider public sector.


“After all, an individual can opt-out of using a corporate service whose approach to data they do not trust,” he says. “They do not have that same luxury with services and functions where the state is the monopoly provider.”

The draft code brings together the 10 principles that Copeland believes should guide and regulate the public sector’s use of AI and algorithmic decision-making; a brief illustration of how the first few might be put into practice follows the list. The 10 principles are:

1. Every algorithm used by a public-sector organisation should be accompanied with a description of its function, objectives and intended impact, made available to those who use it

2. Public-sector organisations should publish details describing the data on which an algorithm was (or is continuously) trained, and the assumptions used in its creation, together with a risk assessment for mitigating potential biases

3. Algorithms should be categorised on an algorithmic risk scale of 1-5, with 1 denoting those whose impact on an individual would be very minor and 5 denoting those whose impact could be very high

4. A list of all the inputs used by an algorithm to make a decision should be published

5. Citizens must be informed when their treatment has been informed wholly or in part by an algorithm

6. Every algorithm should have an identical sandbox version for auditors to test the impact of different input conditions

7. When using third parties to create or run algorithms on their behalf, public-sector organisations should only procure from organisations able to meet principles 1-6

8. A named member of senior staff (or their job role) should be held formally responsible for any actions taken as a result of an algorithmic decision

9. Public-sector organisations wishing to adopt algorithmic decision making in high-risk areas should sign up to a dedicated insurance scheme that provides compensation to individuals negatively impacted by a mistaken decision made by an algorithm

10. Public-sector organisations should commit to evaluating the impact of the algorithms they use in decision making, and publishing the results
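
The first four principles essentially call for a published, machine-readable description of each algorithm. As a brief illustration, the following Python sketch shows what such a record might look like; the field names, risk labels and example values are hypothetical, since Nesta’s draft code does not prescribe any particular format.

import json
from dataclasses import dataclass, asdict
from enum import IntEnum


class RiskLevel(IntEnum):
    # Principle 3's 1-5 scale: 1 = very minor impact, 5 = very high impact.
    VERY_MINOR = 1
    MINOR = 2
    MODERATE = 3
    HIGH = 4
    VERY_HIGH = 5


@dataclass
class AlgorithmManifest:
    # A hypothetical publishable record covering principles 1-4 for one algorithm.
    name: str
    function: str                 # what the algorithm does (principle 1)
    objectives: str               # why it is used (principle 1)
    intended_impact: str          # expected effect on citizens (principle 1)
    training_data: str            # data it was (or is continuously) trained on (principle 2)
    assumptions: list[str]        # assumptions made in its creation (principle 2)
    bias_risk_assessment: str     # how potential biases are mitigated (principle 2)
    risk_level: RiskLevel         # 1-5 impact rating (principle 3)
    decision_inputs: list[str]    # every input used to make a decision (principle 4)

    def publish(self) -> str:
        # Serialise the manifest to JSON for publication alongside the algorithm.
        record = asdict(self)
        record["risk_level"] = int(self.risk_level)
        return json.dumps(record, indent=2)


# Hypothetical example, loosely based on the council-housing scenario Copeland cites.
manifest = AlgorithmManifest(
    name="housing-allocation-scorer",
    function="Ranks council-housing applications by assessed need",
    objectives="Prioritise applicants with the greatest housing need",
    intended_impact="Faster, more consistent allocation decisions",
    training_data="Five years of anonymised historical allocation records",
    assumptions=["Past allocation outcomes reflect genuine need"],
    bias_risk_assessment="Audited quarterly for disparities across protected groups",
    risk_level=RiskLevel.VERY_HIGH,
    decision_inputs=["household size", "current tenancy status", "medical priority flags"],
)
print(manifest.publish())

Publishing a record like this alongside each deployed algorithm would also give auditors a fixed reference point when exercising the sandbox version described in principle 6.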

Copeland is seeking feedback on whether the code ought to be edited or added to, and anyone wishing to comment is encouraged to do so via Twitter, or by editing this Google doc.

He writes: “The application of AI that seems likely to cause citizens most concern is where machine learning is used to create algorithms that automate or assist with decision making and assessments by public-sector staff. While some such decisions and assessments are minor in their impact, such as whether to issue a parking fine, others have potentially life-changing consequences, like whether to offer an individual council housing or give them probation. The logic that sits behind those decisions is therefore of serious consequence.”

About the author

Sam Trendall is editor of PublicTechnology

