Algorithms in decision-making inquiry: Stephanie Mathisen on challenging MPs to investigate accountability

The House of Commons Science and Technology Committee is to investigate the use of algorithms in decision-making, an inquiry chosen as part of the committee’s crowd-sourcing project. Stephanie Mathisen, campaigns and policy officer at Sense about Science, who pitched the idea to the MPs, explains why the field needs more scrutiny.

Algorithms can be used in everything from calculating credit scores to making digital mazes, but their use in government decision-making needs scrutiny – Photo credit: Flickr, x6e38, CC BY 2.0

At the start of the month, in what felt like a pretty brutal battle, I went before the House of Commons Science and Technology Committee and appealed to them to launch an inquiry into algorithms in decision-making.

I’m delighted they have chosen my topic as one of three investigations to come out of the pitches that day, from myself and eight other people all vying for some of the committee’s valuable and limited time.

You should be delighted too, because this is important. Quite rapidly, and with little debate, algorithms have come to replace humans in making decisions that affect many aspects of our lives, and on a scale that is capable of affecting society profoundly.


Algorithms are being used in everything from sifting job applications, calculating credit scores and offering loans and mortgages to deciding whether to release prisoners on bail. These are crucial moments, and the impact of those decisions can be enormous.

Of course, manual decision procedures existed long before computers were involved. What’s different about computer algorithms is the sheer volume and complexity of the data that can be factored into decisions, and the potential to apply error and discrimination systematically.

Challenging bad decisions

The lack of transparency around algorithms is a serious issue. If we don’t know exactly how they arrive at decisions, how can we judge the quality of those decisions? Or challenge them when we disagree?

Algorithm-supported systems can, and do, make bad decisions with serious consequences. In 2011, Sarah Wysocki, a teacher in Washington, DC, lost her job thanks to an algorithm.

She got glowing reviews from her principal and the parents of the children in her class, but an opaque algorithm determined that she was under-performing, so she was sacked; her school could not readily explain why.

There needs to be a way to feed back into algorithms, to inform them of whether they are making good or bad decisions, so that they can make better ones in future.

And there are other problems. Algorithms are seen as objective, and objectivity sounds like a good thing: surely it is better to decide how best to distribute scant resources, deliver public services or sentence criminals using a formula rather than someone’s potentially biased assessment?

But in reality algorithms are only as unbiased as the people who create them. They contain the values, judgements and opinions of their creators.

Someone decides which data to include (and what the data represent), which to exclude, and how to weight each component.

On that note, there are things that can’t be measured with numbers, or at least not measured well, like how much a teacher engages their students, or helps them with family or personal problems. This can mean using proxies, or that some potentially important factors aren’t considered at all.
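To make that concrete, here is a minimal, hypothetical sketch of the kind of weighted scoring formula such a system might use. The inputs, weights and proxies are invented for illustration; the point is that every one of them is a human judgement baked into the code.

```python
# Hypothetical example: a teacher "performance" score built from weighted proxies.
# Which inputs appear, how they are weighted, and the use of test-score gains as a
# proxy for engagement are all choices made by the algorithm's creator.

WEIGHTS = {
    "test_score_gain": 0.6,   # proxy for teaching quality; ignores classroom context
    "attendance_rate": 0.3,   # easy to measure, may say little about the teacher
    "peer_review":     0.1,   # the one human judgement, weighted least
}

def performance_score(teacher: dict) -> float:
    """Combine the chosen inputs into a single number between 0 and 1."""
    return sum(WEIGHTS[key] * teacher[key] for key in WEIGHTS)

# A teacher who engages students brilliantly but whose class showed weak test-score
# gains scores low, because engagement never appears in the formula.
print(performance_score({"test_score_gain": 0.4,
                         "attendance_rate": 0.95,
                         "peer_review": 1.0}))   # -> 0.625
```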

“Algorithms might replicate and exacerbate existing biases and discrimination.” 

Furthermore, how ‘good’ an algorithm is depends entirely on its creator’s intended outcome and what success looks like to them, which you, I or anyone else may not agree with.

Algorithms are also only as unbiased as the data they draw on.

Although algorithms could help to reduce unconscious biases, they might also replicate and exacerbate existing biases and discrimination, disguised by a notion of technological neutrality. 

For example, if a university uses a machine learning algorithm to assess applications for admission, it will be trained on historical data containing the biases – conscious or unconscious – of earlier admissions processes. The university may have decided to use an algorithm to eliminate bias, but it could end up entrenching it instead.
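As a rough sketch of how that can happen (the data, the features and the use of scikit-learn here are assumptions for illustration, not a real admissions system), a model fitted to past decisions simply learns to reproduce them:

```python
# Hypothetical sketch: training an admissions model on historical decisions.
# If past admissions discriminated, consciously or not, that pattern is in the
# labels, and the model learns to repeat it under a veneer of neutrality.
from sklearn.linear_model import LogisticRegression  # assumes scikit-learn is installed

# Invented historical records: [exam_score, attended_fee_paying_school]
X_history = [[82, 1], [79, 1], [85, 0], [88, 0], [70, 1], [90, 0]]
# Past decisions (1 = admitted). Here weaker applicants from fee-paying schools
# were admitted while stronger applicants from elsewhere were rejected.
y_history = [1, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(X_history, y_history)

# The model now predicts the old pattern for new applicants, rather than removing
# the bias the university hoped to eliminate: it will likely reject the stronger
# applicant and admit the weaker one from a fee-paying school.
print(model.predict([[86, 0], [72, 1]]))
```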

There’s also a notion of fairness to consider. There is a huge amount of excitement about big data at the moment, including the potential to create models to predict people’s behaviour.

But those models calculate probabilities, not certainties, that someone might be a bad employee, a risky borrower or a bad teacher. Even if the models are good at coming up with those probabilities, is it fair to treat people a certain way on the basis of a probability?
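A small, invented illustration of the point: the model can only attach a probability to a person, but the decision rule applied to that person is a hard yes or no.

```python
# Hypothetical sketch: a predictive model outputs a probability, yet the decision
# applied to the individual is binary.
def decide_loan(p_default: float, threshold: float = 0.5) -> str:
    """Turn an estimated probability of default into a hard decision about one person."""
    return "refuse" if p_default > threshold else "approve"

print(decide_loan(0.7))   # refuse  -- yet three in ten such applicants would repay
print(decide_loan(0.45))  # approve -- yet 45% of such applicants would default
```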

Scrutiny and standards

The new Government Transformation Strategy revolves around “better use of data”, and the Government Office for Science report on artificial intelligence tells us that the government is keen to increase its use of algorithms.

Parliament and the public need to be able to scrutinise that use. For starters, it should be clear when the government is using algorithms in decision-making.

“If government is using algorithms, it should be setting the right example.”

Because algorithms are opaque there is a lot of scope for hokum, for organisations – including government and its agencies – to be sold dodgy products or services. People need to know how to ask for evidence about algorithms, what questions to ask, and what their expectations and standards should be.

It’s vital for the accountability of government that its decisions are transparent, that people are treated fairly, and that hidden prejudice is avoided. If government is using algorithms, it should be setting the right example.

To that end, it could apply a set of standards. A suggested code of conduct was published in November last year, including five principles of good algorithms: responsibility, explainability, accuracy, auditability and fairness.

There might need to be an ombudsman or third party regulator for people affected by algorithmic decisions to go to.

The committee’s inquiry is timely. The new EU General Data Protection Regulation is due to take effect in Britain and other EU member states in 2018. This legislation will govern how decisions made by artificial intelligence can be challenged, and early drafts have included a “right to explanation”. This is something that should be guaranteed.

The issues with algorithms in decision-making aren’t future problems; they already exist. It is essential – for government, parliament and all of us as citizens – that we understand how decisions about our lives are being made.
