AI Week: A bias view

The potential for technology to embed and amplify systemic biases is seen as one of the biggest inherent risks of deploying AI and automation at scale. PublicTechnology talks to experts about the key issues and how they can be addressed

Credit: Mohamed Hassan from Pixabay 

The risk that artificial intelligence and automation will not eradicate human bias but will, rather, embed and industrialise it is seen by many as the biggest danger of using the technology in the delivery of public services. 

Such is the concern around the issue that, alongside targeting, bias is one of the two areas of initial focus chosen by the government’s recently established Centre for Data Ethics and Innovation. The CDEI, which is housed within the Department for Digital, Culture, Media and Sport, is working with government’s Race Disparity Unit on a review of the extent to which algorithms could reflect and automate human biases, as well as the potential impact of this, and how it can be combated.

In its initial interim report on its work so far, which was published in July, the CDEI said: “Algorithms can be supportive of good decision-making, reduce human error and combat existing systemic biases. But issues can arise if, instead, algorithms begin to reinforce problematic biases, for example because of errors in design or because of biases in the underlying data sets. When these algorithms are then used to support important decisions about people’s lives, for example determining whether they are invited to a job interview, they have the potential to cause serious harm.”

The ongoing review ultimately seeks to answer three overarching questions: whether there is sufficient access to high-enough-quality data to mitigate bias; what technological and statistical products and services are currently available that could help deal with the issue; and who should be responsible for managing, securing, and auditing the performance of decision-making algorithms.

The CDEI review will be examining the issue of algorithmic bias in the local government, financial services, and recruitment sectors, but it has begun by looking at policing.

The use of AI and analytics by the police, and the potential harm that would be caused by bias in that context, has already proven highly controversial.

The National Data Analytics Solution programme run by the West Midlands Police aims to use data and analytics to gain insights in three areas related to crime and internal law-enforcement processes. 

The project aims to help police get better at recognising victims of modern slavery, as well as predicting when staff are in danger of suffering a stress-related illness. 

But its most contentious programme of work aims, ultimately, to help officers assess young people’s risk of committing or falling victim to gun or knife crime.


Policing, financial services, recruitment, and local government
Areas where the CDEI and government’s Race Disparity Unit will be reviewing the potential impact of bias

Six
Number of commonly defined dimensions of data quality: completeness; timeliness; consistency; integrity; conformity; and accuracy

£9.5m
Money invested to date by the Home Office in the West Midlands Police National Data Analytics Solution

4.8
Current average rating of the GP at Hand app on Apple’s App Store, based on about 23,200 users


Much of the coverage of the programme – which this summer received an additional £5m of central-government funding to continue its work – has made unfavourable comparisons between the project and various dystopian movies.

Speaking to PublicTechnology earlier this year, Iain Donnelly, who was then superintendent of West Midlands Police, acknowledged people’s discomfort with the use of analytics.

“Any use of predictive analytics in the UK will never be about looking at the general population, or looking at people going about their daily business,” he said. “This will always be about helping us make sense of increasing volumes of data – that we are already lawfully in possession of – in a way that helps us make better decisions.”

Donnelly added that the risk of embedded bias was one he and his team were “very mindful of”.

“We are aware that, when we do something around predictive analytics in this space, there is a strong possibility that there will be an over-representation of individuals from BME communities being identified,” he said. “To be as transparent as we possibly can, at the technical level of data, we are not including ethnicity in the data. We are excluding anything to do with ethnicity in the analytics. We are even considering stripping out anything to do with geography, to try and stop the possibility that we are reinforcing a geographical bias.”
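
The step Donnelly describes, excluding ethnicity and potentially geography from the data before it is analysed, is an example of removing protected attributes ahead of modelling, sometimes called fairness through unawareness. Below is a minimal sketch of that step, not the NDAS implementation; the column names are hypothetical, and excluded attributes can still leak back in through correlated features, which is why the programme is also weighing up stripping geographic fields.

```python
# A minimal sketch (not the NDAS implementation) of excluding protected
# attributes and potential proxies before a risk model is trained.
# Column names here are hypothetical.
import pandas as pd

def strip_sensitive_features(records: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the data with ethnicity and geography fields removed."""
    sensitive_columns = [
        "ethnicity",       # protected attribute, excluded outright
        "postcode",        # geographic fields that can act as proxies
        "home_area_code",
    ]
    present = [c for c in sensitive_columns if c in records.columns]
    return records.drop(columns=present)

# Toy example
raw = pd.DataFrame({
    "age": [19, 24],
    "prior_contacts": [3, 0],
    "ethnicity": ["A", "B"],
    "postcode": ["B1 1AA", "CV1 2BB"],
})
print(strip_sensitive_features(raw).columns.tolist())  # ['age', 'prior_contacts']
```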

A healthy outcome?
More recently, bias concerns were raised about the chatbot that forms part of the NHS-endorsed GP at Hand app. The remote GP app, created by Babylon Health, now serves as the NHS GP for more than 50,000 UK citizens – including health secretary Matt Hancock.

An anonymous NHS consultant, who goes by the name Dr Murphy online, recently posted a video demonstrating that the Babylon chatbot offered very different diagnoses for symptoms and circumstances that were identical save for one thing: the gender of the supposed patient.

For a 59-year-old male smoker presenting with sudden-onset chest pain and a feeling of nausea, the program concluded that the cause could be one of several very serious cardiological conditions – including a possible heart attack.

“This is such a new area – there is always an opportunity to improve things. Our product is really good – and safe – but there is always the opportunity to make it a bit better.”
Dr Keith Grimes, Babylon Health

For a female with exactly the same circumstances and reported symptoms, the bot presented two possible causes: depression; or a panic attack.
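
What Dr Murphy did by hand is, in effect, a counterfactual test: present the system with inputs that are identical except for a protected attribute and see whether the output changes. A minimal sketch of automating such a check is below; `get_triage_outcome` is a hypothetical stand-in for the system under test, not Babylon’s API. Such a check only flags the divergence; whether the difference is clinically justified is a separate question, which is where interpretations split.

```python
# A minimal sketch of a counterfactual check: run the same case with only
# the gender field changed and flag any divergence in the triage outcome.
# `get_triage_outcome` is a hypothetical stand-in, not Babylon's API.
from typing import Callable, Dict

def counterfactual_gender_check(
    get_triage_outcome: Callable[[dict], str],
    case: dict,
) -> Dict[str, object]:
    """Run the case as male and as female and report whether outcomes differ."""
    outcomes: Dict[str, object] = {}
    for gender in ("male", "female"):
        outcomes[gender] = get_triage_outcome({**case, "gender": gender})
    outcomes["diverges"] = outcomes["male"] != outcomes["female"]
    return outcomes

# The scenario described in the article
chest_pain_case = {
    "age": 59,
    "smoker": True,
    "symptoms": ["sudden-onset chest pain", "nausea"],
}
# result = counterfactual_gender_check(my_triage_model, chest_pain_case)
```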

Many observers online concluded that Babylon’s product reflected a clear gender bias – one that could threaten people’s lives. 

Dr Keith Grimes, Babylon Health’s clinical artificial intelligence and innovation director, still works as an NHS GP one day a week. He told PublicTechnology last month that, although the product was functioning correctly, the company would study the case to see what could be learned. He stressed that the chatbot is an optional tool, and does not need to be used before patients book a GP consultation.

“Our app was working as intended at the time – it was providing information and a triage outcome,” Grimes said. “Clearly there are going to be differences in cases and in symptoms between men and women – they are biologically very different.”

He added: “The cases presented on Twitter were a snapshot of a final outcome. We have reviewed this since then, and we are confident that the medical evidence supports the outcomes. All the same, there are long-standing concerns about systematic bias in medical research and literature – either conscious or unconscious. We scour [our services] to make sure that it does not show any signs of that.
 
“We were very, very careful not only in public testing, but also in being very aware of any feedback going forward. We are still trying to understand what might lead it to behave in this way. This is such a new area – there is always an opportunity to improve things. Our product is really good – and safe – but there is always the opportunity to make it a bit better. We can find out what happened here, and we can improve our processes.”

Data quality
Eleonora Harwich, director of research at think tank Reform, says that bias can germinate in several ways.

Data quality is a significant cause, she says. Anyone using health data should assess it against the six commonly defined dimensions of data quality: completeness; timeliness; consistency; integrity; conformity; and accuracy.
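
Some of those dimensions lend themselves to automated checks. Below is a minimal sketch covering just completeness and timeliness on a hypothetical dataset; the remaining dimensions generally need domain-specific rules.

```python
# A minimal sketch of automated checks for two of the six dimensions:
# completeness (share of non-missing values) and timeliness (age of the
# most recent record). Field names and data are hypothetical.
import pandas as pd

def completeness(frame: pd.DataFrame) -> pd.Series:
    """Fraction of non-missing values in each column."""
    return frame.notna().mean()

def timeliness(frame: pd.DataFrame, timestamp_column: str) -> pd.Timedelta:
    """How old the most recent record is."""
    latest = pd.to_datetime(frame[timestamp_column]).max()
    return pd.Timestamp.now() - latest

records = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "diagnosis_code": ["I21", None, "F41"],
    "recorded_at": ["2019-06-01", "2019-07-15", "2019-08-20"],
})
print(completeness(records))              # diagnosis_code shows about 0.67
print(timeliness(records, "recorded_at"))
```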

But even that is no guarantee of eradicating bias.

“Are you going to take the view that the data you are using is objective? Obviously, there are laws of science and physics, but I do think that the ways we go about collecting data are a social construct – and there are many different ways of measuring its purity,” Harwich says. “If your data is from [NHS] trust A – which does not have a representative population – and you’re then using that data to train the model which is then used by trust B or trust C – it will not work.”

While acknowledging that bias presents a “big problem” for public sector use of AI, Professor Helen Margetts, director of the Alan Turing Institute’s Public Policy Programme, also believes that increased access to data presents a chance to combat it.

“First of all, we’re seeing this bias as explicit for the first time, because we have the data. We didn’t used to have the data, so we couldn’t tell you whether people were disproportionately sentenced or suspected because of their race, or hired because of their gender, for example,” she says. “The other thing is: we might be able to do something about it… [before], we didn’t know so much about the decision-making process, and we couldn’t delve in or tweak it. But, now, we might be able to.”
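
Making the bias visible in the way Margetts describes can be as simple as computing outcome rates by group once decision data is recorded. The sketch below does this for a hypothetical recruitment shortlist, recruitment being one of the sectors in the CDEI review; the data and group labels are illustrative only.

```python
# A minimal sketch of measuring disparity in recorded decisions: compare the
# rate of a positive outcome across groups. The data and group labels are
# hypothetical, for illustration only.
import pandas as pd

def outcome_rates_by_group(decisions: pd.DataFrame,
                           group_column: str,
                           outcome_column: str) -> pd.Series:
    """Positive-outcome rate for each group."""
    return decisions.groupby(group_column)[outcome_column].mean()

def disparity_ratio(rates: pd.Series) -> float:
    """Lowest group rate divided by the highest; 1.0 means parity."""
    return float(rates.min() / rates.max())

decisions = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B"],
    "shortlisted": [1, 0, 1, 1, 1],
})
rates = outcome_rates_by_group(decisions, "group", "shortlisted")
print(rates)                   # A: 0.5, B: 1.0
print(disparity_ratio(rates))  # 0.5
```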

This article forms part of PublicTechnology’s dedicated AI Week, in association with UiPath. Look out over the coming days for lots more content.

Sam Trendall
