AI Week: can we forgive a robot? And three other important questions

To introduce AI Week, PublicTechnology editor Sam Trendall runs through four of the biggest questions facing the technology

 

Credit: Véronique Debord-Lazaro/CC BY-SA 2.0

Hello, and welcome to AI Week on PublicTechnology.

Over the course of the next five days, we will be bringing you a wide range of content dedicated to the technology that has surely more potential than any other to transform government and public services.

Today we will provide an introduction to artificial intelligence, looking at the journey the public sector has so far taken with the technology, and where it has led. Tomorrow we will profile some existing use cases, then later in the week we will move on to looking at the ethical, legal, and technical challenges, the respective roles of the various stakeholders and, finally, we will examine what the future may hold.

AI Week – which is being run by PublicTechnology in association with UiPath – will bring our readers an array of features, interviews, analysis and case studies. From Wednesday, you will also be able to view an exclusive webinar discussion in which an expert panel of public- and private-sector representatives will debate all the major issues. Click here to register to view on demand – free of charge.

In my time reporting on public sector digital and data, no technology has presented government with as many possibilities as artificial intelligence – nor posed it as many difficult questions.

Let’s begin by looking at four of the most pressing questions that are yet to be fully answered.
 

Does it work?
This may seem like an obvious – and slightly glib – place to start, but it remains an important question. There are plenty who would argue that AI does not yet work well enough to be deployed in the delivery of public services.

Police trials of automated facial-recognition (AFR) software have been among the most high-profile uses of AI in a public-sector environment to date. They have also been, perhaps, the most controversial.

The trials have raised concerns among various civil-society groups, including human-rights organisation Liberty, which has supported a court case brought by Cardiff man Ed Bridges questioning the lawfulness of South Wales Police’s use of AFR. High Court judges recently ruled that the technology is lawful, but Bridges and Liberty have indicated that they will appeal the decision.

Privacy advocacy organisation Big Brother Watch has also been vocal in raising concerns about facial recognition. The group published a report last year claiming that the identifications made by the technology are up to 95% inaccurate.

Which is approximately 95% less accurate than it needs to be before many people will accept AI as a part of life.
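That 95% figure refers to the share of the system’s alerts that turned out to be wrong, and it is worth seeing how such a number can arise even when each individual face comparison looks fairly reliable. The following is a minimal sketch of the underlying base-rate effect, using purely hypothetical numbers rather than figures from any real trial:

```python
# A minimal sketch of how facial-recognition alerts can be overwhelmingly
# wrong even with a seemingly small per-face error rate.
# All numbers below are hypothetical, for illustration only.

crowd_size = 100_000         # faces scanned at a hypothetical event
on_watchlist = 50            # people in the crowd who are genuinely wanted
true_positive_rate = 0.90    # chance a wanted face is correctly flagged
false_positive_rate = 0.01   # chance an innocent face is wrongly flagged

true_alerts = on_watchlist * true_positive_rate
false_alerts = (crowd_size - on_watchlist) * false_positive_rate

share_wrong = false_alerts / (true_alerts + false_alerts)
print(f"True alerts:  {true_alerts:.0f}")                     # 45
print(f"False alerts: {false_alerts:.0f}")                    # ~1,000
print(f"Share of alerts that are wrong: {share_wrong:.0%}")   # ~96%
```

Because genuine matches are rare in a large crowd, even a one-in-a-hundred per-face error rate can mean that the overwhelming majority of alerts point at the wrong person.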
 

Will it eliminate jobs?
Even if AI does work, many remain concerned that it might work a little too well. The question of the extent to which automation will cause job losses remains an important one. And with good cause: a report earlier this year from the Office for National Statistics predicted that 1.5 million workers across the UK are at risk of losing their job as a result of tasks being automated.

And they will not be the first to do so.

The ONS also reported that, between 2011 and 2017, a quarter of jobs as cashiers or checkout assistants were eliminated – in large part due to the increased use of automated checkouts.

If this trend continues and expands, government will need to consider what it can do to help create new jobs and support those whose positions are most at risk.

Can data ever be unbiased?
The issue of how best to recognise and remove bias embedded in data is one of the biggest problems AI needs to solve before the technology can responsibly be implemented by those delivering public services.

But some would question whether data can ever be truly unbiased, arguing that information will always bear the hallmarks of the humans who collected or collated it, and of the systems and structures to which it pertains.

If the humans, or the system in which they operated, were biased, then the data in question will necessarily reflect that, some would argue.

If this is the case, you cannot disentangle the data from the bias it reflects any more than you can travel back in time and mend the many ways in which society was – and still is – racist, sexist, or otherwise discriminatory. 

Of course, you can try to ensure that any new data-collection exercises are as alert as possible to potential bias, and take steps to avoid it. What you cannot do quite so easily is build a base of years’ – or even decades’ – worth of comprehensive, bias-free data on which to properly train your algorithms.
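To make that concrete, one common first step in recognising bias embedded in historical data is simply to compare outcome rates across groups. A minimal sketch, using a tiny invented dataset rather than any real public-sector records:

```python
# A minimal, hypothetical sketch of one basic bias check: comparing
# favourable-outcome rates across groups in historical decision data.
# The records below are invented purely for illustration.

from collections import defaultdict

# (group, outcome) pairs, where 1 means a favourable decision
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

totals = defaultdict(int)
favourable = defaultdict(int)
for group, outcome in records:
    totals[group] += 1
    favourable[group] += outcome

rates = {g: favourable[g] / totals[g] for g in totals}
for group, rate in rates.items():
    print(f"{group}: favourable-outcome rate {rate:.0%}")

# A large gap between groups does not prove discrimination by itself,
# but it is exactly the kind of historical pattern a model trained on
# this data would learn to reproduce.
disparity = min(rates.values()) / max(rates.values())
print(f"Disparity ratio: {disparity:.2f}")
```

A check like this can only surface the pattern; judging whether the gap reflects genuine bias – and deciding what to do about it – remains a human responsibility.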
 

Can we forgive a robot?
The promise of AI is that it could remove the possibility of human error.

What it cannot eradicate, however, is computer malfunction; all technologies go wrong at some point. If an automated system does so while performing a back-office administrative task, the fallout will be minimal.

But, if and when AI is deployed in delivering front-line citizen services, we may have to grapple with some pretty complex philosophical questions.

Let’s say, for instance, that one day in the near future an algorithm could be proven to be 99.9% accurate in detecting and diagnosing cancer – more effective than any human oncologist. At an intellectual level, it’s very easy to make the case that deploying the technology would save lives.

But humans don’t always – in fact, don’t often – react to things at the intellectual level. If a loved one of yours was among the 0.1% who were misdiagnosed, there would be no bigger picture. 
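Some rough, purely illustrative arithmetic shows why that 0.1% is anything but an abstraction at population scale:

```python
# Rough, illustrative arithmetic only: even a 99.9%-accurate diagnostic
# tool produces a large absolute number of errors at population scale.
# The annual screening volume below is a hypothetical assumption.

accuracy = 0.999
screenings_per_year = 2_000_000   # hypothetical number of tests per year

expected_errors = screenings_per_year * (1 - accuracy)
print(f"Expected misdiagnoses per year: {expected_errors:,.0f}")  # 2,000
```

On those assumed numbers, that is thousands of families every year for whom the statistical case offers little comfort.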

When humans err, we can understand and contextualise it and, hopefully, reach a point of absolution.

But can we truly forgive a machine? 

We will likely have the chance to find out the answer to this question in the coming years. But, in the meantime, even the most sophisticated algorithm would struggle to predict the outcome.

 

This article forms part of PublicTechnology’s dedicated AI Week, in association with UiPath. Look out over the coming days for lots more content – including an exclusive webinar in which experts from the public and private sector will discuss all the major issues.

Tomorrow, we will bring you case studies of how two of the public sector’s biggest organisations – the Department for Work and Pensions and HM Revenue and Customs – are using artificial intelligence in their operations. On Wednesday, an exclusive webinar discussion – in which a panel of private- and public-sector experts will debate all the major issues related to government’s use of AI – will be available to view on demand. Click here to register to do so – free of charge.

 

 

Sam Trendall
