AI fought the law?

The relationship between artificial intelligence and the law is receiving ever greater focus – while somehow becoming less clear. PublicTechnology looks at the role that regulators and lawmakers will play in the coming years


“I think the Law Society report makes it quite clear that the very scattered attempts to deploy AI systems and facial recognition by the police have not been lawful,” said Professor Andrew Howes, at a recent roundtable convened by the Committee on Standards in Public Life. “We may there see a microcosm, a little example of what is going to happen in the future if the existing regulation is not adhered to and if there is not additional recognition to make sure that these kinds of systems are deployed in a way that is consistent with the public interest.”

Howes, head of the University of Birmingham’s School of Computer Science, added: “As I understand it, there is a bit of a gap here in terms of the regulation, in that the use of facial recognition technologies does not come under the auspices of any existing regulatory authority, such as the existing commissions for the use of the data. That might be something that needs addressing.”

He was wrong in his first assertion – at least according to two High Court judges, who last month ruled that trials of automated facial recognition conducted by South Wales Police over the last couple of years have been lawful. The court’s ruling followed a legal challenge brought by Cardiff man Ed Bridges and human rights group Liberty – who have indicated that they intend to appeal the decision.

But, even if the lawfulness of AFR is, ultimately, settled beyond dispute, Professor Howes is almost certainly right about one thing: the argument is likely to be a taste of what is to come in the years ahead.


50 – number of separate deployments of AFR examined by the High Court

850 – number of arrests South Wales Police claims to have made to date using AFR

76% – accuracy rate of the technology, according to the force

24 hours – length of time that data on potentially matched faces is kept on AFR systems; this extends to 31 days if a match with an individual on a watchlist is confirmed

Three – grounds on which Ed Bridges and Liberty challenged the legality of the police’s use of AFR, encompassing data-protection, human rights, and equalities laws
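
That retention rule is simple enough to express directly in code. The sketch below is purely illustrative – the function and names are invented for the purpose, not drawn from any real police system:

```python
from datetime import datetime, timedelta

# Illustrative retention periods, as described in the figures above:
# 24 hours for an unconfirmed potential match, extending to 31 days
# once a match against a watchlist is confirmed.
DEFAULT_RETENTION = timedelta(hours=24)
CONFIRMED_RETENTION = timedelta(days=31)

def deletion_deadline(captured_at: datetime, match_confirmed: bool) -> datetime:
    """Return the time by which captured face data must be deleted."""
    retention = CONFIRMED_RETENTION if match_confirmed else DEFAULT_RETENTION
    return captured_at + retention

# An unconfirmed capture made now must be purged within 24 hours.
print(deletion_deadline(datetime.utcnow(), match_confirmed=False))
```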


The importance of the question of how AI interacts with law and regulation – and the current lack of clarity – is evidenced by the committee’s interest in the issue. With a membership made up of heavyweights of politics, public service, industry, academia and civil society, the committee’s job is to “advise the prime minister on ethical standards across the whole of public life in England”.

The committee’s review of how AI will impact public standards began in March. It seeks to ascertain “whether the existing frameworks and regulations are sufficient to ensure that high standards of conduct are upheld as technologically assisted decision-making is adopted more widely across the public sector”.

Another roundtable hosted by the committee during its evidence-gathering examined which – if any – existing regulations or legal frameworks address the issue of how the law should treat algorithmic decisions.

Alexander Babuta, a research fellow at the Royal United Services Institute, the defence and security think tank, told the committee that watchdogs will need to keep a close eye on technological advancement to ensure their regulations stay up to date.

“Whether you are talking about human rights, the Equality Act or administrative law, there are so many aspects of various different legal frameworks that rely on this whole issue of the professional decision maker, and discretion of the person in a public office, who is making a decision that affects an individual,” he said. “If you start implementing fully automated processes for those decisions that may have some kind of legal effect on the individual, you have to review so many other aspects of the law that it just would not really be workable. I think GDPR is sufficiently tech agnostic that it will remain relevant and viable for the foreseeable future, but those sector-specific organisational policies will need to be continuously reviewed as the technology develops.”

Babuta was asked by committee chair Lord Jonathan Evans about how the law might interact in the specific example of autonomous vehicles.


“If an automated vehicle kills someone, who is responsible? In the eyes of the law, the human being sitting in the seat must always be responsible because we have this fundamental principle of mens rea that in order to be guilty of committing a crime, there has to be some kind of human conscious awareness,” Babuta said. “Computer scientists may argue with this but, in the eyes of the law, an AI could never pass that mens rea test so it could never be found guilty in a court of law.”

The RUSI researcher said that, essentially, the law could respond to AI in one of two ways: it could seek to pre-emptively put in place “the checks and balances and the structures that are needed to account for this new technology”; or it could “wait for the case law to develop and… [for] challenges to go through the courts to let precedent establish itself”.

Working out how best to treat the decisions made by machines clearly adds another layer of complexity to the work of regulators. But more complicated still is the question of what the law can do about the systematic biases and flaws that might have caused an algorithm to make an unfair decision.

Eleonora Harwich, research director for think tank Reform, told PublicTechnology the relevant pieces of legislation that could, theoretically, be used to tackle the issue of bias are the Human Rights Act, the Equality Act, and the EU General Data Protection Regulation – which will, after Brexit, be superseded by its UK equivalent, the Data Protection Act.

“Those are the three pieces of legislation that we could use to try and police bias; but the main thing is that it is no regulator’s responsibility to do so – at least not in the data-driven technology space,” Harwich said. “The Equality Act says you’re not allowed to discriminate against a group. But, by the time you get to breaching the act [as a result of biased data], it is too late. The current approach is that it is up to [technology] manufacturers to police themselves to have an ethical framework.”
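
To make the idea of “policing bias” concrete, one rough screening check a self-policing manufacturer might run is a disparate-impact comparison across groups. The sketch below is a hypothetical illustration – the data, group labels, and the four-fifths threshold are assumptions, not anything the Equality Act prescribes:

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, favourable: bool) pairs."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += int(outcome)
    return {g: favourable[g] / totals[g] for g in totals}

def flag_disparate_impact(decisions, threshold=0.8):
    """Flag groups whose favourable-outcome rate falls below `threshold`
    times the best-treated group's rate (the informal "four-fifths"
    screening heuristic)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Invented example data: group B receives favourable outcomes 50% of
# the time, against 80% for group A, so B is flagged (0.5 < 0.8 * 0.8).
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 50 + [("B", False)] * 50)
print(flag_disparate_impact(sample))  # {'B': 0.5}
```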

Harwich says that, while the government’s Office for Artificial Intelligence could play a role in “promoting certain types of standards”, she does not believe a dedicated AI regulator is a good idea. Rather, regulators serving individual sectors, such as financial services or healthcare, will need to find their own ways of addressing these issues.

Explain speaking
The regulator with the most obvious role to play – the Information Commissioner’s Office – is already collaborating with the UK’s national academic institution for AI and data science: the Alan Turing Institute. 

The two organisations are working together on an initiative dubbed Project ExplAIn.

In a report published in June, the ICO said: “As the UK regulator for the GDPR, the ICO understands that, while innovative and data-driven technologies create enormous opportunities, they also present some of the biggest risks related to the use of personal data. We also recognise the need for effective guidance for organisations seeking to address data-protection issues arising from the use of these technologies. In particular, AI is a key priority area in the ICO’s Technology Strategy.”

It added: “Explaining AI decisions is one part of the ICO’s work on exploring the data protection implications of AI.”

The ICO and the Turing are due to publish an “explainability framework” in the near future, setting out guidance for data-processing organisations.

“The objective of Project ExplAIn is to produce useful guidance that assists organisations with meeting the expectations of individuals when delivering explanations of AI decisions about them,” they said. “This will support organisations to comply with the legal requirements above, but the guidance will go beyond this. It will promote best practice, helping organisations to foster individuals’ trust, understanding, and confidence in AI decisions.”
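
What such a per-decision explanation might look like in practice is easiest to see with a toy example. The sketch below assumes a simple linear scoring model with invented feature names and weights – it illustrates the general idea, not anything Project ExplAIn or the ICO prescribes:

```python
# Invented weights for a toy credit-style model; a positive weight
# pushes towards approval, a negative one against.
WEIGHTS = {"income": 0.4, "years_at_address": 0.2, "missed_payments": -0.9}

def explain_decision(applicant: dict, threshold: float = 1.0) -> dict:
    """Decompose a linear score into per-feature contributions so the
    outcome can be explained in human-readable terms."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return {
        "decision": "approve" if score >= threshold else "decline",
        "score": round(score, 2),
        # Most influential factors first.
        "reasons": sorted(contributions.items(), key=lambda kv: -abs(kv[1])),
    }

print(explain_decision({"income": 3.0, "years_at_address": 2.0, "missed_payments": 1.0}))
```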

MPs on the House of Commons Science and Technology Committee would like to see more than just “guidance” in this area.

In a report published last year, the committee urged the government to consider the introduction of new laws guaranteeing citizens a “right to an explanation” for algorithmic decisions related to their life. The report also floated the idea of going even further, and providing legal recourse for citizens to challenge AI-powered decisions.

“The right to explanation is a key part of achieving accountability,” MPs said. “We note that the government has not gone beyond the GDPR’s non-binding provisions and that individuals are not currently able to formally challenge the results of all algorithm decisions or, where appropriate, to seek redress for the impacts of such decisions. The scope for such safeguards should be considered by the Centre for Data Ethics and Innovation (CDEI) and the ICO in the review of the operation of the GDPR that we advocate.”

In its official response, the government said that the CDEI will “have an ongoing role in reviewing the adequacy of existing regulatory frameworks”. But it did not address the issue of potential rights guaranteeing an explanation and the ability to challenge an algorithmic decision. Nor did it accept the committee’s recommendation that responsibility for algorithms should form part of a ministerial brief.

However government and the law interact with AI in the years to come, the technology will not regulate itself – despite some of the grandiose claims its advocates are sometimes apt to make.

Harwich from Reform said: “We need to be acutely aware of what it can achieve. The idea that it can abdicate you from your duty to think – from your duty as a minister – no AI will abdicate you from that stuff. It will not absolve you from making decisions.”

This article forms part of PublicTechnology’s dedicated AI Week, in association with UiPath. Click here to read lots more content on a wide range of issues. An exclusive webinar discussion – in which a panel of private and public sector experts debate all the major issues related to government’s use of AI – is now available to view on demand. Click here to register, free of charge.

Sam Trendall
