Met Police claims series of Oxford Circus arrests in facial-recognition deployment

London force claims success from use of controversial technology, which was accompanied by a public information effort – as well as protesters

Credit: Isabell Schulz/CC BY-SA 2.0

A deployment of live facial recognition technology enabled London’s Metropolitan Police Service to make a series of arrests earlier this month, the force has claimed.

LFR systems were deployed in the Oxford Circus area, where their use was “clearly signposted” and accompanied by “local neighbourhood officers, [who] engaged with the public to explain the technology and hand out leaflets” with more information, according to the Met.

Official records released by the force indicate that the technology was used for two defined purposes: tackling violent crime and other serious offences; and identifying wanted suspects.

On 7 July, facial-recognition cameras at the central London landmark issued four alerts, all of which were recorded as legitimate, and three of which resulted in individuals being arrested shortly afterwards. A woman and a man were each detained and charged with offences related to possession and intent to supply class A drugs; the latter was also wanted over a previous failure to appear in court.

Another man – who the Met claims was wanted on a warrant for the assault of an emergency worker – was also arrested.

All three appeared in court the following day.


Detective chief superintendent Owain Richards said: “Our use of live facial recognition technology has directly helped us to arrest three wanted individuals and officers have been able to successfully remove them from our streets. It is a fantastic result from the deployment and it links to one of the Met’s top priorities of tackling serious violent crime. This innovative technology, alongside our officers, enables us to find people that pose a serious risk to our community so that we can keep the people of London safe.”

Facial recognition was deployed again at Oxford Circus on 14 and 16 July. In all three uses this month, there was a “watchlist” of about 6,700 wanted individuals and, during each deployment, the technology captured about 35,000 people on camera.

Across the latter two days, the system issued a total of four alerts – of which two were false, according to the Met’s data. One arrest was made.

Even this patchy success rate is a major improvement on previous LFR trials that took place at Oxford Circus in February 2020, during which seven of the eight alerts issued were recorded as false.

There was then a two-year gap before the Metropolitan Police used the technology again. 

A deployment at Oxford Circus on 28 January this year resulted in 11 alerts – equating to almost one in every 1,000 of the 12,000 people captured on camera that day. Only one alert was false, according to Met data, and four people were arrested.

Conversely, trials that took place in Leicester Square in March did not result in a single alert being issued in relation to the 10,000-plus people seen by the cameras.

Regardless of its success rate, the technology remains highly controversial. It has been the subject of various court challenges – including a case, supported by human rights campaign group Liberty, in which the Court of Appeal ruled that a previous use of the technology by South Wales Police had been unlawful.

The advocacy organisation continues to campaign vociferously against the use of the technology. Other prominent critics include the likes of Big Brother Watch, which organised protests at Oxford Circus last weekend.

At the site of a previous protest led by the group against Metropolitan Police LFR trials in 2019, an east London man was issued with a £90 fine for disorderly behaviour – after choosing to cover his face while passing the cameras.

The London force said that, as part of the most recent deployment, it also worked with government-owned standards authority the National Physical Laboratory to test the algorithms used by the LFR systems. The aim of the testing was to “understand more about its accuracy, and any bias shown when deployed in a realistic operational policing environment, [which] will help inform how we continue to use facial technology legally and fairly”.

 

Sam Trendall
