UK police have been using live facial recognition (LFR) technology for the best part of a decade, with the Met being the first force to deploy it at Notting Hill Carnival in 2016.
Since then, the use of the biometric surveillance and identification tool by the Met has ramped up considerably. While the initial deployments were sparse, happening only every few months, they are now run-of-the-mill, with facial recognition-linked cameras regularly deployed to events and busy areas of London.
Similarly, while South Wales Police (SWP) – the only other force in England and Wales to have officially deployed the “live” version of facial recognition – used the technology much more extensively than the Met during its initial roll-outs through 2017, it too is now deploying it much more frequently.
From the police’s perspective, the main operational benefits of facial recognition include the ability to find people they otherwise would not be able to (whether that be for safeguarding or apprehending offenders), and as a preventative measure to deter criminal conduct.
Almost immediately, however, the technology proved controversial. Out of the gate, police facial recognition was derided for having no firm legal basis, poor transparency and questionable accuracy (especially for women and those with darker skin tones), all while being rolled out with zero public or Parliamentary debate.
The Met’s choice to first deploy the technology at Carnival – the biggest Afro-Caribbean cultural event in Europe and the second-largest street carnival in the world outside of Brazil – also attracted criticisms of institutional racism.
In the case of SWP, its use of live facial recognition against activists protesting an arms fair in Cardiff eventually led to a legal challenge.
In August 2020, the Court of Appeal concluded that SWP’s use of the tech up until that point had been unlawful, because the force had failed to conduct an appropriate Data Protection Impact Assessment (DPIA) and comply with its Public Sector Equality Duty (PSED) to consider how its policies and practices could be discriminatory.
Although the court also concluded that SWP had violated the privacy rights of the claimant, the judgement ultimately found the problem was with how the technology had been approached and deployed by police, rather than a particular problem with the technology itself.
In this essential guide, learn about how the police have been approaching the technology, the ongoing concerns around its proportionality, necessity and efficacy, and the direction of travel set for 2024 and beyond.
What is facial recognition?
While LFR has received the most public attention and scrutiny, other facial recognition techniques have also started gaining popularity among UK law enforcement.
With LFR, the technology essentially acts as a biometric police checkpoint, with a facial recognition-linked camera scanning public spaces and crowds to identify people in real time by matching their faces against a database of images compiled by police.
Otherwise known as a “watchlist”, these databases are primarily composed of custody photos and can run into thousands of images for any given LFR deployment, but are deleted after each operation, along with any facial images captured during it.
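The matching step behind an LFR alert can be illustrated with a minimal, hypothetical sketch. Real systems use neural-network face embeddings and carefully tuned thresholds, but the basic principle – comparing a probe face against watchlist templates and alerting only above a similarity threshold – looks roughly like this (the subject names, vectors and threshold below are invented purely for illustration):

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical watchlist of pre-computed face embeddings (illustrative values only)
watchlist = {
    "subject_a": [0.9, 0.1, 0.3],
    "subject_b": [0.2, 0.8, 0.5],
}

# Illustrative threshold; real deployments tune this to trade off
# false alerts against missed matches
MATCH_THRESHOLD = 0.95

def check_face(embedding, watchlist, threshold=MATCH_THRESHOLD):
    """Return the best watchlist match above the threshold, or None (no alert)."""
    best_id, best_score = None, 0.0
    for subject_id, ref in watchlist.items():
        score = cosine_similarity(embedding, ref)
        if score > best_score:
            best_id, best_score = subject_id, score
    return (best_id, best_score) if best_score >= threshold else None

# A passer-by whose embedding closely resembles subject_a triggers an alert;
# a face matching no one on the list returns None and, in principle, is discarded
print(check_face([0.88, 0.12, 0.31], watchlist))
print(check_face([0.1, 0.2, 0.9], watchlist))
```

The deletion of non-matching faces after each deployment, described above, corresponds to discarding every probe embedding that returns `None` rather than retaining it.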
The second technique is retrospective facial recognition (RFR). While it works in a similar fashion to LFR by scanning faces and matching them against a watchlist, RFR can be applied to any already-captured images retroactively.
Unlike LFR, which is used overtly with specially equipped cameras atop a visibly marked police van, RFR use is much more covert, and can be applied to footage or images behind closed doors without any public knowledge that the surveillance has taken place.
Critics are particularly concerned by the increasing use of this technology, because the sheer number of image- and video-capturing devices in the modern world – from phones and social media to smart doorbell cameras and CCTV – is creating an abundance of material that can be fed into the software.
There is also concern about what its operation at scale means for human rights and privacy, as it smooths out the various points of friction that have traditionally been associated with conducting mass surveillance.
Looking at operator-initiated facial recognition (OIFR), the newest iteration of facial recognition being rolled out for UK police, the technology works via an app on officers’ phones that allows them to automatically compare the photos they’ve taken out in the field with a predetermined watchlist.
While national plans to equip officers with OIFR tools were only announced by UK police chiefs in November 2023, South Wales, Gwent and Cheshire police are already conducting joint trials of the tech.
Why is facial recognition so controversial?
A major question hanging over the police’s use of facial recognition is whether it is actually necessary and proportionate in a democratic society, especially given the lack of public debate about its roll-out.
Before they can deploy any facial recognition technology, UK police forces must ensure their deployments are “authorised by law”, that the consequent interference with rights (such as the right to privacy) is undertaken for a legally “recognised” or “legitimate” aim, and that this interference is both necessary and proportionate. This must be assessed for each individual deployment of the tech.
For example, the Met’s legal mandate document – which sets out the complex patchwork of legislation that covers use of the technology – says the “authorising officers need to decide the use of LFR is necessary and not just desirable to enable the MPS to achieve its legitimate aim”.
Responding to questions about how the force decided each individual deployment was both necessary and proportionate, the Met has given the same answer to Computer Weekly on multiple occasions.
“The deployment was authorised on the basis of an intelligence case and operational necessity to deploy, in line with the Met’s LFR documents,” it said, adding in each case that “the proportionality of this deployment was assessed giving due regard to the intelligence case and operational necessity to deploy, whilst weighing up the impact on those added to the watchlist and those who could be expected to pass the LFR system”.
However, critics have questioned whether scanning tens of thousands of faces every time LFR is used is both a necessary and proportionate measure, particularly when other, less intrusive methods are already available to police.
While there are a number of legally recognised purposes (such as national security, prevention of disorder or public safety) that state authorities can use to intrude on people’s rights, proportionality and necessity tests are already well established in case law, and exist to ensure these authorities do not unduly interfere.
“In the case of police, they’re going to say ‘it’s prevention of disorder or crime, or public safety’, so they get past first base, but then one of the questions is, ‘is this necessary in a democratic society?’” said Karen Yeung, an interdisciplinary professorial fellow in law, ethics and informatics at Birmingham Law School.
“There’s a very rich case law about what that means, but the core test is you can’t use a hammer to crack a nut. Even though a machete might be perfectly good for achieving your task, if a pen knife will do, then you can only use the pen knife, and the use of a machete is unlawful because it’s disproportionate … the basic way of explaining it is that it has to go no further than necessary to achieve the specified goal.”
In the case of RFR, while it has its own separate legal mandate document, there are similarities in the need to establish the purpose and grounds of every search made with the software, as well as the proportionality and necessity of doing so in each case.
There is currently no legal mandate published for OIFR tools, but police chiefs have said this version of the tech won’t be rolled out to forces until sometime in 2024.
Is facial recognition biased or discriminatory?
Closely linked with necessity and proportionality, there is also the question of who the cameras are ultimately aimed at and why. This in turn brings up questions about bias and discrimination, which from the police and government perspective can be solved via improved algorithmic accuracy.
When LFR first began being deployed by UK police, one of the major concerns was its inability to accurately identify women and people with darker skin tones, which led to a number of people being wrongly identified over its first few years of deployment.
However, as the accuracy of the algorithms in use by UK police has improved, the concerns have shifted away from questions of algorithmic bias towards deeper questions of structural bias in policing, and how that bias is reflected in its technology practices.
Civil society groups maintain, for example, that the technology is “discriminatory and oppressive” given repeated findings of institutional racism and sexism in the police, and that it will only further entrench pre-existing patterns of discrimination.
Others have argued the point further, saying that accuracy is a red herring. Yeung, for example, has argued that even if LFR technology gets to the point where it is able to identify faces with 100% accuracy 100% of the time, “it would still be a seriously dangerous tool in the hands of the state”, because “it’s almost inevitable” that it would continue to entrench existing power discrepancies and criminal justice outcomes within society.
How do facial recognition watchlists work?
Watchlists are essentially images of people’s faces that facial recognition software uses to determine whether someone passing the camera is a match. While images can come from a range of sources, most are drawn from custody images stored in the Police National Database (PND).
Given the well-documented disproportionality in policing outcomes across different social groups in the UK, the concern is that – in using historic arrest data and custody images to direct where facial recognition should be deployed and who it’s looking for respectively – people from certain demographics or backgrounds then end up populating the watchlists.
“If you think about the disproportionality in stop and search, the numbers of black and brown people, young people, that are being stopped, searched and arrested, then that starts to be really worrying because you start to get disproportionality built into your watchlists,” London Assembly member and chair of the police committee, Caroline Russell, previously told Computer Weekly.
Further, appearing before a Lords committee in December 2023, senior officers from the Met and SWP confirmed that both forces use generic “crime categories” to determine targets for their LFR deployments.
This means watchlists are selected based on the crime type categories linked to images of people’s faces (which are mostly custody images), rather than based on intelligence about specific individuals that are deemed a threat.
Another issue with the watchlists is that millions of custody images are held in the PND unlawfully, meaning people never convicted of a crime could potentially be included.
In 2012, a High Court ruling found that the police’s retention of custody images was unlawful, because unconvicted people’s information was being kept in the same way as that of those who were ultimately convicted. It also deemed the minimum six-year retention period to be disproportionate.
While the Met’s LFR Data Protection Impact Assessment (DPIA) says that “all images submitted for inclusion on a watchlist must be lawfully held by the MPS”, millions of custody images are still being unlawfully retained.
Writing to other chief constables to outline some of the issues around custody image retention in February 2022, the National Police Chiefs’ Council (NPCC) lead for records management, Lee Freeman, said the potentially unlawful retention of an estimated 19 million images “poses a significant risk in terms of potential litigation, police legitimacy and wider support and challenge in our use of these images for technologies such as facial recognition”.
In November 2023, the NPCC confirmed to Computer Weekly that it had launched a programme (not yet publicised) that will seek to establish a management regime for custody images, alongside a review of all custody image data currently held by UK police forces. This will be implemented over a two-year period.
Is facial recognition effective?
Outside of these issues, there are open questions about the effectiveness of facial recognition in policing.
Speaking with Computer Weekly, the former biometrics and surveillance camera commissioner, Fraser Sampson, for example, questioned the ways facial recognition has been deployed by police, noting the thinness of the evidential basis around its effectiveness in tackling serious crimes.
He said that, on the one hand, critics argue UK police “never really seem to catch anyone significant using it, let alone very dangerous or high-harm offenders”; on the other, those in policing will argue this is because it has been deployed so infrequently, and on relatively few people, that “we’re not going to have very spectacular results, so therefore, we’ve got to use it more to prove the case more”.
Given the Home Office’s repeated claim that LFR is a valuable crime prevention tool capable of stopping terrorists, rapists and other violent offenders, others have also questioned its effectiveness for this stated purpose, as the majority of arrests made are for other offences, such as drug possession, failing to appear in court or traffic violations.
Sampson has said the overt nature of the deployments – whereby police forces are required to publicly state when and where they are using it – can also hinder effectiveness, because it means wanted people will simply avoid the area.
He added that the argument then becomes about making the capability more covert to avoid this pitfall: “Then it becomes very sinister … you can’t just avoid one town, because it could be looking for you anywhere. The use case has made itself on that argument.”
Sampson further challenged the technology’s crime prevention capabilities on the basis that authorities are largely relying on its chilling effect, rather than its actual effectiveness in identifying wanted individuals. He said the logic here is that people “might behave” if they know the police have a certain capability and might be using it.
“It’s really challenging for the police then to find the evidence that it can work when used properly, without having to throw away all the safeguards to prove it, because once they’re gone, they’re gone,” said Sampson.
Is facial recognition legal?
There is no dedicated legislation in the UK to manage the police use of facial recognition technologies.
According to the Met Police’s legal mandate for LFR, the tech is regulated by a patchwork of the Police and Criminal Evidence Act (PACE) 1984; the Data Protection Act 2018; the Protection of Freedoms Act 2012; the Equality Act 2010; the Regulation of Investigatory Powers Act 2000; the Human Rights Act 1998; and common law powers to prevent and detect crime.
“These sources of law combine to provide a multi-layered legal structure to use, regulate and oversee the use of LFR by law enforcement bodies,” it says.
While the mandate also specifically references the Surveillance Camera Code of Practice as one of the “secondary legislative instruments” in place to regulate LFR use, the code is set to be abolished without replacement under the UK government’s data reforms.
Both Parliament and civil society have repeatedly called for new legal frameworks to govern law enforcement’s use of biometrics – including an official inquiry into police use of advanced algorithmic technologies by the Lords Justice and Home Affairs Committee (JHAC); two of the UK’s former biometrics commissioners, Paul Wiles and Fraser Sampson; an independent legal review by Matthew Ryder QC; the UK’s Equalities and Human Rights Commission; and the House of Commons Science and Technology Committee, which called for a moratorium on LFR as far back as July 2019.
During his time in office before resigning in October 2023, Sampson also highlighted a lack of clarity about the scale and extent of public space surveillance, as well as concerns around the general “culture of retention” in UK policing around biometric data.
Throughout the JHAC inquiry – which described the police use of algorithmic technologies as a “new Wild West” characterised by a lack of strategy, accountability and transparency from the top down – Lords heard from expert witnesses that UK police are introducing new technologies with very little scrutiny or training, continuing to deploy them without clear evidence about their efficacy or impacts, and have conflicting interests with their own tech suppliers.
In a short follow-up inquiry, this time looking exclusively at facial recognition, the JHAC found that police are expanding their use of LFR without proper scrutiny or accountability, despite lacking a clear legal basis for their deployments. The committee also specifically called into question whether LFR is even legal.
The committee added that, looking to the future, there is a real possibility of networked facial recognition cameras capable of trawling entire regions of the UK being introduced, and that there is nothing in place to regulate for this potential development.
Despite myriad calls for a new legislative framework from different quarters, government ministers have claimed on multiple occasions that there is a sound legal basis for LFR in the UK, and that “a comprehensive network of checks and balances” already exists.
What are police doing next with facial recognition?
Despite open questions about the legality of police facial recognition tools, the UK government has not been deterred from pushing for much wider adoption of the technology.
In November 2023, for example, the National Police Chiefs’ Council (NPCC) chair, Gavin Stephens, noted it would play a “significant role” in helping UK policing become “an effective science-led service”.
In May 2023, an interim report into upcoming UK government data reforms revealed that policing minister Chris Philp was pushing for facial recognition technology to be rolled out by police forces across England and Wales, and would likely push to integrate the tech with police body-worn video cameras.
He later wrote to police chiefs in October 2023 setting out the importance of harnessing new technologies for policing, urging them to double the amount of RFR searches they are conducting and deploy LFR much more widely.
At the start of the same month, Philp, speaking at a fringe event of the Conservative Party Conference, outlined his plans to integrate data from the PND, the Passport Office and other national databases with facial recognition technology to help catch shoplifters and other criminals.
The plan was met with criticism from campaigners, academics and Scottish biometrics commissioner Brian Plastow, who said the “egregious proposal” to link the UK’s passport database with facial recognition systems is “unethical and potentially unlawful”.
Going forward, there are major concerns about what the UK government’s proposed data reforms mean for police technologies like facial recognition.
Some have argued, for example, that the forthcoming Data Protection and Digital Information Bill will weaken oversight of the police’s intrusive surveillance capabilities if enacted as is, because it would abolish the surveillance camera code of practice and collapse facial recognition into a mere data protection issue under the purview of the Information Commissioner’s Office (ICO).