
AI is invading UK policing, but there's little proof it's useful

Police use of AI is unregulated and lacks transparency, and there is little research into whether it works. Here's how it could be improved


Police forces around the UK have realised the potential of artificial intelligence – and now they're starting to use it. Trials include facial recognition systems, tools that predict where crime will happen and systems that assist officers with decision making. But the overall picture is a complex mess, and there's little proof that the technologies are actually benefitting anyone.


"These trials don't appear to be evaluated in a very transparent way," says Alexander Babuta​, a national security and resilience research fellow at the Royal United Services Institute for Defence and Security Studies (RUSI). "Before we move ahead with large scale deployment of this kind of technology we really need more evidence of how it affects decision making in the field."


A new report from RUSI and the University of Winchester examines police use of machine learning across the UK and concludes that there is little transparency around the systems being used and little research into how effective they are. It proposes new policies setting out how police should use these systems, while making sure privacy and human rights are protected.


Police use of facial recognition systems has had the most publicity after producing questionable results. Both London's Metropolitan Police and South Wales Police have been trialling live facial recognition systems that can scan crowds and identify faces matching a database of photos. Leicestershire Police has also trialled facial recognition. The first real-world trial of the Welsh force's system produced 2,470 potential matches, of which 2,279 were incorrect – a false positive rate of around 92 per cent. The rate has since improved.
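A minimal Python sketch of the arithmetic behind that 92 per cent figure, using only the two numbers reported from the trial:

# False positive rate implied by South Wales Police's first live trial,
# calculated from the two figures reported above.
potential_matches = 2470   # faces the system flagged as possible matches
incorrect_matches = 2279   # flags later judged to be wrong

false_positive_rate = incorrect_matches / potential_matches
print(f"False positive rate: {false_positive_rate:.0%}")  # roughly 92 per cent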


In London, the Met used its facial recognition system at the Notting Hill Carnival in both 2016 and 2017 (the deployment was dropped in 2018). The technology failed to pick out any suspects in the first year and was wrong 98 per cent of the time in the second.


"There didn't seem to be a very clear strategy to how this trial would be conducted and how it would be evaluated," Babuta says of the Met Police's use of the system. The force is continuing to use its facial recognition test in trials and says it will publish an evaluation in the future. The Greater London Authority, which exists to hold London Mayor Sadiq Khan to account, has written to the mayor criticising a lack of "legislative framework and proper regulation" around the technologies.


Despite criticism, facial recognition has had successes. Officers from South Wales Police have tweeted about arrests made using their technology. "No matter where we deploy, Automated Facial Recognition will continue to catch the bad guys," one tweet from December 2017 says.


So what should be done about the expansion of police use of machine learning and AI? The new RUSI report proposes that the Home Office, which oversees policing, should develop codes of practice covering how forces experiment with AI. It also suggests that police bodies, such as the College of Policing, should create guidance on informing people affected by AI-led decision making that such systems have been used.


Importantly, the report suggests that machine learning algorithms should always be overseen by humans. "ML algorithms must not be initialised, implemented and then left alone to process data," the report recommends. "The ML algorithm will require constant 'attention and vigilance' to ensure that the predictive assistance provided is as accurate and unbiased as possible, and that any irregularities are addressed as soon as they arise."


Facial recognition is just the tip of the iceberg. For more than five years, Kent Police have used an algorithmic system called PredPol to predict where crimes may take place. The system uses previous crime data – type, time and location – to anticipate where offences may occur. Initial trials claimed a six per cent reduction in street violence, but crime has risen in recent years.
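PredPol's actual model is proprietary, so the Python sketch below only illustrates the general idea described above – scoring locations using historical incident records – with every name invented for the example. Real predictive-policing systems weight recency and model near-repeat patterns rather than simply counting past incidents.

from collections import Counter
from typing import NamedTuple

class Incident(NamedTuple):
    crime_type: str            # e.g. "burglary"
    hour: int                  # hour of day, 0-23
    cell: tuple                # (grid_x, grid_y) map cell where the incident occurred

def hotspot_scores(incidents, top_n=5):
    """Rank map cells by how many past incidents they contain (a naive frequency count)."""
    counts = Counter(incident.cell for incident in incidents)
    return counts.most_common(top_n)

# Illustrative usage with made-up records
history = [
    Incident("burglary", 22, (4, 7)),
    Incident("theft", 14, (4, 7)),
    Incident("burglary", 23, (2, 1)),
]
print(hotspot_scores(history))  # [((4, 7), 2), ((2, 1), 1)]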


And police in Durham altered one of their algorithms, which helps with decision making, to find out whether it was biased against poor people. Norfolk Police are also trialling a system that analyses burglary data and advises officers on whether individual cases should be investigated further. In both the Norfolk and Durham cases, the machine learning systems are designed to give officers information – the technology does not make final decisions.
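To illustrate the advisory-only design described above, here is a minimal, hypothetical Python sketch of how a decision-support record might keep a model's output separate from the officer's final call; none of the names reflect the actual Durham or Norfolk systems.

from dataclasses import dataclass
from typing import Optional

@dataclass
class CaseAssessment:
    """Hypothetical decision-support record: the model advises, a human decides."""
    case_id: str
    model_score: float                      # e.g. predicted likelihood the case is solvable, 0-1
    model_recommendation: str               # e.g. "investigate further" or "no further action"
    officer_decision: Optional[str] = None  # the final call is recorded separately by a human

def record_decision(assessment: CaseAssessment, decision: str) -> CaseAssessment:
    # The algorithm's output is kept for audit, but only the officer's
    # recorded decision determines what happens to the case.
    assessment.officer_decision = decision
    return assessment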


"There is a significant lack of research looking at how the use of an algorithm actually affects a police officer's decision making in the field," Babuta says. He questions whether police officers will "blindly" follow the output of an algorithm as they may believe it will be right. Analysis of an algorithm used in the US for predicting crimes has been found to be no better at predicting crimes than random people, a Pro Publica investigation found the algorithm could also be biased against black people.


"There is no clear policy framework at the moment governing how the police should be implementing these tools or even trialling them," Babuta says. "While these tools are being used for quite limited purposes currently, there's potential for the technology to do a lot more. The lack of a clear regulatory and governance framework is worrying."

