The emergence of predictive policing – the national issue requiring a national response

  • 9 September 2020
  • General, Technology
  • Tom McNeil

Serious crime is, sadly, a reality of the world and the UK is no special case in this regard. Organised criminality, modern slavery and cybercrime are major challenges of the day. Many of the longer-term solutions to crime have been recognised for generations: tackling root causes such as abusive childhoods, exclusion from high-quality education and employment opportunities, and a range of other triggers, including unsupported mental health problems, substance misuse and poverty. However, as things stand today, we still need police forces that are equipped to deal with the crime taking place right now; the crime society has failed to prevent.

Being sufficiently equipped will undoubtedly involve technological developments and, in some cases, artificial intelligence (AI). If any such innovation is to meet the high bar of being just and deserving of public trust, ethics must be at its heart; in design, in implementation and throughout the technology’s life.

In recent months, the second largest force in England, West Midlands Police (WMP), has courageously presented to the world much of its work on AI, despite the attention that was likely to follow. The driver of this heightened transparency was the requirement to publish and disseminate all project proposals, minutes and advice from the data ethics committee established by the West Midlands Police & Crime Commissioner (PCC). In opening up about this work, the force has offered sight of its controversial proposals for a youth violence prediction tool. It has publicly shared its efforts to use AI to link intelligence reports on modern slavery, helping to build a clearer picture of a complex crime that might otherwise go undetected because the underlying intelligence reports appear disparate. Plans to try to predict which individuals might engage in serious reoffending have also been shown to the world. As a result, WMP has undergone a second wave of public scrutiny through repeated appearances in the national media and press.

It is quite right that the force has laid bare these developments, shared the ethics committee’s robust feedback and opened itself up to public critique; it has a duty to be transparent. Nonetheless, its deep commitment to listening, reflecting and having all of its AI proposals publicly dissected by a committee of attentive data, human rights and ethics experts is commendable.

Given its genuine efforts to embrace technology and modernise as a force, WMP has also found itself leading national AI projects on behalf of the Home Office and other large police forces. A happy consequence of WMP’s general proactivity in exploring innovation is that certain national projects have also gone through a meaningful ethics process within the West Midlands. The reality is that this ethics oversight has sometimes raised inconvenient concerns that require the data scientists to go back to the drawing board, as has recently been reported in Wired and The Times.

Key to the committee’s deliberations is the principle that it is not enough for AI technology to be accurate, tested and free of racial or other bias; its use must also be proportionate and evaluated against the real-world social context. The Lammy Review, among many other influential and expert commentators, has highlighted serious concerns around institutional racism in the criminal justice system, which sees certain groups and demographics disproportionately handed more punitive responses to their crimes. Many young people, for instance, engage in low-level offending because they are struggling to access real opportunity; they are desperate. An AI tool, whether facial recognition or otherwise, would not benefit society if it were used to more efficiently round up vulnerable young people and enter them into a system with known flaws. At the same time, we should not turn our noses up at accurate technology targeted at those trafficking people or illegally transporting guns into the country.

Many of the civil society activists I have spoken with are cautiously delighted to see this level of transparency. They are equally pleased to see that the committee has ‘teeth’, addressing a commonly cited concern about weak committees. One major question, however, remains on the tip of their tongues: what about the other 42 police forces in England and Wales?

The recent Court of Appeal decision in R (Bridges) v Chief Constable of South Wales Police has shown the need for clearer regulation and statutory safeguards around the use of facial recognition technology on our streets. This is probably the most prominent contemporary AI debate, and there is good reason to believe legal reform may be necessary. Until then, though, who is overseeing and publicly assessing the vast array of AI products and predictive tools being touted to police forces across the country?

We at the PCC’s Office argue that a new National Ethics Institute is required. There are multiple ways to do this, but based on the principles of the West Midlands data ethics committee, we believe now is the time for a visible, empowered and authentic new body. Such a body can help police forces navigate the new technological era and the difficult societal debates that accompany the development and implementation of these tools. Placing transparency, human rights and diversity of perspective at the heart of policing AI serves simultaneously to improve the technology, ensure its goals are consistent with a fair society and build public trust and legitimacy. For these reasons, we are kick-starting the conversation about what such an Institute would look like and how it would properly engage with civil society and the public. Equally, we enter this arena looking to offer police agencies assurance that an Institute of this kind must also understand the significant and constantly shifting challenges the police face in keeping the public safe.

Tom McNeil is a lawyer and the Strategic Adviser to the West Midlands Police & Crime Commissioner, as well as sitting on his Strategic Board. He leads on a range of policy areas, including the ethics oversight of policing artificial intelligence. Tom also sits on a range of boards, including the National Union of Students and various public sector strategic boards with a particular focus on supporting vulnerable children. With prior degrees from Durham and Cambridge, he is also currently a PhD researcher in social policy at the University of Birmingham.