The dirt on predictive policing
Law enforcement is increasingly using algorithmic predictive policing to forecast criminal activity. However, such systems often depend on data produced via flawed, racially fraught and sometimes unlawful practices (aka ‘dirty policing’), according to researchers at the AI Now Institute.
This dirty policing shapes both the environment and the methodology by which data is created, leading to inaccuracies, skews and forms of systemic bias embedded in the data.
“Predictive policing systems informed by such data cannot escape the legacy of unlawful or biased policing practices that they are built on,” the paper’s authors say.
The institute points to Chicago, where dirty data was ingested directly into the city’s predictive system, as well as to New Orleans, where extensive evidence of dirty policing practices “suggests an extremely high risk that dirty data was, or will be, used in any predictive policing application”.
The authors add that in “any jurisdiction where police have been found to engage in such practices, the use of predictive policing in any context must be treated with scepticism and mechanisms for the public to examine and reject such systems are imperative.”
In the UK, human rights pressure group Liberty is asking police forces to abandon tests predicting where crimes are likely to happen and whether individuals are likely to re-offend. It believes at least 14 forces are testing or developing predictive policing systems.