Artificial intelligence used by UK police to predict crimes amplifies human bias

Artificial intelligence technology used by police forces in the UK to predict future crimes replicates – and in some cases amplifies – human prejudices, according to a new report.

While "predictive policing" tools have been used in the UK since at least 2004, advances in machine learning and AI have enabled the development of more sophisticated systems.

These are now used for a wide range of functions including facial recognition and video analysis, mobile phone data extraction, social media intelligence analysis, predictive crime mapping and individual risk assessment.

However, the report by the Royal United Services Institute (RUSI) warns that human biases are being built into these machine learning algorithms, resulting in people being unfairly discriminated against due to their race, sexuality and age.

One police officer who was interviewed for the report commented: "Young black men are more likely to be stopped and searched than young white men, and that's purely down to human bias.

"That human bias is then introduced into the datasets, and bias is then generated in the outcomes of the application of those datasets."

In addition to these inherent biases, the report points out that individuals from disadvantaged sociodemographic backgrounds are likely to engage with public services more frequently.

As a result, police often have access to more data relating to these individuals, which "may in turn lead to them being calculated as posing a greater risk".

Matters could worsen over time, another officer said, when software is used to predict future crime hotspots.

"We pile loads of resources into a certain area and it becomes a self-fulfilling prophecy, purely because there's more policing going into that area, not necessarily because of discrimination on the part of officers," the officer said.

The report also warns that police forces could become over-reliant on AI to predict future crimes and discount other relevant information.

"Officers often disagree with the algorithm. I'd expect and welcome that challenge. The point where you don’t get that challenge, that's when people are putting that professional judgement aside," one officer said.

Bias may also occur in the way that police officers interpret an algorithm's prediction or insight.

"Professional judgement might just be another word for bias,"another police officer explained, adding: "Whenever we have to decide an outcome there's always an opportunity for bias."

RUSI's Alexander Babuta told BBC News that there are ways to scan and analyse data for bias and then eliminate it – and some police forces are already exploring these opportunities.

However, he added: "We need clearer processes to ensure that those safeguards are applied consistently."

The report also states that lessons can be learned from recent trials of live facial recognition, particularly concerning the need to demonstrate an explicit legal basis for the use of new technology.

