Chatbot designed to deter people from viewing child sexual abuse online by ‘engaging them in friendly conversation’ is under development by the IWF
- The popup chatbot will detect if somebody is about to view child abuse images
- It will then engage them in friendly and supportive conversation before they do
- The bot will be able to refer people to the Lucy Faithfull Foundation for support
- Experts hope this will help prevent people from engaging in dangerous activity
A chatbot designed to spot when people try to view child abuse images and ‘try to deter them’ is being developed by the Internet Watch Foundation (IWF).
The automated feature will pop up to talk users out of accessing content before they actually commit a criminal offence, engaging them in a friendly conversation.
Those it engages in a ‘supportive conversation’ will then be referred to the Lucy Faithfull Foundation for help to change and control their behaviour.
The IWF, which is responsible for finding and removing images of child sexual abuse online, saw a 50 per cent increase in reports during the coronavirus lockdown.
The reThink Chatbot is expected to be fully operational and rolled out by the end of 2022 as part of renewed efforts to engage people before they commit a criminal act.
Exact details of how the chatbot will work, or how it will reach the machines of people looking for abuse images, have not been revealed by the team, as it is still in development.
The IWF said the move will allow it to take a more proactive approach to the growing problem of people searching for child sexual abuse images and videos online.
‘This chatbot really will be a remarkable tool in helping us tackle the growing problem of online child sexual abuse material,’ said Susie Hargreaves, IWF CEO.
‘It has the potential to be a game-changing way to intervene on people who may be about to set off on a dangerous path online.
‘We remove millions of images and videos of children suffering the worst kinds of abuse every year, but we know this is a battle that needs to be fought on two fronts.’
‘If we can tackle the demand for this material, it could stop some of these videos from being made in the first place,’ explained Hargreaves, who said doing so could mean children are ‘spared horrific sexual abuse, rape and torture’.
It comes as the National Crime Agency (NCA) warned earlier this year that it believes there are a minimum of 300,000 individuals in the UK posing a sexual threat to children.
‘The NCA is pleased to be part of an advisory group for this collaborative project,’ said Damian Barrow, senior manager of the dark web unit at the NCA.
‘The NCA welcomes interventions of this type to stop people from moving from risky behaviour to illegal behaviour.’
Joe Andaya, Technical Projects Officer at the IWF, said it could stop people making the leap from legal pornography to searching for illegal content.
‘Imagine a young man or woman searching for images on the internet, starting with pornography, but moving on to search for more extreme pornography,’ he said.
‘They may then start searching for images with young people in them, in sexual situations. The aim is for our chatbot to target these users at that moment, before they actually commit a criminal offence.’
In 2019, the IWF had a record year, with analysts processing 260,400 reports, up from 229,328 in 2018.
Of these reports, 132,700 showed images and/or videos of children being sexually abused. This compares to 105,047 reports of child sexual abuse material in 2018.
This growth has accelerated during the coronavirus crisis, according to the IWF.
There are growing fears that young British men could be driving the online trade in criminal images of child sexual abuse.