WhatsApp is doubling down on its efforts to fight misinformation on the app.
The Facebook-owned messaging service is testing a new feature that helps users verify whether or not an image is real.
The ‘search image’ tool would let users take an image found on WhatsApp and run it through Google.
Part of the release notes for WhatsApp’s latest update, version 2.19.73, mentioned a ‘search by image’ function, according to WABetaInfo, which first spotted the tool.
After clicking the ‘search by image’ button in the app, Google will tell users if a ‘similar or equal’ image exists on the web.
It’s not yet clear when the feature will be available for all users.
The move marks the latest example of WhatsApp taking steps to curb the spread of fake news on its platform.
Over the past several months, WhatsApp has banned bot and spam accounts, with its software capable of wiping some 2 million accounts each month.
The company said last month that it’s often able to remove fake accounts before they even make it onto the platform.
Additionally, in perhaps its most sweeping move yet, WhatsApp moved to limit the number of times users can share messages with others.
The new policy, revealed in January, limits message forwarding to up to five users.
Before the new policy, users could forward messages up to 20 times.
Last July, the Facebook-owned messaging app began limiting message forwarding to five chats for users in India, but it has since expanded the policy globally.
The company said the move was initiated to fight ‘misinformation and rumors.’
WhatsApp, which has around 1.5 billion users, has been trying to find ways to stop misuse of the app. The effort follows global concern that the platform was being used to spread fake news, manipulated photos, out-of-context videos and audio hoaxes, with no way to monitor their origin or full reach.
Forwarding limits were put into place in India after the spread of rumors on social media led to killings and lynching attempts.
A horrific spate of lynchings led WhatsApp to announce limits on the forwarding of messages by its 200 million Indian users in July 2018.
More than 20 people accused of child kidnapping and other crimes in viral messages spread via the app were butchered by mobs in the preceding two months.
People in India forward more messages, photos and videos on WhatsApp than in any other country in the world, the company says.
One incident saw a 27-year-old software engineer beaten to death by a crowd of more than 2,000 people in the southern state of Karnataka after he and his friends offered chocolates to local children.
Police arrested more than 48 people they said were part of a mob that killed the tech industry worker in southern India over suspicions that he and a group of friends were child abductors.
Fatal attacks have also been carried out on Muslims by ‘cow protection’ groups roaming highways and inspecting livestock trucks. Cows are sacred to the majority Hindu community.
Indian authorities launched awareness campaigns and patrols and imposed internet blackouts in some areas, but the impact was limited.
One official ‘rumour buster’ was himself beaten to death in the north-east in June.