Facebook to curb private groups spreading hate and misinformation

Facebook prepares to remove health groups and those tied to violence from recommendations in crackdown on misinformation and hate speech

  • Facebook is making efforts to combat misinformation and violence on the site 
  • The firm is now removing health groups from the platform’s recommendations 
  • It says users should get health information from authoritative sources
  • Groups tied to violence are also being removed and their content will soon be reduced in news feed 
  • The move comes as Facebook has been unable to monitor individual groups 

Facebook is cracking down on private groups where hate or misinformation is shared among members.

The firm announced it will no longer show health groups in its recommendations, saying it was crucial that people get health information from ‘authoritative sources.’

Misleading health content has racked up an estimated 3.8 billion views on Facebook over the past year, peaking during the coronavirus pandemic, advocacy group Avaaz said in a report. 

Along with health groups, those found tied to violence will also be removed from searches and recommendations, and Facebook is set to reduce their content in news feed in the near future.

The move comes as the social media firm faces pressure over misinformation on its platform but has been unable to properly monitor individual groups.

‘People turn to Facebook Groups to connect with others who share their interests, but even if they decide to make a group private, they have to play by the same rules as everyone else,’ Facebook vice president of engineering Tom Alison said in a blog post.

Alison said Facebook’s community standards ‘apply to public and private groups, and our proactive detection tools work across both.’

Facebook uses artificial intelligence to automatically scan posts, even in private groups, taking down pages that repeatedly break its rules or that are set up in violation of the social network’s standards.

‘Over the last year, we removed about 1.5 million pieces of content in groups for violating our policies on organized hate, 91% of which we found proactively,’ Alison wrote.
‘We also removed about 12 million pieces of content in groups for violating our policies on hate speech, 87% of which we found proactively.’

Facebook last month said it had removed hundreds of groups tied to the far-right QAnon conspiracy theory and imposed restrictions on nearly 2,000 more as part of a crackdown on groups stoking violence.

Under rules tightened on Thursday, administrators or moderators of groups taken down for rule-breaking will be temporarily blocked from forming new groups on Facebook.

People flagged for violating the social network’s standards in groups will need moderator or administrator approval for any new posts for 30 days, and if approved posts continue to break the rules, the entire group will be removed, according to Alison.

Facebook will also start ‘archiving’ groups that have been without administrators for a long time, meaning they still exist but don’t appear in searches and members can’t post anything.

And, to promote getting information from authoritative sources, Facebook will no longer show health-themed groups in recommendation results.

Facebook has been struggling with hoaxes and misinformation about the coronavirus pandemic, seeking to give users well-sourced information about the health emergency.

Avaaz, a non-profit, revealed in August that 84 percent of Facebook posts containing false medical claims and advice had gone unlabeled, leaving users without warnings about the misleading content.

Not only did Facebook’s algorithm misidentify the posts, but the content generated an estimated 3.8 billion views across five countries in the last year.

The report notes that total estimated views peaked in April, when the coronavirus began to take hold around the world and Facebook executives, including CEO Mark Zuckerberg, promised to combat misinformation.