Facebook Is Rating Users Who Flag News Stories as False

Facebook is now assigning its users reputation scores, rating their trustworthiness on a scale of zero to one based on how they flag news stories as false.

This might seem a bit creepy, but the company considers the system a valuable tool for tackling misinformation.

That is according to The Washington Post, which reports that the rating system was developed over the past year to aid the fight against fake news and to identify malicious accounts.

Facebook has been relying on individual users to help identify bad content for some time now. But it has faced problems with people flagging posts as false simply because they disagree with or dislike them.

“It’s not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher,” Tessa Lyons, the product manager in charge of fighting misinformation, said in an interview.

“I like to make the joke that, if people only reported things that were false, this job would be so easy! People often report things that they just disagree with.”

Lyons also noted that a user’s trustworthiness score isn’t meant as an absolute barometer of credibility, but rather as one measurement among thousands of behavioral signals the company now takes into account as it tries to minimize risk.

“One of the signals we use is how people interact with articles,” Lyons explained in a follow-up email. “For example, if someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person’s future false-news feedback more than someone who indiscriminately provides false-news feedback on lots of articles, including ones that end up being rated as true.”
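Taken at face value, Lyons’s description suggests a simple weighting scheme: a user whose past flags were later confirmed by fact-checkers counts for more than one who flags indiscriminately. The sketch below is a hypothetical illustration of that idea only; the function name, the smoothing, and the example numbers are assumptions made for clarity, not anything Facebook has disclosed.

```python
# Hypothetical illustration only: one way a platform *could* weight a user's
# false-news reports by how often their past reports matched fact-checker verdicts.
# The names and formula here are assumptions, not Facebook's actual system.

def report_weight(confirmed_reports: int, total_reports: int, prior: float = 0.5) -> float:
    """Return a 0-to-1 weight for a user's future false-news reports.

    confirmed_reports: past reports later confirmed false by a fact-checker.
    total_reports: all past false-news reports the user submitted.
    prior: starting weight for users with no reporting history.
    """
    if total_reports == 0:
        return prior
    # Laplace-smoothed accuracy so one lucky (or unlucky) report
    # doesn't swing the weight all the way to 0 or 1.
    return (confirmed_reports + 1) / (total_reports + 2)


if __name__ == "__main__":
    careful_user = report_weight(confirmed_reports=9, total_reports=10)
    indiscriminate_user = report_weight(confirmed_reports=2, total_reports=40)
    print(f"careful user weight:        {careful_user:.2f}")        # ~0.83
    print(f"indiscriminate user weight: {indiscriminate_user:.2f}") # ~0.07
```

Under this kind of scheme, a flag from the careful reporter would carry roughly ten times the weight of one from the indiscriminate reporter, which matches the behavior Lyons describes in broad strokes.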

For now, it isn’t known what other criteria Facebook takes into consideration when assigning users a score. It’s also not clear whether all users have a score, or how the scores are used.

But the company has since issued a response to the Post’s report, clarifying its position and stating that the process was developed simply to protect against people “indiscriminately flagging news as fake” and trying to game the system.

“The idea that we have a centralized ‘reputation’ score for people that use Facebook is just plain wrong and the headline in the Washington Post is misleading. What we’re actually doing: We developed a process to protect against people indiscriminately flagging news as fake and attempting to game the system,” a Facebook spokesperson wrote in an email to Gizmodo.

“The reason we do this is to make sure that our fight against misinformation is as effective as possible.”
