Twitter CEO Jack Dorsey is open to making major changes to the platform he co-founded 12 years ago in an effort to make what has become the world’s public square a safer place for healthy conversations.
The 41-year-old tech executive, who also heads the financial technology company Square, said Twitter is examining every aspect of its product to understand which behaviors the platform incentivizes and whether those behaviors foster healthy conversations.
“We believe that we can only serve the public conversation, we can only stand for freedom of expression if people feel safe to express themselves in the first place,” Dorsey said during Wired magazine’s 25th-anniversary event on Monday in San Francisco. “What is the effect [of the tools Twitter is building] on making it easier to weaponize freedom of expression against someone so that they don’t feel safe to express themselves or ultimately they’re silenced — which would go against the goal of serving the public conversation.”
Twitter has come under fire since the 2016 election for its role in facilitating Russia's misinformation campaign, along with the broader proliferation of hate speech and bots. To be fair, the microblogging platform has cracked down on fake and bot accounts, recently deleting them at a rate of 1 million per day.
However, avid users say the platform enabled abuse and harassment long before Russian trolls dominated the headlines. In 2014, writer and technologist Sydette Harry, along with several other prominent users of the platform, observed a sustained pattern of harassment, abuse and bot-like behavior that originated on 4chan and primarily targeted black women. That campaign unfolded around the same time as Gamergate, which subjected women in the gaming community to harassment, and as the leaking of nude photographs of celebrities.
The New America Foundation and the Anti-Defamation League have released a dashboard that tracks a sample of 1,000 Twitter accounts that "show hateful content directed against protected groups." According to the project's methodology, researchers identified 40 accounts posting hateful content and then algorithmically expanded that seed set into a larger dataset. However, as with a similar dashboard used to track Russian influence operations, the researchers have not published the list of accounts being tracked.
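The project has not published the details of how that expansion works, but as a rough illustration, seed-set expansion of this kind can be sketched as growing a small hand-labeled set by audience overlap. Everything in the sketch below is hypothetical: the account names, follower data, overlap threshold and scoring function are placeholders, not the ADL/New America method.

```python
# Hypothetical sketch: expand a small seed set of labeled accounts into a
# larger candidate list by measuring how much their audiences overlap.
# All names, data and the 0.3 threshold are illustrative assumptions.

def jaccard(a, b):
    """Overlap between two follower sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def expand_seed(seed_ids, followers_by_account, threshold=0.3, limit=1000):
    """Return up to `limit` accounts whose audiences overlap the seed set."""
    # Pool the followers of all seed accounts into one reference audience.
    seed_followers = set()
    for acct in seed_ids:
        seed_followers |= followers_by_account.get(acct, set())

    # Score every other account by audience overlap with the seed pool.
    scored = []
    for acct, followers in followers_by_account.items():
        if acct in seed_ids:
            continue
        score = jaccard(followers, seed_followers)
        if score >= threshold:
            scored.append((score, acct))

    # Keep the highest-scoring candidates up to the overall sample limit.
    scored.sort(reverse=True)
    return set(seed_ids) | {acct for _, acct in scored[: limit - len(seed_ids)]}

# Toy usage: one seed account and two candidates.
followers = {
    "seed_account": {"u1", "u2", "u3", "u4"},
    "candidate_a":  {"u2", "u3", "u4", "u5"},  # high overlap -> included
    "candidate_b":  {"u9"},                    # no overlap   -> excluded
}
print(expand_seed({"seed_account"}, followers))
```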
One way to flush out toxic behavior and fake accounts is artificial intelligence, which Dorsey said the company needs to use more.
“We have been behind in this regard [with AI]. Our enforcement system mainly relies on reports, which unfairly puts the burden on the victims of any abuse or harassment so we will be implementing more AI,” Dorsey said, before adding that he was concerned about the “explainability” of the algorithms that govern AI.
Dorsey, who said that users should “always be questioning Twitter,” also told Wired that he wants to combat filter bubbles and is open to a range of solutions in order to do so. That includes examining the Like button, how Followers are presented and potentially allowing users to follow topics and events — as opposed to accounts — to see a more diverse range of viewpoints.
“Right now we’re not giving them the tools to break down the filter bubble,” Dorsey explained. “How do we incentivize healthy behaviors and help people choose things that might contribute more to a global conversation?”