Since 1996, Section 230 of the Communications Decency Act has been a key legal shield for the tech industry. It protects any “interactive computer service” from liability for the content users post on its platform. In other words, companies like Facebook and YouTube can’t be sued because of their users’ bad behavior.
Recently, politicians have blamed the law for enabling some of the worst activity on the internet. But tech insiders say the law is misunderstood—and a vital component of how the internet operates.
Why Section 230 exists
When Congress first established Section 230, its goal was not for online platforms to be neutral outlets where anything goes. Rather, lawmakers wanted the platforms to be able to make the judgments needed to moderate content—without risking liability. That’s according to US Naval Academy cybersecurity professor Jeff Kosseff, who recently wrote the book The Twenty-Six Words That Created the Internet, an in-depth look at the history of Section 230.
Prior to the law, distributors—like a newsstand or bookstore—were liable for what they sold only if they knew, or had reason to know, that the material was illegal. Companies that actually produced the material—book publishers or newspapers, for example—were liable because they controlled the content they created.
In the early days of the internet, legal challenges against two service providers, CompuServe and Prodigy, showed that this distinction needed rethinking when applied to the internet.
CompuServe had decided not to regulate what its users posted, while Prodigy employed moderators to validate content and clean up foul language. Both companies were eventually sued because of content their users posted. CompuServe was found not liable because it was solely a distributor, having no say over what its users posted. Prodigy, however, did not receive the same immunity. Because it actively moderated its content, the court decided it had taken a more editorial role, making its site more like the letters-to-the-editor page of a newspaper.
The precedent the suits set at the time was that online platforms could reduce their liability if they did not moderate users’ content. Section 230 was meant to change that.
An addendum to the law known as the “Good Samaritan” clause allows platforms to remove content they find objectionable, even if that speech is legal under the First Amendment. This leaves the policing of content to the discretion of the sites themselves, while still protecting them from liability.
Politicians on both sides want it amended
More than two decades later, the law has been tied to some of the worst corners of the internet, including hateful speech, violent videos, Russian trolls, and revenge porn. Public pressure is increasing to limit the broad leeway online platforms currently have under Section 230, and politicians on both sides of the aisle want to take up the charge.
Democrats say Section 230 has allowed platforms like Facebook to become a place where foreign governments disseminate propaganda without consequence. Republicans argue that, because the law allows companies to judge what content violates their terms of service, they are using it to censor conservative viewpoints.
Several Democratic presidential candidates have stepped into the fray. Entrepreneur and 2020 hopeful Andrew Yang said he would seek to amend the section “to reflect the reality of the 21st century — that large tech companies are using tools to act as publishers without any of the responsibility,” according to a blog post he wrote in November.
“[T]here needs to be some accountability,” Yang said, particularly given the role tech companies play with recommendation algorithms that spread polarizing and false content.
Former Vice President Joe Biden last month also referenced Section 230. “I, for one, think we should be considering taking away the exemption that they cannot be sued for knowingly engaging … in promoting something that’s not true,” Biden said in a CNN town hall.
Sen. Josh Hawley, a Republican from Missouri, in June introduced a bill that would change the way online platforms are protected from liability under Section 230.
Under the proposed law, a social media company would be required to obtain a government certification from the Federal Trade Commission verifying “that it does not moderate information on its platform in a manner that is biased against a political party, candidate, or viewpoint.”
Only sites found by the FTC to moderate content in a “politically neutral” manner would be able to keep their liability protection.
“With Section 230, tech companies get a sweetheart deal that no other industry enjoys: complete exemption from traditional publisher liability in exchange for providing a forum free of political censorship,” Hawley said in a June press release. “Unfortunately, and unsurprisingly, big tech has failed to hold up its end of the bargain.”
Opponents have questioned the constitutionality of the bill, which is stalled for now.
Where YouTube’s CEO stands
YouTube’s CEO Susan Wojcicki highlighted the importance of Section 230 in shaping today’s online experience.
“It’s basically enabled the internet as we know it,” Wojcicki said. “It’s enabled us to have people upload content, not have every single comment be reviewed, not every single video be reviewed. And so, it has enabled new types of communication, new types of community, new types of content that we just wouldn’t have had beforehand.”
The freedom of that open platform enables users to upload 500 hours of video to YouTube every minute, according to company estimates. But with all that content, YouTube’s system for monitoring uploads has come under scrutiny.
During the 2016 presidential campaign, YouTube failed to detect more than 1,100 videos that Russian trolls posted, almost all intended to influence African Americans. When a white supremacist in March killed dozens of Muslims in Christchurch, New Zealand, he live-streamed the video on Facebook. That video was then uploaded to YouTube tens of thousands of times.
Wojcicki said the Good Samaritan clause of Section 230 enables her employees to remove hateful content like the Christchurch shooter’s video from the site. She also said that, if Congress were to pass further laws limiting what content YouTube can host, the company would comply.
“Honestly, if there [were] laws that said, ‘This is the type of content you can’t have,’ then we remove it…” Wojcicki said. “We are making a decision to be responsible because we think it’s important for our society right now. And we’re allowed to do that because of Section 230. And so, the government is free to say, ‘Hey, this is how you should enforce hate. This is how you should enforce harassment.’ We would follow those laws. But we don’t see those laws. Those laws aren’t out there right now.”
The video above was produced by Will Croxton and Brit McCandless Farmer. It was edited by Will Croxton.