Section 230 of the Communications Decency Act provides liability protection to websites and apps (interactive computer services) for third-party content shared on their services, and effectively allows, but does not require, these services to moderate user-generated content posted to their sites.
There are currently several legislative efforts to reform Section 230, likely spurred by the proliferation of online hate speech; concerns across the political spectrum that social media platforms have failed to manage disinformation or provide transparency about political advertising; and allegations by the Trump administration (and other Republicans) that their content has been unfairly targeted or removed by social media platforms.
Defenders of Section 230’s broad protections say the law has been responsible for the internet’s economic success and is necessary to allow the online economy to continue to grow. The Department of Justice (DOJ) recently released its legislative proposal to amend Section 230, and a number of bills that would revise Section 230 are pending. Both presidential candidates have also indicated an interest in modifying the law.
Key Takeaways
- The DOJ released its legislative proposal to amend Section 230, which would significantly alter the liability shields in 230(c)(1) and 230(c)(2). The proposal would eliminate a platform’s ability to remove content that the platform believes is merely “otherwise objectionable” but does not meet another criterion for removal, and would impose new “good faith” requirements for liability protection. The proposal also would expose platforms to civil liability from claims brought by federal agencies and would require platforms to implement content moderation policies and update their terms of service.
- At the direction of the president, the National Telecommunications and Information Administration filed with the Federal Communications Commission (FCC) a petition for rule-making asking the FCC to propose regulations that would clarify the meanings of “objectionable content” and “good faith,” and to mandate certain transparency disclosures.
- Section 230 was also the topic of a recent White House forum and a House Energy & Commerce Committee hearing. At least six bills seeking to reform Section 230 (specifically, the liability shield for platforms) are pending in Congress.
DOJ Proposal
At issue are the liability protections granted by Section 230(c). That section grants interactive computer services two types of immunity—Section 230(c)(1) immunity for third-party content posted to their sites and Section 230(c)(2) immunity for choosing to take down, block or otherwise restrict user-generated content (provided the content is taken down in “good faith”). The DOJ’s proposal seeks to limit both types of liability shields.
The DOJ’s proposal would eliminate a platform’s ability to remove content that the platform believes is “otherwise objectionable” and instead would require a platform to have an “objectively reasonable belief” that the content it removes is “obscene, lewd, lascivious, filthy, excessively violent, promoting terrorism or violent extremism, harassing, promoting self-harm, or unlawful.” The DOJ argues that its edits are meant to clarify a vague term (“objectionable”), but the proposal arguably takes the flexibility to exercise editorial control and judgment away from platforms and could lead to protracted legal battles over what the “objectively reasonable belief” standard means.
The DOJ is also seeking to clarify the meaning of good faith by adding a statutory definition that would limit immunity for most content moderation decisions. To be viewed as moderating content “in good faith,” platforms would need to have plainly drafted terms of service, restrict content pursuant to those terms (and not block content on “pretextual grounds”), give the content provider timely notice describing the “reasonable factual basis” as to why the content was restricted, and give the provider a “meaningful opportunity to respond.”
Platforms would be shielded from liability where a content moderation decision was made in accordance with the newly defined good faith requirements. However, the good faith liability protection would not apply in the following circumstances:
- Bad Samaritan. Platforms would not be able to seek the law’s protections when they have “acted purposefully with the conscious object to promote, solicit, or facilitate material or activity ... that the service provider knew or had reason to believe would violate Federal criminal law.”
- Federal criminal material. Platforms would lose liability protection from state criminal prosecution or state or federal civil actions if they had “actual notice” that criminal material had been posted to their site and they failed to remove, report and preserve evidence of such material.
- Judicial decisions. If a court rules, as part of a criminal prosecution or civil action, that content posted to a platform is “defamatory under state law or unlawful in any respect,” the platform would have to remove that content or forfeit its liability protection.
The DOJ’s proposal would also:
- Expose platforms to civil liability from claims brought by federal agencies. This change would make it easier for administrative agencies, such as the Federal Trade Commission (FTC), to bring enforcement actions against platforms that choose to take down, block or restrict content online.
- Require platforms to implement content moderation policies and update their terms of service. To be able to remove content in good faith, a platform would need plainly drafted terms of service that outline the site’s content moderation practices.
The DOJ provided a cover letter explaining its proposed changes, a redline showing its proposed language and a section-by-section explanation of the proposal.
Other Efforts to Revise Section 230
Executive
At the direction of the president, the National Telecommunications and Information Administration in July filed with the FCC a petition for rule-making asking the FCC to propose regulations that would clarify the meanings of “objectionable content” and “good faith” as well as mandate certain transparency disclosures.
Legislative
Section 230 was also the topic of a recent White House forum and a House Energy & Commerce Committee hearing on “Mainstreaming Extremism: Social Media’s Role in Radicalizing America.”
While any changes to Section 230 are unlikely to pass during this legislative session, proposed changes to the law’s liability shield have been receiving considerable attention on the Hill, and Congress could take up the issue next year.
The following bills are aimed at reforming Section 230:
- See Something, Say Something Online Act. Sen. Joe Manchin (D-W.Va.) introduced a long-awaited bill to revamp Section 230, with Sen. John Cornyn (R-Texas) as a co-sponsor. The bill focuses on reducing major crimes (such as the illicit sale of opioids) online and would require platforms that qualify for Section 230 immunity to report to the DOJ “suspicious activity” by their users.
- Online Content Policy Modernization Act. Introduced in September by Sen. Lindsey Graham (R-S.C.), the bill would modify the scope of protection from civil liability for “good Samaritan” blocking and screening of offensive material under Section 230. Notably, the bill’s Section 230 provisions are identical to those in the Online Freedom and Viewpoint Diversity Act.
- Online Freedom and Viewpoint Diversity Act. Introduced in early September by Sens. Roger Wicker (R-Miss.), Graham and Marsha Blackburn (R-Tenn.), the bill would narrow Section 230’s liability shield, conditioning protection on an “objectively reasonable” standard. Similar to the DOJ’s proposal, the bill proposes removing the concept of “otherwise objectionable” material and instead suggests adding categories of content, including material that promotes “terrorism” or “self-harm.”
- Behavioral Advertising Decisions Are Downgrading Services (BAD ADS) Act. Introduced in July by Sen. Josh Hawley (R-Mo.), the bill would remove Section 230 immunity from companies that display “manipulative” behavioral ads or provide data to be used for them.
- Senate Platform Accountability and Consumer Transparency (PACT) Act. Introduced by Sens. John Thune (R-S.D.) and Brian Schatz (D-Hawaii), the bipartisan legislation would narrow Section 230’s liability shield by conditioning it on new transparency and content moderation requirements, including the removal of content that a court has determined to be illegal.
- Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act. Introduced by Sens. Graham, Hawley, Dianne Feinstein (D-Calif.) and Richard Blumenthal (D-Conn.), this bill would create a new government commission composed of administration officials and outside experts, who would set “best practices” for removing child sexual exploitation and abuse material online. Notably, compliance with the best practices (which would include a requirement to search user-generated content and turn over a wide range of “abusive material” to the government upon demand) would be nominally voluntary, but a site that failed to comply could lose some of the protections provided under Section 230.
We will continue to monitor developments relating to Section 230 of the Communications Decency Act.