PART II: SPEECH ON THE INTERNET AND SECTION 230
There exists, understandably, widespread confusion about what exactly is protected or protectable speech when it comes to online platforms. Articles and debates on the nature of free speech on the internet—and whether the internet should be regulated—have existed since the birth of the internet as a medium for communications.73 As early as 1996, “cyberspace activist” John Perry Barlow poetically declared:
67 Wilkinson & Berry, supra note 44, at 224.
68 Id.
69 Id. at 225.
70 See Suciu, supra note 25; see also YouTube User Statistics 2022, GLOBAL MEDIA INSIGHT (Apr. 18, 2022), https://www.globalmediainsight.com/blog/youtube-users-statistics/ (indicating that YouTube has 2.6 billion unique users generating “billions of views” on the platform, and that YouTube is the “second-most trafficked website after Google”).
71 See Abby Ohlheiser, LGBT Creators Wonder Whether YouTube Really Supports Them or Just Pretends To During Pride Month, WASH. POST (June 6, 2019), https://www.washingtonpost.com/technology/2019/06/06/lgbt-creators-wonder-whether-youtube-really-supports-them-or-justpretends-during-pride-month/.
72 See generally Divino Group LLC et al. v. Google LLC et al., 2021 WL 51715 (N.D. Cal. 2021).
73 See, e.g., James J. Black, Free Speech & The Internet: The Inevitable Move Toward Government Regulation, 4 RICHMOND J. L. & TECH. 1 (1997) (suggesting that activity on the “Net” would fall under regulations according to where the speech/activity originated and discussing differences in free speech regulations according to geographic location); see also Helen Roberts, Research Paper 35 (1995-96): Can the Internet be Regulated?, AUSTL. PARLIAMENT HOUSE, https://www.aph.gov.au/about_parliament/parliamentary_departments/parliamentary_library/pubs/rp/rp9596/96rp35 (last visited Feb. 23, 2022).
Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.
We have no elected government, nor are we likely to have one, so I address you with no greater authority than that with which liberty itself always speaks. I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear. 74
Although Barlow—someone with “no technical expertise” but who held “a reputation as a prophet of new technology”75—and his sympathizers may have had sweeping ideals surrounding what they hoped was a cyberlibertarian future, those ideals have not truly come to pass. They have, however, been ingrained in the debate over whether speech on the internet is “free” and whether the internet is, or ought to be, considered separate from the “real world”76 and thus regulated or deregulated in a unique manner.
A. The Necessity of, and Unintended Consequences from, Private Moderation
1. “Free Speech” as a Moving Target
Internet speech regulation and moderation entered the purview of the Supreme Court early on, and the Court has a history of selectively eschewing regulation of internet-based speech. In 1997 in Reno v. ACLU, the Court ruled that restrictions on the “display” and “transmission” of what was deemed “indecent” communications online violated the First Amendment, lending credence to the idea that internet speech is truly free.77 However, five years later in 2002—as the internet, and access to it, was broadening exponentially—the Court grappled with definitions of protected expression and obscenity. In Ashcroft v. Free Speech
74 John Perry Barlow, A Declaration of the Independence of Cyberspace (Feb. 8, 1996), ELEC. FRONTIER FOUND.: JOHN PERRY BARLOW LIBR., https://www.eff.org/cyberspace-independence (last visited Apr. 28, 2022).
75 Michael Buozis, Making Common Sense of Cyberlibertarian Ideology: The Journalistic Consecration of John Perry Barlow, TAYLOR & FRANCIS ONLINE (July 7, 2021), https://www.tandfonline.com/doi/abs/10.1080/24701475.2021.1943994.
76 See Katharine Gelber & Susan J. Brison, Digital Dualism and the “Speech as Thought” Paradox, in FREE SPEECH IN THE DIGITAL AGE 12, 17 (Susan J. Brison & Katharine Gelber eds., 2019) (arguing that “[t]hose who claim a special sphere of speech online misconstrue the nature of speech itself and use unviable arguments for its protection” and that cyberspace should not be distinguished from the “real world”).
77 Reno v. ACLU, 521 U.S. 844 (1997); see also Robert Corn-Revere, Internet & First Amendment Overview, FREEDOM FORUM INST. (Nov. 20, 2002), https://www.freedomforuminstitute.org/first-amendment-center/topics/freedom-of-speech-2/internet-first-amendment/.
Coalition, the Court found that “virtual” child pornography not involving actual children was “protected expression” under the First Amendment.78 Later that year in Ashcroft v. ACLU, the Court held that the Child Online Protection Act’s (COPA)79 reliance on “community standards” to identify material harmful to minors did not by itself render the statute unconstitutional, but remanded to the lower court for consideration of COPA’s remaining First Amendment problems, among them what constitutes “obscenity” in the modern age.80 Since that time, the landscape, and very nature, of the internet has evolved in such a manner that regulation has become increasingly necessary. The First Amendment question has shifted from a conversation around obscenity and the protection of children to one of threatening (or “true threat”)81 language and the spread of misinformation.82 As authors at Bloomberg put it in June of 2021, “the debate is over how, not whether, to filter what’s said online.”83
2. Shifting Sands and the Social Media Debate
In the twenty years since Ashcroft v. ACLU, the makeup and content of the internet have become virtually unrecognizable compared to what was shared on earlier platforms. When Ashcroft was decided, accessing information online was a markedly slower task,84 and the percentage of people using the internet was far smaller. According to Pew Research Center, 82% of American adults were on the internet in 2015.85 In 2000, that percentage was just 50%,86 with a significant portion of those users between the ages of 18 and 29.87 The draw for younger adults was not to locate information or even share news; the internet in the early 2000s was primarily a source of entertainment and of limited connectivity with a
78 Ashcroft v. Free Speech Coalition, 535 U.S. 234 (2002); see also Corn-Revere, supra note 77.
79 Child Online Protection Act, 47 U.S.C. § 231 (1998); see also ACLU v. Gonzales, 478 F. Supp. 2d 775 (E.D. Pa. 2007) (enjoining enforcement of COPA on First and Fifth Amendment grounds); Mukasey v. ACLU, 555 U.S. 1137 (2009) (denying certiorari and thereby leaving the Gonzales decision in place).
80 Ashcroft v. ACLU, 535 U.S. 564 (2002); see also Corn-Revere, supra note 77.
81 A “true threat” in First Amendment jurisprudence is “a statement that is meant to frighten or intimidate one or more specified persons into believing that they will be seriously harmed by the speaker or someone acting at the speaker’s behest” and involves a “serious expression of an intent to commit an act of unlawful violence to a particular individual or group of individuals.” Kevin Francis O’Neill & David L. Hudson, Jr., True Threats, THE FIRST AMENDMENT ENCYCLOPEDIA, https://www.mtsu.edu/first-amendment/article/1025/true-threats (June 2017) (citing Virginia v. Black, 538 U.S. 343 (2003)).
82 See Megan R. Murphy, Comment, Context, Content, Intent: Social Media’s Role in True Threat Prosecutions, 168 U. PA. L. REV. 733 (2020).
83 Sarah Frier, Naomi Nix, & Sarah Kopit, Why Free Speech on the Internet Isn’t Free for All, BLOOMBERG TECH: QUICK TAKE (June 19, 2021), https://www.bloomberg.com/news/articles/2021-06-19/why-free-speech-on-the-internet-isn-t-free-for-all-quicktake.
84 In 2007, average internet access speed was 3.67 Mbps; in 2017 it was 18.75 Mbps. S. O’Dea, Average Internet Connection Speed in the United States from 2007-2017 (in Mbps), by Quarter, STATISTA (July 22, 2020).
85 Andrew Perrin & Maeve Duggan, Americans’ Internet Access: 2000-2015, PEW RESEARCH CTR. (June 26, 2015), https://www.pewresearch.org/internet/2015/06/26/americans-internet-access-2000-2015/.
86 Id.
87 Id.
pre-existing group of friends.88 A staggering 92% of that same age bracket were active users in 2015.89 Among senior citizens, a demographic most often targeted by misinformation and “fake news” efforts,90 the percentage of internet users spiked from 14% in 2000 to 58% in 2015,91 with the majority becoming active online after 2012.92
Why is this important? Simply put, the internet is being used less as a place of sporadic connectivity and more as an intrinsic part of people’s everyday lives. Society relies on the internet for news, communication, creative content, audio and video streaming, and much, much more—including use of, and access to, the phenomenon of “social media.”93 Social media is a development that evolved from a mere profile-uploading service in 1997 to the platforms we think of today:94 Facebook, Reddit, Twitter, Instagram, Pinterest, Snapchat, and TikTok, primarily.95 Although they are privately owned entities, the fact that they frequently host content that is available for widespread public consumption has led to broad confusion about the platforms’ rights to censor that content.96 YouTube is a privately owned content provider that hosts user-created content and encourages content sharing. It is by definition “social media,” and it is subject to the same confusion that plagues other hosts of user speech.
3. YouTube, Despite Its Publicly Shareable Content, Is Not a Public Site
Most of the world would consider YouTube to be a “public” site; however, as a social media platform (and the most popular one as of 2021),97 it is anything but. Because social media platforms are not considered public forums,98 they are largely free to censor user content, with the exception that government accounts on those
88 For a description of early- to mid-2000s websites and their purposes, see Clinton Nguyen, These Websites Defined the Early 2000s – Here’s Where They Are Now, BUSINESS INSIDER (Oct. 5, 2016), https://www.businessinsider.com/what-happened-to-early-2000s-websites-2016-10.
89 Perrin & Duggan, supra note 85.
90 See, e.g., Nadia M. Brashier & Daniel L. Schacter, Aging in an Era of Fake News, 29(3) CURR. DIRECTIONS PSYCHOLOGICAL SCI. 316 (2020).
91 Perrin & Duggan, supra note 85.
92 Id.
93 Social media refers to “web-based communication tools that enable people to interact with each other by sharing and consuming information.” Daniel Nations, What is Social Media?, LIFEWIRE (Jan. 26, 2021), https://www.lifewire.com/what-is-social-media-explaining-the-big-trend-3486616.
94 The Evolution of Social Media: How Did it Begin, and Where Could it Go Next?, ARTICLES: MARYVILLE UNIV., https://online.maryville.edu/blog/evolution-social-media/ (last visited Feb. 25, 2022) (referencing the site “Six Degrees”).
95 Id. (listing major social media platforms as of 2022).
96 See, e.g., Natalie Strossen, Transcript, Does the First Amendment Apply to Social Media Companies?, TALKSONLAW, https://www.talksonlaw.com/briefs/does-the-first-amendmentrequire-social-media-platforms-to-grant-access-to-all-users (last visited Feb. 25, 2022).
97 See Salvador Rodriguez, YouTube is Social Media’s Big Winner During the Pandemic, CNBC (Apr. 7, 2021), https://www.cnbc.com/2021/04/07/youtube-is-social-medias-big-winner-duringthe-pandemic.html.
98 Public Forum, supra note 30.
platforms may not silence users who are responding to the government’s speech.99 Despite this, the internet and “social media in particular”100 have become critical for the expression of protected speech.101 Jack Dorsey, a co-founder of Twitter, stated in 2018 that he believes Twitter should be a “public square” where “activists, marginalized communities, whistleblowers, journalists, governments and the most influential people in the world” have an “open and free exchange” of ideas.102 Recently, in April of 2022, billionaire Elon Musk of Tesla and SpaceX offered to purchase Twitter for $44 billion, stating that “free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated.”103 Musk’s offer was accepted,104 but critics have already leveled harsh criticism at Musk and his Barlow-esque dream of a cyberlibertarian platform.105 Those critics point out that a lack of moderation leads not only to a free-for-all arena for hate speech and bigotry,106 but also to potential legal consequences if Musk intends not to moderate Twitter’s European users.107
Regardless of Dorsey’s and Musk’s idealistic visions, only the government—not private platforms—can affirmatively create new public spaces.108 The government has not done so in the context of social media generally; the exception lies in (correctly) labeling the official account pages of government officials as truly public.109
This affirmative lack of government action in social media was the crux of the reason that Judge DeMarchi opted to dismiss the plaintiffs’ complaint in Divino Group. The plaintiffs’ first claim, for violation of their First Amendment rights under 42 U.S.C. § 1983,110 failed because to state a claim under § 1983, plaintiffs “must plead facts showing that a person acting under color of state law proximately
99 See CONG. RSCH. SERV., LSB10141, UPDATE: SIDEWALKS, STREETS, AND TWEETS: IS TWITTER A PUBLIC FORUM? (2019).
100 Id.
101 Id.
102 Jack Dorsey (@jack), TWITTER (Sept. 5, 2018, 1:56 PM), https://twitter.com/jack/status/1037399119810232321.
103 Bobby Allyn, Elon Musk Bought Twitter. Here’s What He Says He’ll Do Next, NPR (Apr. 25, 2022), https://www.npr.org/2022/04/25/1094671225/elon-musk-bought-twitter-plans.
104 Id.
105 See, e.g., Marc Ginsberg, Elon Musk’s Twitter ‘Free Speech’ Mirage, THE HILL (Apr. 29, 2022), https://thehill.com/opinion/technology/3471557-elon-musks-twitter-free-speech-mirage/; Mutale Nkonde, Elon Musk Says He Wants Free Speech on Twitter. But for Whom?, SLATE: FUTURE TENSE (Apr. 27, 2022), https://slate.com/technology/2022/04/elon-musk-free-speechtwitter-for-whom.html; Natasha Lomas, Will Elon Musk Put Twitter on a Collision Course with Global Speech Regulators?: ‘Free Speech Absolutism’ Versus Digital Regulation in Europe and Beyond…, TECHCRUNCH (Apr. 26, 2022), https://techcrunch.com/2022/04/26/elon-musk-freespeech-regulation/.
106 Ginsberg, supra note 105.
107 Lomas, supra note 105.
108 CONG. RSCH. SERV., supra note 99.
109 See, e.g., E.A. Gjelten, Can Government Officials Block Critics on Social Media?, LAWYERS.COM (Apr. 13, 2021), https://www.lawyers.com/legal-info/criminal/can-governmentofficials-block-critics-social-media.html.
110 Civil Action for Deprivation of Rights, 42 U.S.C. § 1983.
caused a violation of their constitutional or other federal rights.”111 Here, the plaintiffs acknowledged that Google et al. were private entities, but attempted to argue that the defendants “should be considered state actors” because the defendants “designated” YouTube as a public forum for free expression.112 Unfortunately for the plaintiffs, no person was acting under color of state law, nor did any government authority designate the platform a “public forum.”
The “public forum” label is one that platforms and courts tend to eschew due to the substantial legal implications associated with it. In 2018, Mark Zuckerberg of Facebook (now “Meta”)113 avoided answering Senator Ted Cruz’s repeated questions about whether Facebook was a “neutral public forum.”114 Similarly, in Prager University v. Google LLC (“Prager III”), the Ninth Circuit stated directly that YouTube is not a public forum.115 This seems counterintuitive considering the growing importance of the internet as a means of communication, notably among “digital natives” (“[c]hildren and young people born into and raised in a digital world (post-1980)”),116 but the reality is that platforms can censor as they wish.117
The internet’s role in public debate was particularly visible during both the 2020 United States election cycle and the COVID-19 pandemic.118 Fake news media sources and “echo chambers”119 proved extremely problematic,120 leading to
111 Order Granting Motion to Dismiss, Divino Group et al. v. Google LLC et al., No. 19-cv-04749-VKD, at *4.
112 Id.
113 See Mark Zuckerberg, Founder’s Letter, 2021, META (Oct. 28, 2021), https://about.fb.com/news/2021/10/founders-letter/, for information on the change to “Meta”; see also Press Release, Facebook.com, Introducing Meta: A Social Technology Company (Oct. 28, 2021) (discussing Facebook’s change in branding and vision for the future).
114 See Stephen Loiaconi, Zuckerberg Insists Facebook is ‘Platform for All Ideas,’ but Republicans Disagree, WJLA: ABC NEWS (Apr. 12, 2018), https://wjla.com/news/nationworld/zuckerberg-insists-facebook-is-platform-for-all-ideas-but-republicans-disagree.
115 Prager Univ. v. Google LLC, 951 F.3d 991, 995 (9th Cir. 2020).
116 Digital Natives, AM. LIBR. ASS’N: LIBRARY OF THE FUTURE, https://www.ala.org/tools/future/ trends/digitalnatives (last visited Apr. 29, 2022).
117 See CONG. RSCH. SERV., supra note 99.
118 See, e.g., Alessandro Gabbiadini et al., Together Apart: The Mitigating Role of Digital Communication Technologies on Negative Affect During the COVID-19 Outbreak in Italy, 11 FRONTIERS IN PSYCHOL. 1 (ECOLLECTION 2020) (2020); see also Adrian Wong et al., The Use of Social Media and Online Communications in Times of Pandemic COVID-19, 22(3) J. INTENSIVE CARE SOC’Y 255 (2020); Davey Alba & Sheera Frenkel, From Voter Fraud to Vaccine Lies: Misinformation Peddlers Shift Gears, N.Y. TIMES (Dec. 16, 2020), https://www.nytimes.com/2020/12/16/technology/from-voter-fraud-to-vaccine-lies-misinformation-peddlers-shift-gears.html (last updated Jan. 7, 2021) (discussing the spread of “false vaccine narratives” by right-wing figures in an attempt to “maintain attention and influence” after the 2020 election cycle).
119 See Matteo Cinelli et al., The Echo Chamber Effect on Social Media, 118(9) PROCEEDINGS NAT’L ACAD. SCI. 1 (2021).
120 See, e.g., Ingrid Hsieh-Yee, Can We Trust Social Media?, 25(1-2) INTERNET REF. SERVS. Q. 9 (2021); Mollie A. Ruben et al., Is Technology Enhancing or Hindering Interpersonal Communication? A Framework and Preliminary Results to Examine the Relationship Between Technology Use and Nonverbal Decoding Skill, 11 FRONTIERS IN PSYCHOL. (ECOLLECTION 2020) (2021).
politically based arguments over whether platforms were unfairly favoring a particular viewpoint in the wake of profound tides of misinformation.121
But rather than treat platforms as truly public, many (successfully) called for the platforms to create and enact policies purporting to fight that misinformation.122 Even Reddit—whose self-proclaimed policy is to allow “open and authentic” debate—now selectively bans and moderates content on its platform.123 Similarly, many platforms only selectively censor content,124 with the larger platforms opting to do so by algorithm.125
Without critical eyes on the datasets that such algorithms use for moderation, “benign” content—such as Sal Bardo’s—is at risk for inappropriate or unintended moderation.126 While a deeper discussion of the unintended and evidently biased results of algorithmic moderation is beyond the scope of this paper, it is notable that algorithmic bias is a topic of debate for both regulatory authorities and technology content providers.127 Whether manual or algorithmic, any undue or
121 See, e.g., Jessica Guynn, ‘You’re the Ultimate Editor,’ Twitter’s Jack Dorsey and Facebook’s Mark Zuckerberg Accused of Censoring Conservatives, USA TODAY (Nov. 17, 2020), https://www.usatoday.com/story/tech/2020/11/17/facebook-twitter-dorsey-zuckerberg-donaldtrump-conservative-bias-antitrust/6317585002/; Vera Bergengruen, Under Scrutiny, Facebook and Twitter Face Their Biggest Test on Election Day, TIME (Nov. 3, 2020), https://time.com/5906854/facebook-twitter-election-day/; Taberez Ahmed Neyazi et al., Misinformation Concerns and Online News Participation Among Internet Users in India, 7 SOC. MEDIA & SOC’Y 1 (2021); Sarah Kreps, The Role of Technology in Online Misinformation, BROOKINGS: FOREIGN POL’Y (June 2020); Denise-Marie Ordway, Fake News and the Spread of Misinformation: A Research Roundup, JOURNALIST’S RESOURCE (Sept. 1, 2017), https://journalistsresource.org/politics-and-government/fake-news-conspiracy-theories-journalismresearch/.
122 See, e.g., COMMUNITY GUIDELINES, supra note 18; COVID-19 Misleading Information Policy, TWITTER.COM, https://help.twitter.com/en/rules-and-policies/medical-misinformation-policy (last visited Oct. 16, 2021); Nick Clegg, Combating COVID-19 Misinformation Across Our Apps, FACEBOOK.COM (Mar. 25, 2020), https://about.fb.com/news/2020/03/combating-covid-19- misinformation/.
123 See Steve Huffman (@spez), REDDIT (Aug. 25, 2021), https://www.reddit.com/r/announcements/comments/pbmy5y/debate_dissent_and_protest_on_reddit/ (stating that “Dissent is a part of Reddit and the foundation of democracy. Reddit is a place for open and authentic discussion and debate. This includes conversations that question or disagree with popular consensus. This includes conversations that criticize those that disagree with the majority opinion. This includes protests that criticize or object to our decisions on which communities to ban from the platform”).
124 See Ashwini Ashokkumar et al., Censoring Political Opposition Online: Who Does It and Why, 91 J. EXPERIMENTAL PSYCH. 104031 (2020); for a discussion on selective content moderation, see also Sanaz Talaifar et al., Political Censorship in the Digital Age, SOC’Y PERSONALITY & SOC. PSYCH.: CHARACTER & CONTEXT (Oct. 28, 2020), https://www.spsp.org/news-center/blog/talaifarashokkumar-swann-political-censorship.
125 See James Vincent, Facebook is Now Using AI to Sort Content for Quicker Moderation, THE VERGE (Nov. 13, 2020, 9:00 am), https://www.theverge.com/2020/11/13/21562596/facebook-aimoderation; Francesca Duchi, Problematic Algorithms: YouTube’s Censorship and Demonetization Problem, MEDIUM.COM (Apr. 16, 2019), https://www.theverge.com/2020/11/13/21562596/facebook-ai-moderation.
126 See generally Jennifer Cobbe, Algorithmic Censorship by Social Platforms: Power and Resistance, 34 PHILOSOPHY & TECH. 739-66 (2021).
127 See, e.g., Alice Xiang, Reconciling Legal and Technical Approaches to Algorithmic Bias, 88(3) TENN. L. REV. 649 (2021).
discriminatory moderation currently goes without consequence, as evidenced by Divino Group. The social media platforms performing the moderation are heavily relying on the protections afforded to them by Section 230 of the CDA.
B. Section 230’s Applicability in Divino Group
One part of the Divino Group dismissal was based on Judge DeMarchi’s assessment of the plaintiffs’ claim that, by leaning on Section 230 of the CDA, the defendants’ “private conduct bec[ame] state action ‘endorsed’ by the federal government.”128 Judge DeMarchi relied on Prager III in determining that YouTube’s “hosting of speech on a private platform is not a traditional and exclusive government function” and that the Supreme Court has “consistently declined to find that private entities engage in state action, except in limited circumstances.”129 Judge DeMarchi stated that the standard is to “start with the presumption that conduct by private actors is not state action. [Plaintiff] bears the burden of establishing that Defendants were state actors.”130
However, Judge DeMarchi did not directly address the protections granted by Section 230, other than to say that Section 230 was designed “to keep government interference in [internet communication] to a minimum.”131 In so doing, Judge DeMarchi followed a long trend of selective application and misapplication of Section 230 protections.132 Such misapplication is understandable given the law’s tenuous relationship with technology, but it is no longer acceptable considering the current socially focused state of the internet—and the fact that the CDA was enacted in 1996, twenty-five years before the Divino Group (and other related) decisions.
128 Order Granting Motion to Dismiss, Divino Group et al. v. Google LLC et al., no. 19-cv-04749- VKD, at *4.
129 Id. at *4 (citing Prager Univ. v. Google LLC, 951 F.3d 991, 997-99 (9th Cir. 2020)).
130 Id. at *15 (citing Florer v. Congregation Pidyon Shevuyim, N.A., 639 F.3d 916, 922 (9th Cir. 2011)).
131 Id. at *17 (citing Batzel v. Smith, 333 F.3d 1018, 1027 (9th Cir. 2003)).
132 For a discussion of the history of Section 230 application/misapplication, see, e.g., Neil Fried, Why Section 230 Isn’t Really a Good Samaritan Provision, DIGITAL FRONTIERS ADVOCACY: BLOGS & OP-EDS (Mar. 24, 2021), https://digitalfrontiersadvocacy.com/blogs-and-op-eds/f/whysection-230-isnt-really-a-good-samaritan-provision (“Courts have concluded [the language of § 230(c)(1)] ‘creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.’ Consequently, judges have ruled that platforms cannot be held culpable for negligently, recklessly, or knowingly facilitating terrorism, harassment, sexual disparagement, non-consensual dissemination of intimate photos, housing discrimination, distribution of child sexual abuse materials, and other unlawful conduct by their users. Absent that potential liability, platforms are less likely to moderate content, not more.”); see also Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 HARV. L. REV. 1598, 1608 (2018) (“courts have grappled with [the paradox in applications of § 230] and occasionally broken with the expansive interpretation of the Good Samaritan provision to find a lack of § 230 immunity”).
C. The Intent of Section 230 and What Needs to Change
Courts and tech moguls alike afford Section 230 an astonishing amount of deference, often due to a misunderstanding of the intent behind the legislation.133 Just recently, the Supreme Court denied a request to “clarify the meaning” of the law, a request made because those bringing complaints under it so often fail to ascertain the statute’s purpose.134 Justice Thomas has opined that lower courts wrongly read into the Act more expansive protections than were intended,135 while Presidents Trump and Biden have both espoused distrust of the law and argued for its removal or revision (each in different ways).136
Just what was the intent behind Section 230? Section 230 is part of the Communications Decency Act (“CDA”), a piece of legislation that stemmed from the general idea that Congress should protect internet users—particularly children—from accessing unwanted materials such as pornography on the newly burgeoning World Wide Web.137 The CDA was Senator James Exon’s “battle” against pornographers, those he would refer to as “barbarians” at the digital gate, luring children in and turning the internet into a “red light district.”138 The House thoroughly and hotly debated the CDA’s constitutionality; Exon’s language was so overreaching that even Speaker of the House Newt Gingrich opposed it.139 Gingrich stated that Exon’s proposed limitations on access were “clearly a violation of free speech and … the right of adults to communicate with each other.”140
Amidst the debate over the CDA, a 1995 case, Stratton Oakmont, Inc. v. Prodigy Servs. Co. [hereinafter Prodigy], was brought before the courts.141 In Prodigy, a New York state court found an internet platform, Prodigy, liable for defamation because a Prodigy user had claimed that a bank had committed securities fraud; that
133 See, e.g., Matt Schruers, Myths and Facts about Section 230, DISRUPTIVE COMPETITION PROJ. [PROJECT DISCO] (Oct. 16, 2019), https://www.project-disco.org/competition/101619-myths-andfacts-about-section-230/ (illustrating the widespread ideas and misconceptions about the law alongside judicial precedent involving the law).
134 See Alan Z. Rozenshtein, Section 230 and the Supreme Court: Is Too Late Worse Than Never?, LAWFARE (Oct. 20, 2020), https://www.lawfareblog.com/section-230-and-supreme-court-is-toolate-worse-than-never.
135 Malwarebytes, Inc. v. Enigma Software Grp. USA, LLC, 141 S. Ct. 13, 13 (2020); see also Danielle Keats Citron & Benjamin Wittes, The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity, 86 FORDHAM L. REV. 401 (2017).
136 See CONG. RSCH. SERV., LSB10484, UPDATE: SECTION 230 AND THE EXECUTIVE ORDER ON PREVENTING ONLINE CENSORSHIP (Oct. 16, 2020), for then-President Trump’s most recent Executive Order on § 230. See Betsy Klein, White House Reviewing Section 230 Amid Efforts to Push Social Media Giants to Crack Down on Misinformation, CNN (July 20, 2021), https://www.cnn.com/2021/07/20/politics/white-house-section-230-facebook/index.html, for information on President Biden’s initial attempts to change the law.
137 141 CONG. REC. H8460 (1995), ARNOLD & PORTER LLP LEGISLATIVE HISTORY: P.L. 104-104, at 1.
138 141 Cong. Rec. S1953 (daily ed. Feb. 1, 1995).
139 Id.
140 See Robert Cannon, The Legislative History of Senator Exon’s Communications Decency Act: Regulating Barbarians on the Information Superhighway, 49 FED. COMM. L. J. 51 (1996).
141 Stratton Oakmont, Inc. v. Prodigy Servs. Co., 1995 WL 323710 (N.Y. Super. Ct. May 24, 1995).
is to say, the court decided that Prodigy had the same “publisher” liabilities as a traditional newspaper or other published source in acting as the “speaker” responsible for third-party content.142 This holding differed from that of an earlier New York case, Cubby, Inc. v. CompuServe, Inc., in which CompuServe was found not to be a “publisher” of online content.143 The Prodigy court distinguished its case by stating that Prodigy, unlike CompuServe, had adopted content standards that likened it enough to a traditional publisher that similar liabilities should apply.144
In May of 1995, during the debate surrounding the CDA, Prodigy was decided. Immediately following that decision, two Congressmen with insight into technology, Representatives Chris Cox and Ron Wyden, recognized the effect that labeling online platforms as “publishers” would have on the growth of the internet, and they responded with an amendment to the CDA.145 The Cox-Wyden amendment, titled the “Internet Freedom and Family Empowerment Act,”146 was introduced in June of 1995 as House Bill 1555 (104th Cong.).147 Cox stated that their bill would “protect computer Good Samaritans, online service providers, anyone who provides a front end to the Internet … who takes steps to screen indecency”;148 their bill would protect those entities from liability.149 Cox and Wyden thus introduced the language that would become Section 230. It was this language that convinced the House to pass a version of the CDA.150
The language of Section 230(c)(1) reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”151 This language has been deemed “the twenty-six words that created the Internet.”152 It enabled the free exchange of information online while protecting the hosts of that information from liability for speech that they simply could not logistically or feasibly control; in short, “Section 230 allowed companies such as Prodigy to determine what moderation practices and policies best serve their users, without being exposed to massive potential liability.” 153 Additionally, Section 230(c)(2) provides immunity for platforms that remove or restrict content that they consider “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable,
142 Id. at 8-9.
143 Cubby, Inc. v. CompuServe, Inc., 766 F. Supp. 135 (S.D.N.Y. 1991).
144 Stratton Oakmont, Inc. v. Prodigy Servs. Co., 1995 WL 323710 (N.Y. Super. Ct. May 24, 1995).
145 Jeff Kosseff, What’s in a Name? Quite a Bit, if You’re Talking About Section 230, LAWFARE BLOG, https://www.lawfareblog.com/whats-name-quite-bit-if-youre-talking-about-section-230 (last visited Nov. 13, 2021).
146 Id.
147 H.R. 1555, 104th Cong. (1995).
148 141 CONG. REC. H8470 (daily ed. Aug. 4, 1995) (statement of Rep. Christopher Cox).
149 Id.
150 Cannon, supra note 140.
151 47 U.S.C. § 230(c)(1).
152 JEFF KOSSEFF, THE TWENTY-SIX WORDS THAT CREATED THE INTERNET (1st ed. 2019).
153 See Jeff Kosseff & Eric Goldman, Correcting the Record on Section 230’s Legislative History, TECH. & MKTNG. L. BLOG (Aug. 1, 2019), https://blog.ericgoldman.org/archives/2019/08/correcting-the-record-on-section-230s-legislative-history-guest-blog-post.htm.
whether or not such material is constitutionally protected.”154 This is the language that platforms rely on in moderating, censoring, and demonetizing.
The House version of the CDA, which contained Section 230, and the earlier Senate version each became part of the Telecommunications Act of 1996 (“TCA”).155 A year later in Reno v. ACLU, the Supreme Court struck down the provisions of the TCA that made up the CDA—all except Section 230.156 Thus Exon’s battle against pornography largely disappeared from the TCA, and Section 230 now stands alone, with no context to clarify its meaning.
Section 230 was, and is, an incredibly important piece of legislation. In some ways it continues to serve its purpose admirably even after twenty-five years. However, our digital universe has changed. Social media like YouTube (not yet imagined in 1996) is a vital source of visibility, connectivity, and income for marginalized groups of creators such as those in the LGBTQ+ community. Allowing Section 230 to fully protect platforms from liability for inappropriate removal or flagging of income-bearing (and not otherwise violent, lewd, et cetera) content made by those creators, who rely on the platform for both exposure and income, is objectively a misapplication of the statute. The solution lies in a simple amendment.