Finding Credibility: The Role of Content Moderation in Promoting Journalism

As social media grows in popularity, so does its power. According to the Pew Research Center, 72% of U.S. adults used at least one social media site in 2019 (Gallo & Cho, 2021, p. 4). These platforms, which host and organize user content and social interactions, are becoming problematic, especially with regard to engagement with news organizations. Journalism prides itself on credibility, a reputation for integrity in its content. Social media assists news organizations in spreading verified political, social and cultural updates. Growing concern about misinformation prompted social media companies to consider improving moderation policies to bolster the principle of credibility. For the purposes of this paper, moderation refers to the process by which users, moderators or automated systems identify content to flag or remove. Considering the role these websites play in distributing information published by news organizations, this paper explores the intersection of social media moderation and the spread of credible journalism. As platform moderation policy has evolved, from advancing freedom of expression to recognizing the role of algorithms and addressing their shortcomings, websites are returning to their roots and promoting credible content.

Early social media policies focused on freedom of expression, opening opportunities for news organizations to develop credibility in new spaces. Debates over platform moderation began in the mid-1990s. According to digital scholar Tarleton Gillespie (2018), lawmakers were concerned about “cyberporn,” or the posting of explicit sexual content online (p. 26). These concerns forced those in power to question the role of platforms and whether they should be held responsible for the content posted on their websites. In 1996, Congress passed Section 230 as part of the Communications Decency Act (CDA). While the Supreme Court struck down the CDA’s indecency provisions in Reno v. American Civil Liberties Union (1997), the decision did not affect Section 230 because it was not at issue in the case (Gillespie, 2018, p. 30). Section 230 establishes that social media platforms are not liable for content posted by users (Feeney & Duffield, 2020). As a result, early social media policies tended to be open, allowing individuals to share information from anywhere as long as it did not violate a platform’s community guidelines (which often address harassment and explicit content). This initial openness allowed news organizations to publish their content through a new medium and expand their reach. Journalists could continue to grow their credibility by receiving likes and shares from users. This validation allowed news organizations to modernize by engaging closely with their audience in the digital space. Initially, Section 230 facilitated the democratization of information and allowed journalists to keep building their reputation for integrity via user interaction.

Even though news organizations attempted to share credible content, not all users were receptive. Journalists were not the only individuals posting content; they had to compete with clickbait blog posts and glorified gossip. Not all content journalists posted on social media automatically made it into user feeds, and platforms had no incentive to push reputable content. Users acted as curators, choosing which posts to share with their networks and which to hide. Social media tools gave “skeptical audiences,” or individuals who chose not to trust journalists, the opportunity to receive information from both primary and secondary sources (Harper, 2010). A user base emerged that engaged with alternative news accounts not linked to official sources. While social media offered opportunities for news organizations to grow their reputation for integrity and promote their credible content, there was slippage between journalists’ goals and their reality.

In 2016, the narrative around content moderation shifted as users and journalists called for social media platforms to address the spread of misinformation and promote credible content. Posts on these websites became increasingly politicized, and the issue of moderation took on a larger role in the public eye. During the 2016 U.S. presidential election, the algorithms that curate content circulated falsehoods. Outlandish articles published by illegitimate sources were rewarded over substantive journalism (Gillespie, 2018, p. 202). Politically motivated Russian operatives disrupted the feeds of American users, flooding them with misinformation (Gillespie, 2018, p. 202). Journalists and users alike accused social media networks, such as Facebook, of suppressing credible news sources and demanded change (Grygiel, 2019). In essence, 2016 became a litmus test for social media platforms: would they change their algorithms to prioritize credible content or allow misinformation to spread?

Social media platforms failed to respond to calls to change their content moderation strategies, and credible journalism remained in the shadow of falsehoods. Analysts spent time picking apart the mechanisms of misinformation rather than drafting solutions (Roberts, 2019, p. 4). Companies used Section 230 to avoid “the responsibility of being custodians” of their platforms (Gillespie, 2018, p. 211). Demands to prioritize credibility went unaddressed because companies feared seeming one-sided if they moderated falsified political content. While 2016 increased awareness of issues surrounding content moderation, neither companies nor governments made substantive, public changes to increase the spread of legitimate news. Threats to the spread of credible content continued, hurting journalism’s place on social media platforms.

Despite the shortfalls of 2016, social media platforms made substantial changes to their content moderation policies in 2020 due to the COVID-19 pandemic, improving the visibility of credible journalism. In 2016, misinformation spread on social media was seen as causing only indirect harm because it was political (Feeney & Duffield, 2020). With COVID-19, however, information could mean the difference between life and death. Because COVID-19 misinformation cost more than votes, social media companies updated their public-facing policies and became more transparent about content moderation. Twitter updated its definition of harm to include content that contradicted guidance from global and local public health sources. Facebook, YouTube and Instagram removed misinformation about COVID-19 (Baker et al., 2020, p. 104). TikTok added a COVID-19 information center to its platform. Companies took spreading the truth into their own hands to protect their users. This worked in news organizations’ favor, as it helped ensure their credible content landed in user feeds. The COVID-19 pandemic forced social media platforms to update their content moderation policies, allowing journalists to promote their reputation for integrity and rebuild their audience.

Beyond the ostensibly apolitical nature of COVID-19 misinformation, other factors contributed to the improvements in social media moderation that boosted the visibility of credible content. COVID-19 content contributed to an “infodemic,” a flood of information so overwhelming that audiences did not know what to believe (First Draft News, 2021). Platforms experienced a rise in fake news that buried articles with journalistic integrity. As a result, social media companies began filtering their posts and highlighting information from official organizations and credible news outlets, increasing user interactions with journalists and the spread of information with integrity. Additionally, the COVID-19 pandemic coincided with the 2020 election, and fake political news circulated again. The added political stakes of the pandemic and the memory of misinformation in 2016 created pressure for change. Platforms announced “an unprecedented number” of alterations before the election (Lloyd et al., 2021). These changes included banning political ads and clearly labeling misinformation (Mozilla Foundation, 2021, pp. 3–4). Policy changes were publicly announced and visible to users, unlike the closed-door decisions of the past. Overall, the events of 2020 transformed content moderation policies and improved the visibility of credible content.

While content moderation helped increase the visibility of news organizations in 2020, there are still ways social media platforms can improve their policies. First, because of the way social media is designed, users settle into niches and interact with the same content repeatedly. The result is an echo chamber, which allows misinformation to spread quickly once it enters a circle (Flintham et al., 2018, p. 8). As a study by researchers at the University of Nottingham notes, “Fake news that supports a certain narrative often stands unchallenged within the echo chamber since evidence of its falsities would not find its way into the bubble” (Flintham et al., 2018, p. 8). Social media platforms need to alter their algorithms so that credible sources, such as news organizations, can engage with audiences united around misinformation. Creating a mechanism for journalists to spread the truth in these circles would increase the readership of credible news and combat the consequences of false information. Social media platforms must change the way they operate and burst the bubbles spreading fake news.

Furthermore, social media platforms should broaden the circulation of journalistic content by designing algorithms that push credible sources with differing perspectives into digital niches. Credibility does not absolve articles of bias, but promoting factual coverage that frames an issue differently invites exploration (Flintham et al., 2018, p. 8). Users would have the opportunity to engage with new ideas and burst their information bubbles. An algorithm that promotes different perspectives within social media niches would help credible sources engage with new audiences and improve readership. It would also limit agenda-setting by the platforms themselves. Overall, social media platforms should allow credible alternative perspectives into different user niches.
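To make this recommendation concrete, the following is a minimal sketch of what a diversity-aware feed re-ranker could look like. It is purely illustrative: the Post fields, the credibility scores and the diversity_slots quota are assumptions for this sketch, not a description of any platform’s actual ranking system.

```python
from dataclasses import dataclass

# Hypothetical illustration only: the fields, scores and quota below are
# assumptions for this sketch, not any platform's real ranking system.

@dataclass
class Post:
    source: str
    perspective: str    # stance or framing of the piece
    credibility: float  # 0.0-1.0, e.g., derived from fact-checker ratings
    engagement: float   # baseline relevance score for this user


def rerank_feed(posts, user_perspective, feed_size=10, diversity_slots=3):
    """Rank a feed by engagement, but reserve a few slots for credible
    posts that frame issues differently than the user's niche."""
    ranked = sorted(posts, key=lambda p: p.engagement, reverse=True)

    # Credible posts from outside the user's usual perspective.
    diverse = [p for p in ranked
               if p.perspective != user_perspective and p.credibility >= 0.8]

    # Guarantee a minimum number of out-of-niche, high-credibility posts,
    # then fill the rest of the feed with the usual engagement ranking.
    feed = diverse[:diversity_slots]
    feed += [p for p in ranked if p not in feed][:feed_size - len(feed)]
    return feed
```

Under these assumptions, a user whose feed is dominated by one framing would still see a guaranteed handful of highly credible posts from outside that framing, which is the bubble-bursting behavior recommended above.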

Finally, Section 230 is under scrutiny by the United States government, and proposed changes may help news organizations push their credible content. According to the Cato Institute, Section 230 was “one of the most discussed issues” in 2020 (Feeney & Duffield, 2020). Politicians on the left and right debated several proposals, including the Stop the Censorship Act, the Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act and the Protect Speech Act (Feeney & Duffield, 2020). Section 230 may well change during the Biden administration, as the president has called for the statute’s repeal. The Biden administration is expected to push for legislation that “tackles misinformation and election interference, perhaps by requiring ‘reasonable’ platform policies concerning such speech” (Feeney & Duffield, 2020). Amending Section 230 to increase scrutiny of big tech companies would force platforms to ramp up content moderation efforts, likely allowing more credible content to reach user feeds. This would help journalists build their reputation for integrity and reach audiences. Regulatory changes by the government could help credible journalism thrive on social media platforms.

Social media platforms play a significant role in Americans’ lives. As more individuals get their information from these websites, it is imperative to consider the role content moderation plays in the promotion of credible posts. Over the years, company policies have evolved. Beginning with the democratization of information under Section 230 in 1996, news organizations had the opportunity to build their reputation for integrity with digital audiences. However, as non-credible sources clouded these platforms, journalists faced challenges. The 2016 election hindered news organizations’ ability to promote credible content, as they competed with clickbait and misinformation. In 2020, however, social media platforms reformed their content moderation policies in response to the COVID-19 pandemic and the 2020 U.S. election, giving journalists a better chance of having their credible reporting widely seen. While there are still changes to be made by platforms and the government, news organizations can more easily promote their articles and reach new audiences. Content moderation defines visibility on social media platforms. With company policies and government regulations set to change in the coming years, journalism will have an opportunity to thrive in the digital age.

Baker, S. A., Wade, M., & Walsh, M. J. (2020). The challenges of responding to misinformation during a pandemic: Content moderation and the limitations of the concept of harm. Media International Australia, 177(1), 103–107. https://doi.org/10.1177/1329878x20951301

Feeney, M., & Duffield, W. (2020, November 2). A year of content moderation and Section 230. Cato Institute. https://www.cato.org/blog/year-content-moderation-section-230

First Draft News. (2021, March 29). Too much information: A public guide to navigating the infodemic. https://firstdraftnews.org/long-form-article/too-much-information/

Flintham, M., Karner, C., Bachour, K., Creswick, H., Gupta, N., & Moran, S. (2018). Falling for fake news. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1–10. https://doi.org/10.1145/3173574.3173950

Gallo, J. A., & Cho, C. Y. (2021). Social media: Misinformation and content moderation issues for Congress (CRS Report No. R46662). Congressional Research Service. https://fas.org/sgp/crs/misc/R46662.pdf

Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.

Grygiel, J. (2019, August 16). Algorithm overlords: How social media tech stifles journalism and interferes with democracy. The Milwaukee Independent. http://www.milwaukeeindependent.com/syndicated/algorithm-overlords-social-media-tech-stifles-journalism-interferes-democracy/

Harper, R. (2010). The social media revolution: Exploring the impact on journalism and news media organizations. Inquiries Journal, 2. http://www.inquiriesjournal.com/articles/202/the-social-media-revolution-exploring-the-impact-on-journalism-and-news-media-organizations

Lloyd, J., Lambe, K., Davidson, A., & Jakobson, C. (2021, March 8). Misinformation in the 2020 US elections: A timeline of platform changes. Mozilla Foundation. https://foundation.mozilla.org/en/blog/misinformation-in-the-2020-us-elections-a-timeline-of-platform-changes/

Mozilla Foundation. (2021, January 21). US election 2020: Platform misinformation policies tracker. https://docs.google.com/document/d/1-b5b_AgJhz7rAXaSpekwBwZWHx_A7iSleoOGxK0lZto/edit

Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press.