"How Do You Quantify How Racist Something Is?": Color-Blind Moderation in Decentralized Governance Journal Article uri icon

Overview

abstract

  • Volunteer moderators serve as gatekeepers for problematic content, such as racism and other forms of hate speech, on digital platforms. Prior studies have reported volunteer moderators' diverse roles in different governance models, highlighting the tensions between moderators and other stakeholders (e.g., administrative teams and users). Building upon prior research, this paper focuses on how volunteer moderators moderate racist content and how a platform's governance influences these practices. To understand how moderators deal with racist content, we conducted in-depth interviews with 13 moderators from city subreddits on Reddit. We found that moderators heavily relied on AutoMod to regulate racist content and racist user accounts. However, content crafted through covert racism and "color-blind" racial frames was not addressed well. We attributed these challenges in moderating racist content to (1) moderators' concerns about power corruption, (2) arbitrary moderator team structures, and (3) evolving forms of covert racism. Our results demonstrate that decentralized governance on Reddit could not support local efforts to regulate color-blind racism. Finally, we discuss conceptual and practical ways to disrupt color-blind moderation.

publication date

  • September 28, 2023

has restriction

  • hybrid

Date in CU Experts

  • January 28, 2024 12:26 PM

Full Author List

  • Wu Q; Semaan B

author count

  • 2

Electronic International Standard Serial Number (EISSN)

  • 2573-0142

Additional Document Info

start page

  • 1

end page

  • 27

volume

  • 7

issue

  • CSCW2