Document Type

Article

Publication Date

3-31-2022

Publication Title

Policy & Internet

Volume

14

Issue

1

Pages

63-78

Abstract

A number of issues have emerged around how platforms moderate and mitigate “harm.” Although platforms have recently developed more explicit policies regarding what constitutes “hate speech” and “harmful content,” they often rely on subjective judgments of harm that pertain specifically to spectacular, physical violence, even though harm takes many shapes and complex forms. The politics of defining “harm” and “violence” on these platforms are complex and dynamic, reflecting entrenched histories of how control over these definitions shapes people's perceptions of them. Through a critical discourse analysis of policy documents from three major platforms (Facebook, Twitter, and YouTube), we argue that the platforms' narrow definitions of harm and violence are not merely insufficient but amount to a form of symbolic violence. Moreover, the platforms position harm as a floating signifier, imposing conceptions not only of what violence is and how it manifests, but of whom it impacts. Rather than changing the design mechanisms that enable harm, the platforms reconfigure intentionality and causality in an effort to stop users from being “harmful,” which, ironically, perpetuates harm. We conclude with a number of suggestions for addressing platform harm, chief among them a restorative justice-focused approach.

Comments

© 2022 The Authors. Policy & Internet published by Wiley Periodicals LLC on behalf of Policy Studies Organization.

https://doi.org/10.1002/poi3.290

Creative Commons License

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License (CC BY-ND 4.0).

Included in

Communication Commons
