This report was produced at the request of Women, Action, and the Media (WAM!). Aside from Jaclyn Friedman, founder and former executive director of WAM!, the authors of this document are academics from the fields of computational social science, anthropology, sociology, network science, and computer science. The academic authors conducted this analysis outside their academic institutions in an unpaid, volunteer capacity.
This report was reviewed by five academic reviewers in a double-blind, revise-and-resubmit peer review process chaired by Zeynep Tufekci, Assistant Professor at the University of North Carolina, Chapel Hill. The authors are deeply grateful for the detailed feedback and high standards of these reviewers, whose efforts have led to a much clearer, stronger report.
Summary of findings
When people experience harassment online, from individual threats or invective to coordinated campaigns of harassment, they have the option to report the harassers and content to the platform where the harassment has occurred. Platforms then evaluate harassment reports against terms of use and other policies to decide whether to remove content or take action against the alleged harasser—or not. On Twitter, harassing accounts can be deleted entirely, suspended (with content made unavailable pending appeal or specific changes), or sent a warning. Some platforms, including Twitter and YouTube, grant “authorized reporters” or “trusted flaggers” special privileges to identify and report inappropriate content on behalf of others.
In November 2014, Twitter granted Women, Action, and the Media (WAM!) this authorized reporter status. From November 6–26, 2014, WAM! took in reports of Twitter-based harassment, assessed them, and escalated reports as necessary to Twitter for special attention. WAM! used a special intake form to collect data and promised publicly to publish what it learned from the data it collected. In three weeks, WAM! reviewers assessed 811 incoming reports of harassment and escalated 161 reports to Twitter, ultimately seeing Twitter carry out 70 account suspensions, 18 warnings, and one account deletion. This document presents findings from this three-week project; it draws on both quantitative and qualitative methods.
Findings focus on the people reporting and receiving harassment, the kinds of harassment that were reported, Twitter’s response to harassment reports, the process of reviewing harassment reports, and challenges for harassment reporting processes.
Year of publication: 2015