Collage with the statue La Pensadora (Thinking Woman) by José Luis Fernández in Spain
‘The Architectures of Online Harassment’ was the first post in a two-part series describing the context and motivations of Tactical Tech’s work addressing the problem of online harassment through the lens of interface design. In this second post, I describe the results and outcomes of the workshop that Caroline Sinders and I developed. Caroline will also independently publish a post detailing the methods and processes developed for this workshop and its results.
Caroline has been working with design-thinking approaches to understanding the architectures of harassment, and we were keen to apply this with a group of people working on and experiencing harassment online.
‘Design thinking’ is an array of techniques, broadly involving participatory methods for taking apart and understanding problems. It may involve approaches like prototype-building and iteration; in some cases, it can also be used to develop solutions or to gain conceptual clarity. (We’re not referring to the corporate applications of design thinking, nor to the debates in the business world about whether design thinking is ‘over’ or not.) In this workshop, we applied design thinking as an approach to collectively building out stories in order to understand the problem of harassment from a different perspective.
The design-thinking techniques and exercises used in our workshop were created by Caroline specifically for studying harassment. We wanted to present these to the group to see how they could be used to take apart the dynamics of different kinds of online interactions and to understand their constituent parts and actors.
We gave each group a different use case: one group worked on campaign harassment, meaning harassment from many sources; the other dealt with interpersonal, one-on-one harassment.
We did not give the groups explicit details of the cases they were to work on. We wanted to see how they developed the storyline and responded to prompts. This is a significant feature of design thinking when it is used to learn about problems: not to be prescriptive or specific, but to allow for ambiguity in constructing what the problem is. Unlike campaigning and advocacy, which script specific stories for the purpose of amplification or a call to action, this approach attempts to set received notions of an issue to one side. We imagined this exercise as a sort of projective technique, knowing that everyone in the room had a fair degree of familiarity with the topic.
Working in small groups, participants were given choices of storylines to develop. One group was asked to develop a scenario of either two exes having a political disagreement online, or of a fight between two classmates that evolves into a bullying situation. The other group was asked to develop a story of an activist who faced harassment online, or of a journalist being attacked by readers for a story she had written. In response, the first group told the story of an anti-police-brutality group that was attacked online for posting a report it had released. The second group constructed the story of two friends using Snapchat, where a personal interaction escalated into anger and bullying.
Design beyond solutionism
In the story of the two friends – “Casey” and “Logan” – Snapchat becomes a site of emotional escalation; they start aggressively screenshotting each other’s Snaps as ‘evidence’ of the other’s sarcastic, snarky, and bullying behaviour. The group felt that this escalation towards mutually assured destruction could perhaps have been addressed by the two simply talking to each other. Was it possible to not get into a back-and-forth reactionary conversation? Who or what could suggest other forms of communication rather than just talking through Snaps?
The group discussed the possibility of the system itself being designed to become more responsive to the nature of the interaction. Could the system log and identify rapid exchanges between the two as a potential indication of an escalation, or an impending fight, and possibly introduce a pop-up or alert asking whether the two just needed to talk elsewhere, off the platform? What if that rapid exchange was not a fight but actually a passionate exchange between lovers? If we think that a system could be programmed to become more responsive, what does that actually mean, what are the trade-offs, and what are the limits of interface design?
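To make the group’s idea concrete, here is a minimal sketch in Python of the kind of rate-based heuristic they were imagining: flag a conversation when messages are exchanged unusually quickly, and let the interface decide whether to show a prompt. The class name, thresholds, and prompt text are all hypothetical, chosen only for illustration; this is not how any platform actually implements such a feature.

```python
from collections import deque
import time

class EscalationDetector:
    """Hypothetical sketch: flag a conversation as possibly escalating
    when too many messages are exchanged within a short time window.
    The thresholds are illustrative, not empirically derived."""

    def __init__(self, max_messages=10, window_seconds=60.0):
        self.max_messages = max_messages
        self.window_seconds = window_seconds
        self.timestamps = deque()  # arrival times of recent messages

    def record_message(self, now=None):
        """Record one message; return True if the exchange rate
        suggests an escalation the interface might respond to."""
        now = time.time() if now is None else now
        self.timestamps.append(now)
        # Discard messages that have fallen out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window_seconds:
            self.timestamps.popleft()
        return len(self.timestamps) > self.max_messages

# Usage: feed in each message event as it arrives.
detector = EscalationDetector()
if detector.record_message():
    # A real interface might show a gentle prompt here, e.g.
    # "This conversation is moving fast -- want to talk it over elsewhere?"
    print("Possible escalation: consider prompting the users.")
```

Notably, a timing signal alone cannot tell a fight from a passionate exchange; any such heuristic encodes assumptions about what a ‘normal’ conversation looks like, which is precisely the trade-off the group identified.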
In unpacking the history of research about anonymity online, Nate Matias finds that there is a problematic assumption that it will be possible to ‘design away’ harassment by making systems more responsive.
Some initiatives are already underway. Machine learning is one of the ways in which Google’s Jigsaw is attempting this with its Perspective API. Jigsaw is also working with the Wikimedia Foundation to address abuse and trolling on Wikipedia.
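For a sense of what this looks like in practice, the sketch below sends a comment to the Perspective API’s comments:analyze endpoint and reads back a toxicity score. Treat it as a minimal example under stated assumptions, not a recipe: it assumes the publicly documented request and response format and the Python `requests` library, the API key is a placeholder, and attribute names and quotas are controlled by Jigsaw and may change.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: request a key from Google
URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    f"comments:analyze?key={API_KEY}"
)

def toxicity_score(text):
    """Ask the Perspective API how 'toxic' a comment reads (0.0 to 1.0)."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    data = response.json()
    # The summary score is the model's overall toxicity estimate.
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity_score("You are a wonderful person."))
```

A score near 1.0 suggests the model reads the comment as likely toxic; what a platform does with that number (hide, flag, prompt, ignore) remains a design decision, not something the model settles.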
Matias cautions designers to check the assumptions framing the reality of online harassment, such as the notion that design can provide a solution to a socio-cultural problem like harassment, and urges them to commit to experimenting with and testing approaches in order to learn more about how to respond to, and prevent, the ways in which internet spaces are used and abused. He writes:
“Designers need to acknowledge that design cannot solve harassment and other social problems on its own. Preventing problems and protecting victims is much harder without the help of platforms, designers, and their data science teams…”
Our workshop was a test of an ethnographic method for examining how specific cohorts use different platforms, and how this could be a way to understand the intricacies of how power works in these spaces. We discussed hopes that such a granular approach need not feed into a “one ring to rule them all” solution, and that ethnography might actually inspire more custom interfaces and online experiences.
In the scenario where the two friends began squabbling over an image shared on Snapchat, participants suggested moderation, along with social media arbitration, mediation, and community-regulation models, as alternative routes to de-escalation.
Enforcing ToS
The group working on the activist storyline talked about possible hopes they might want to share with designers of systems. One of these had to do with limiting the information trail that enables doxxing. They looked up the Terms of Service of Pastebin, the site where doxxed information tends to be shared, and found that Pastebin actually prohibits the sharing of personal information. So an important recommendation for lobbying and advocacy would be for a platform like Pastebin to actually enforce its ToS.
Screengrab of Pastebin’s Terms of Service, taken on January 19, 2017
What next
In conclusion, participants felt that confronting online harassment cannot just come from silos working independently, and that methods like this one offer perspectives that those working as activists, or Freedom of Speech advocates, cannot always access. While the approach and method might have been somewhat confusing at first, some felt that getting into the details of cases of harassment outside of an advocacy or direct-support model was illuminating (good feedback for us!).
One of the reasons we chose to work with activists and designers together is precisely this need to break out of silos, and we’re happy to see new collaborations and thinking in this area.
Connecting this back to the Tactical Tech motivations I started this two-part post with: we’re also acutely aware that while online harassment generates interest as an issue, there is still much confusion about what it is in legal, technical, business, and semantic terms. The frame of Freedom of Speech does not necessarily capture the breadth of what women experience as harassment and restriction of their speech online. This workshop was a small step in expanding that thinking. We will continue to support feminist and women’s rights communities in engaging in these discussions with platforms and other members of civil society. Look out for more Sinders-Tactical Tech collaborations in the future!
Maya Ganesh is Tactical Tech’s Director of Applied Research and leads the organisation’s Gender and Technology Project. The project convenes Gender and Technology Institutes in the Global South, working to expand debates around the social and political implications of identity-based harassment online.
Caroline Sinders is an Eyebeam and BuzzFeed Open Lab fellow, as well as an artist and machine-learning designer, currently based between San Francisco and Brooklyn. At her fellowship she is exploring surveillance, conversation, politics, online harassment, and emotional trauma within digital spaces and social media.
This workshop was supported by Tactical Tech and was possible thanks to the work and active participation of Anne Wizoreck, Dia Kayyali, David Huerta, Emily Deans, Holly Hudson, Rachel Uwa, Rebecca Rukeyser, Trammell Hudson, and Vanessa Rizk.