This is the first in a series of posts reporting on the Global Meeting on Gender, Sexuality and the Internet held in Port Dickson, Malaysia from April 13 to 17, 2014, to envision a feminist Internet and to evolve a framework for it. Around 50 activists working on gender rights, sexual rights and Internet rights in different parts of the world had come together for the meeting.

When the hashtag #ImagineAFeministInternet took off on Twitter a couple of weeks ago, it soon attracted a spate of witty but misogynistic tweets, sparking off a hashtag war. Last year, the hashtag #misogynyalert, aimed at calling out misogynistic content, had met with similar attempts at appropriation. Merely airing an opinion online is at times enough to invite thrust-and-parry exchanges with online misogynists. Going by this counter, sexual slurs are doled out on Twitter every moment. How does one envisage and build a feminist Internet where it is difficult for digital misogyny to thrive or survive? What are the responses and strategies that go into its making?

Systemic Misogyny and Online Cliques

From seemingly random and disruptive comments by trolls and abusers intended to humiliate and silence women, digital misogyny has grown into an organised tool to curtail and invalidate women’s social, political and religious views online. The most obvious manifestation of this is cliques of Twitter users carrying out concerted attacks on people with opposing views.

Anita Sarkeesian, who started a crowdfunding campaign to support her series of videos examining gender tropes in video games, received a barrage of abuse and attacks: attempts to hack into her Twitter and Google accounts, a denial-of-service attack on her website, rape threats on Twitter, and “Beat Up Anita Sarkeesian”, a video game that required users to repeatedly hit a photo of her face. Her supporters were subjected to similar levels of online abuse.

While online abuse and misogyny directed at public figures and women who have a substantial following are quicker to come to the fore, women from small communities (such as local journalists) receive abuse and hate on a smaller, more localised level.

Attacks on Women-Centric Content

Digital misogyny is also directed at the content and representation of women, and not merely at individual women or groups. For example, Wikipedia acknowledges a gender disparity in its content: only around 9 percent of its active editors are women, and they make fewer edits, on average, than male editors. This gender gap among editors leads to content that is less gender-sensitive and lacking in women’s points of view, and often to the absence or removal of women-centric content. A well-known example is the proposed deletion of a Wikipedia article on the wedding dress of Kate Middleton for being a “fluffy, girl topic”. Gendered imbalances online work in subtle and insidious ways that make digital misogyny difficult to spot, address, and contain.

Strategies for Tackling Online Misogyny

Reporting abuse: This includes reporting individual instances of abuse, but also mobilising like-minded people to collectively report users who publish or send misogynistic content. Some services primarily evaluate posts as abusive depending on the number of times they are flagged or reported, before delving into their contents. Mobilising support is often the only way to get such services to take note of abuse without taking legal recourse.

Ignoring abuse: Ignoring abusers whose intent may be to seek attention takes the wind out of their sails.

Mocking the abuser and enlisting the help of friends and followers in doing so: An activist narrated personal experiences of getting help from well-placed male friends who responded on her behalf to slurs directed at her on Twitter. The fear of being laughed at or made fun of inhibits abusers from going further. Also, when men call out or castigate other men for their misogynistic behaviour, it tends to act as a deterrent. Groups of friends or followers can act as a counter-clique in situations where attacks are focussed and organised.

Naming and shaming the abuser: Reposting abusive content while identifying its perpetrator helps gather support and empathy. However, this strategy is more effective when dealing with individual abusers and not large groups.

Creeplists: A closed Facebook group of women bloggers in India maintains a crowdsourced list of people who have indulged in any kind of behaviour that made them uncomfortable. It helps them have each other’s back and pre-emptively identify an abuser.

The Way Ahead

Digital misogyny as hate speech: Social norms consider misogynistic speech acceptable when it is classified as a joke. There is a need to get misogynistic speech recognised as hate speech. In Lithuania, the No Hate Speech campaign, which has been running in 47 countries since 2012, successfully reached out to groups reported for misogynistic hate speech.

Sensitising the police about digital threats: When women do take legal recourse, the first stumbling block is often the police’s lack of familiarity with the platform or the way it works. This makes it difficult to articulate how the instances of abuse occurred and the imminent threats they may pose. One way ahead could be to introduce protocols for the police and raise awareness among law enforcement personnel.

Legislation: In some countries, online harassment is not legally recognised as harassment, and laws do not encompass digital violence against women. A point of caution here, however, is that “women-friendly” laws can end up overreaching and coming down on free speech, an example being the infamous Section 66A of the Information Technology Act in India, introduced by the 2008 amendment.

Troll of the Day: A crowdsourced website that names and shames abusers, logs their acts and invites viewers to vote for a “troll of the day”.

Multi-lingual ‘abuse reviewers’, equipped with knowledge of cultural context: The teams of reviewers who scrutinise flagged content on platforms such as Facebook are often not equipped to understand cultural connotations and linguistic contexts, and so fail to recognise content as misogynistic or hateful. More localised reviewers would be more effective in spotting and taking down misogynistic content.

Engage male ‘neutral’ audiences: Taking a cue from practices used to fight street sexual harassment, urging men to not remain “silent bystanders” could potentially act as a deterrent to digital misogyny.

Mobilise teenagers: Make misogyny look and feel uncool to teenagers.

Data mining: Use data tracing to track down the sources of misogynistic content.

Image: Roadmap to tackling digital misogyny.
