Networks of Negativity: Gaining Status and Reinforcing Stereotypical Norms in Online Aggression

Diane Felmlee, Sara Francisco, Chris Julien

Contact: dhf12@psu.edu

Aggressive, harmful messages on social media are a routine occurrence. Approximately 13,000 bullying-related messages and at least 419,000 sexist slurs appear daily on Twitter, and these messages can spread widely through networks of online interaction. Recipients of negative messages often experience a range of problematic psychological, emotional, and behavioral consequences. Yet little is known about the factors that lead individuals to post damaging online messages. In this paper we examine the processes that contribute to networks of cyber aggression on Twitter. We argue that electronic aggression emerges out of two standard group processes: the establishment of "pecking orders," or status hierarchies, and the reinforcement of social norms. We maintain that perpetrators attempt to raise their social standing in the virtual world by sending negative messages that they believe will gain attention and be retweeted and liked by their followers. In addition, aggressive, deleterious tweets tend to reflect social norms that reinforce traditional stereotypes, such as those regarding women and people of color. We therefore expect that the negativity of tweet sentiment will be positively related to the spread of retweets, and that the content of negative messages will contain sexist and/or racist words that reflect traditional stereotypes.

We collected two samples of tweets directly from the Twitter API. The first sample contained approximately 12,500 tweets containing the derogatory slur "b*tch." We find that retweeted interactions are significantly more negative in sentiment than those that are not retweeted (p < .01), and the tweets that account for the largest percentage of retweets are also the most negative in sentiment. Our second sample included just over 35,000 tweets containing the terms "Asian" or "Mexican." In this sample, we also uncover a central role for retweets in spreading negative sentiment and racial slurs: retweets are significantly more negative than original tweets (p < .001) and replies (p < .001). Furthermore, retweets make up a larger share of negative interactions than of positive interactions, demonstrating that negative messages are more likely to be retweeted than positive ones.

Overall, we find that highly negative, gender- and race-related tweets are likely to gain notice and be dispersed within social media. The content of harassing tweets often reinforces traditional, normative behavior as well. Highly retweeted, nasty messages that accuse women of being a "b*tch," for example, suggest that women should align with conventional, normative expectations to be docile and nice. We document numerous instances in which tweets that proliferate on Twitter in a virtual network of retweets, replies, and mentions bolster sexist and/or racist stereotypes. Our findings demonstrate the capacity of aggressive tweets to spread across social media and create complex social networks. The results suggest that the practice of sending such messages is strategic: given that highly negative messages garner extensive online attention, perpetrators likely engage in cyber aggression, consciously or not, to increase their online social status. They do so by sending messages that reinforce traditional, normative stereotypes that are apt to receive online attention, and this strategy appears to be successful.
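The sketch below illustrates how a comparison of this kind could be run in practice; it is not the pipeline used in the study. It assumes Twitter API v2 access via tweepy, VADER sentiment scoring, and a Welch t-test from scipy, all of which are illustrative choices (the bearer token, query terms, and sample size are placeholders, not details from the paper).

```python
# Illustrative sketch (not the authors' code): collect tweets matching study terms,
# score sentiment, and compare retweeted vs. non-retweeted messages.
import tweepy
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from scipy import stats

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder credential

client = tweepy.Client(bearer_token=BEARER_TOKEN)
analyzer = SentimentIntensityAnalyzer()

# Pull recent original tweets containing the search terms, keeping the public
# metrics needed to separate retweeted from non-retweeted posts.
query = '"Asian" OR "Mexican" lang:en -is:retweet'
response = client.search_recent_tweets(
    query=query,
    tweet_fields=["public_metrics"],
    max_results=100,
)

retweeted, not_retweeted = [], []
for tweet in response.data or []:
    # VADER's compound score runs from -1 (most negative) to +1 (most positive).
    score = analyzer.polarity_scores(tweet.text)["compound"]
    if tweet.public_metrics["retweet_count"] > 0:
        retweeted.append(score)
    else:
        not_retweeted.append(score)

# Two-sample (Welch) t-test: are retweeted messages more negative on average?
if retweeted and not_retweeted:
    t_stat, p_value = stats.ttest_ind(retweeted, not_retweeted, equal_var=False)
    print(f"mean sentiment (retweeted):     {sum(retweeted) / len(retweeted):.3f}")
    print(f"mean sentiment (not retweeted): {sum(not_retweeted) / len(not_retweeted):.3f}")
    print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")
```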
