For the last few years I’ve been working on my master’s thesis, titled Understanding Harassment Inside Online Communities: The Role of Social Identity Threat, for a degree in Psychology, and it has been officially published! You can read the whole thing right now, but because most people don’t enjoy reading 88-page PDFs, I decided to adapt it into the modern format of overly long newsletter posts. This is Part 1 and covers the personal experiences and prior academic work that inspired my research. Part 2 covers the procedure and results from the two internet surveys about online harassment that I sent to several hundred participants. These posts are annotated in lazy-APA style with links to papers that aren’t paywalled, but you can check the thesis for proper academic citations. This was the largest self-directed project I’ve ever undertaken, so what question was important enough to spend months answering?
What Actually Causes Online Harassment?
If you’ve been on the internet for more than 5 minutes, it’s obvious that many people are very mean to each other online for often confusing reasons. Sometimes this is just “harmless” trolling, and other times it literally destroys lives. Online harassment is a huge problem in modern society and seems to be getting worse every year (ADL 2022). Psychologists, journalists, and other researchers have been studying this for decades, and it’s clear that online harassment (like most complicated social behaviors) happens for many different reasons. Some of these reasons are tied to unique properties of the internet as a medium, but many are outgrowths of the evolved human instinct to defend important groups like our families and tribes. Humans have been harassing (and killing) each other for thousands of years, and despite the promise of 90s hacker utopianism the internet hasn’t made us all get along. It’s a huge problem, but what could I do about it?
I’ve spent my life playing and developing online multiplayer games, starting as a programmer on City of Heroes in 2005 and later helping to create Fortnite (no, I can’t give you V-Bucks). Most of my work has been technical (I am now an Unreal Engine contractor), but I’ve always been interested in the social dynamics of the games I played and helped to develop. I started on this path when I took classes for a Game Development minor in college and enjoyed them enough to graduate with a dual major in Computer Science and Psychology. Multiplayer online games provide a natural laboratory for Social Psychology research because they expose players to a larger variety of social situations than they would encounter in real life. Many players use that freedom to participate in toxic behaviors like online harassment (Kowert 2020) that make things worse for everyone, and as a developer I’ve always wanted to know why. If we understand why people choose to harm each other, we can design better social systems that discourage that behavior.
The other big thing that made me care about online harassment was GamerGate, which led members of the gaming community to seriously harass game developers, journalists, and players. Looking back now, GamerGate is best understood as part of a larger trend towards organized online harassment campaigns, but at the time I was shocked by the number of death threats targeted at people I knew and respected. When I started my master’s degree in Psychology (while taking a break to recover from the burnout common among game developers), I made a point to take classes covering topics like online misinformation and psychological network effects that could explain why people would violently threaten complete strangers. So what did I learn about the causes of online harassment?
We Harass To Harm Our Enemies
A lot of the academic research into online harassment (often using the very 90s term Cyberbullying) focuses on aggressive situations where the harassment is used to harm individuals belonging to groups that are seen as a threat. Some of this is obviously related to explicit real-world prejudices like racism and sexism that have been extensively studied in Social Psychology and elsewhere. More recently, the evidence shows that political partisanship strongly motivates online harassment (Pew 2021). Pundits have written hundreds of articles bemoaning the modern death of “civility”, but it is not at all surprising that humans want to harm members of groups they see as explicit competitors in the fight for important resources and ideological supremacy. The simplest theory to explain online harassment is that it is a modern extension of the aggressive impulses that evolved in our human ancestors (McDonald 2012).
A competing theory is that online harassment is mostly caused by the evils of specific aggressive activities like playing violent video games. This has been debated endlessly in academic and political circles (Prescott 2018), so I’m not going to spend too much time on it here. The basic summary is that doing aggressive things in one context can lead to an increase in aggression in other contexts, but only if the contexts overlap in some important way. Being shot at by virtual avatars in a multiplayer game does not motivate us to shoot real people in the future, but the frustration of losing a match may motivate us to harass someone online. We need to investigate the details of how aggression in some contexts can lead to violence and psychological attacks in other contexts instead of falling back on broad statements like “violent games are evil”. The truth is that a large number of humans (often male) enjoy participating in aggressive activities and it is not surprising that some of those happen on the internet.
Another popular theory is that online harassment is mostly caused by the anonymity and lack of consequences for bad behavior on the internet. This is called the Online Disinhibition Effect (Suler 2004), or the Greater Internet F-wad Theory for those of us who grew up reading webcomics, and helps to explain the high rates of harassment in contexts like online gaming where people use pseudonymous handles. The research has shown that while this does have a real effect (Lefler 2012), it cannot explain most of the harassment that takes place in modern internet contexts. Social networks like Facebook that spent decades trying to outlaw anonymity (in order to sell more ads) still have to deal with tons of harassment from people who are perfectly willing to use their full legal names to post obvious racial slurs.
Related to that theory is the general concept of Moral Disengagement, which describes how people convince themselves that ethical standards do not apply to their specific situation. One way to do this is to use moral justification and reasoning to claim that the harmful behavior serves some sort of greater moral purpose. Reliance on euphemisms (“locker room talk”) and diffusion of responsibility (everyone else does it) are also commonly seen online. In a study directly investigating justifications for toxic behavior in video games (Beres 2021), researchers asked participants to listen to audio clips from real Twitch streams where female players experienced online harassment. Participants were then asked to rate the clips for toxicity and say if they would report the culprit. The data analysis showed that participants with higher levels of moral disengagement and online disinhibition reported significantly lower toxicity ratings for obvious cases of harassment and were less likely to report the offenders.
Clearly these demographic and internet-medium factors help to explain why online harassment is so common, but they can’t explain every harmful case. Also, these topics are already being studied by well-funded research groups and often inspire exhausting political arguments, so I didn’t see much point in trying to weigh in as a random graduate student. In order to perform some actually useful psychology research, I decided to investigate the contextual factors which hadn’t already been exhaustively explored. That led me to ask why we often harass people that we do not see as our enemies.
We Harass To Police Our Friends
One of the biggest inspirations for my thesis was the work presented by researchers and developers at the Fair Play Summit and other sessions that I attended at the annual Game Developers Conference. The Fair Play Alliance is an interest group within the video game industry that supports the use of scientific research to improve the lives of game developers and players, and they have spotlighted research on harassment inside video game communities. Ten years ago I used to play a LOT of League of Legends, which has always had a harassment problem and has been extensively studied by talented researchers (Kwak 2015, Kordyaka 2018, Monge 2021). League of Legends provides an interesting context to study harassment because most of it takes place between players on the same team who need to work together in order to defeat another team which is clearly identified as the enemy.
Many of the earliest multiplayer video games were explicitly hyper-aggressive: games like Quake encouraged players to kill other player avatars in viscerally violent ways. These sorts of games tend to feature a lot of “trash talking”, where players insult each other in order to demotivate or distract the enemy. Inside the game development community, many designers thought that encouraging players to meaningfully cooperate would make them nicer to each other. But it turns out the exact opposite is true: it’s fairly easy to ignore insults in free-for-all games where they are obviously motivated by competition, but insults from teammates hit harder in games like League of Legends and Dota 2 that require players to work together for extended periods of time. Research into harassment in cooperative games has found that it often functions to identify and punish teammates for the perceived failures of the group as a whole. Some of this is psychological displacement, but it is also related to more complicated social dynamics inside a group.
In the physical world, harassment is often used inside human social groups to police the behavior of new or prospective members with the goal of increasing the success of the group as a whole. Inside tight-knit groups like sports teams or college fraternities we call this hazing, and the research of Aldo Cimino (2011 and 2013) provides a good explanation for why harmful hazing behavior is so common across a wide variety of human institutions. Working within the framework of Evolutionary Psychology, Cimino uses empirical data to show that hazing which makes it painful to fully join a group can filter out “weak” potential members and increase cohesion among those who remain. This kind of hazing is common online inside smaller communities that mirror conventional sports teams and school clubs.
Trolling behavior can perform the same functions as hazing in larger, more open online communities. Researchers who study the origins of internet trolling in online newsgroups (Graham 2019) explain how it started as the purposeful spreading of misinformation with the goal of maintaining clear boundaries between the “real members” of an open group and imitators. Longtime members are afraid of the posers and newbies who want to enjoy the benefits of group membership without learning how to properly behave inside a group. Over time the definition of trolling has expanded to include all sorts of disruptive behavior (Cook 2017), but it often functions alongside memes to police the fuzzy borders of open online groups and punish outsiders and newcomers for violating implicit group rules.
If people often use online harassment to police the boundaries and behavior inside online communities, what psychological theories could help with reducing the worst forms of this kind of harassment? Several of my graduate classes helped me to understand the importance of identity-related theories for explaining this behavior.
We Harass To Protect Our Social Identities
Social Identity Theory originated in the 1970s based on work by Henri Tajfel and John Turner, and it describes how we form the part of our self-concept that is based on membership in relevant social groups. Most of the current research and controversy related to identity theory focuses on visible identities like race and gender, but the same psychological principles apply to optional identities like political partisanship and membership in arbitrary groups like r/eatsandwiches (which is clearly superior to r/sandwiches). Our psychological identity is very important for explaining motivation in general, and social identity is particularly relevant for explaining harassment.
One of the core ideas of social identity theory is that we all want to maximize the positive distinctiveness of the social groups we identify with (Tajfel 1986). This means that we all want to belong to groups that provide a form of positive and unique value different from what is provided by other groups. A social identity threat is any action that we perceive as a threat to the value of our social identities. This includes obvious threats like racist insults, but also threats to the uniqueness, practical usefulness, or reputation of any group that (we think) we belong to. These threats can be roughly categorized (Branscombe 1999) as threats to an individual’s status inside a group, to the distinctiveness of a group compared to other groups, to the practical usefulness of a group, and to the moral reputation of a group.
Using this framework, harassment inside social groups could be seen as happening in response to perceived social identity threats. We haze new and prospective members of our groups to keep weak or undesirable people out and raise the practical value of a group. We use trolling and exclusionary language to clearly identify the boundaries of vaguely defined groups and make those groups feel more special. We harass the other players on our League of Legends team to punish them for making mistakes that lower the perceived value of the group, even if that harassment will make everyone on the team lose the game and feel awful. Social identity theory could explain many of the behaviors that make the internet worse for everyone.
Do Social Identity Threats Cause Harassment Inside Online Communities?
So that finally leads to the core research question of my thesis: Do situations inside online groups that create social identity threat motivate online harassment? While this seems like a pretty obvious question, as far as I know it has not been answered using scientific methods. In order to approach this question, I decided to investigate a variety of situations where normal internet users might endorse the use of harassment inside a group. While doing my background research I found good evidence that the perception of moral faults motivates increased harassment of strangers on Twitter (Blackwell 2018) and that gender-related identity threat motivates increased sexual harassment (Maass 2003) but it was unclear if social identity threat in general would motivate increased harassment inside online groups.
Now that I had identified an interesting research question worth answering, I had to figure out how to actually answer it! This post is long enough so in the next one I’ll explain the method I used to approach the problem and the interesting stuff I learned during that process. There are charts and everything.