“Take your foot off the brake.”
“Men are hardwired to escalate. To touch. To initiate sex.”
“But our upbringing has put brakes on that drive. We are constantly admonished to not be creepy, sleezy or pervy by our parents, peers and media. Some of this is understandable in cases where men touch women who don’t want to be touched by them.”
“We get bombarded with images where women reject men’s advances. In reality, this rarely ever happens. Women love to be touched by men and have sex with men.”
This is quoted from a thread titled “Escalation for Dummies” on r/TheRedPill, a user-led community on Reddit dedicated to “discussion of sexual strategy in a culture increasingly lacking a positive identity for men.”
The Red Pill refers to a scene in The Matrix where Neo is asked to either take the blue pill and remain asleep or take the red pill and learn the truth. The truth defined by this Reddit community is that the feminist movement has led to the oppression of men.
The community has become a haven for men to discuss alpha male versus beta male behavior and strategies to pick up women. Other posts on the subreddit contemplate the ethics of rape and posit that the number of men a woman has slept with predicts infidelity.
Created on Oct. 25, 2012, the subreddit has amassed hundreds of thousands of members. Its content has become so heinous that Reddit has “quarantined” it, making it much more difficult to access than other communities.
Who are the members of this community, and do you know any of them?
In 2017, The Daily Beast published an article exposing a New Hampshire state lawmaker as the creator of r/TheRedPill. An account titled @PK_athiest created the forum back in 2012. The Daily Beast revealed through its investigation that the account belonged to Robert Fisher, the son of a preacher and two-time representative of New Hampshire’s ninth district.
Fisher’s first post welcoming members to the community stated, “A guy can approach a woman, be assertive, and if she’s attracted, there’s a hookup. Yet, if he’s not attractive, this EXACT behavior is ‘creepy’… If you’re unattractive, feminism tells us, you’re likely a rapist… men are tiptoeing to make sure they don’t accidentally become rapists themselves.”
While Fisher resigned in 2017, questions remain. Who are the thousands of members of this community? What type of person feels inclined to share such hateful rhetoric online, and do they exist within your social orbit?
Dr. Caitlin Carlson, an associate professor at Seattle University whose research focuses on media law, policy and ethics from a feminist perspective, believes the anonymity of the internet brings out a different side of people. Because Robert Fisher could hide behind multiple aliases, he felt free to express his true opinions about women online.
“They’re people who would not necessarily say these things in person, right?” Carlson explained. “We know that they tend to be overwhelmingly male, they tend to be younger.”
Carlson believes that a sense of powerlessness drives young men toward hate speech. They may feel that their place in the social hierarchy is under attack.
“They tend to be easily influenced by or persuaded by certain forms of what I want to say media, but it’s not what we would think of as traditional media. This kind of extremist 4chan or 8chan or Gab is another place, obviously Reddit,” Carlson said.
Interestingly, Dr. Joseph Uscinski, a professor at the University of Miami who studies public opinion and mass media with a specific focus on conspiracy theories and related misinformation, disagrees with Carlson about the types of people who make negative posts online.
“I don’t think that the gender and age stuff really matter all that much,” Uscinski said.
Uscinski has had first-hand experience with targeted hate and online harassment. Most recently, a post shared on 4chan, an anonymous imageboard website, accused Uscinski of being a Jewish professor involved in covering up a Jewish conspiracy.
“It included a picture of what was supposedly me, but it was some 90-year-old guy with a beard in the front of a classroom that wasn’t even a UM classroom,” Uscinski said.
The person behind the conspiracy theory claimed to be a former student. The post said that Uscinski tried to convince students that Jewish people weren’t behind any conspiracy, but the alleged former student knew better.
“I’m pretty sure, obviously, this was not a student, because not only am I not Jewish, but the picture wasn’t me. And I don’t think any of my students would ever write any such thing,” Uscinski said.
The conspiracy theory expert described a widely debated idea of mismatched behavior: that most internet users are nice people in person, but that the online environment draws out negative behavior.
“There have been careful studies in the last few years that show that there isn’t really a mismatch. Most people that are online are completely fine and normal. It’s just that a lot of online behavior comes from a very small number of people,” Uscinski said.
Uscinski referenced a 2021 study published by Cambridge University Press. Researchers Alexander Bor and Michael Bang Petersen concluded that online discussions are not more hostile than offline discussions, even though they may feel that way. Hostile individuals, they wrote, can easily identify targets and supporters online and thus have a larger reach.
“A lot of bad online behavior comes from people who are antisocial in real life,” Uscinski said. “There are other studies like this one, where you find the people who act racist online are racist offline. People who are extremists online are extremists offline in various ways.”
People who feel the need to spread hate online probably have high levels of narcissism, psychopathy and other antisocial personality traits, according to Uscinski. These angry, hateful people have always existed, but with the vast expansion of the internet, anyone can now seek out their content at will.
Sometimes, they will seek out your content.
Lillian Engelhard, a senior marine science and biology major at the University of Miami, enjoys posting videos on TikTok flaunting her trendy outfits and hairstyles. It is a benign hobby that usually attracts likes and comments only from her closest friends. But when she posted a video on Nov. 18 showing off a cool new hairstyle, @user7916755242539 commented, “Not even fucking close lmao.”
A closer look showed that the comment came from a private account with zero followers and no profile picture. Those characteristics, along with the generic username, suggest a burner account created by someone who wishes to comment anonymously.
To combat hostile content and the manipulation of accounts, each platform curates its own terms of service and community standards. These standards define the rules and help the company moderate content on its platform. If a user violates them, their post is subject to moderation.
Seattle University’s Carlson explained that traditional content moderation comes in three parts. The first is the set of community standards everyone agrees to when they sign up. The second is automatic detection, which relies on algorithms and artificial intelligence. The third depends on users themselves interacting with content.
“The last piece is what we call community flagging, which is when individual users say, ‘Hey, I think this violates the community standards,’ that is then reviewed, either by the AI or eventually, potentially a human content moderator,” Carlson said.
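In rough terms, that pipeline can be reduced to a short sketch. The example below is purely illustrative; the rule list, flag threshold and function names are hypothetical stand-ins under the three-part description above, not any platform’s actual code.

```python
# Illustrative sketch of the three-part moderation pipeline Carlson describes.
# The rule list, threshold and function names are hypothetical, not any
# platform's actual system.

from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str
    flags: int = 0        # community flags: "I think this violates the standards"
    removed: bool = False


# Part 1: the community standards everyone agrees to at sign-up,
# reduced here to a toy list of banned phrases.
COMMUNITY_STANDARDS = ["targeted slur", "threat of violence"]


def automated_detection(post: Post) -> float:
    """Part 2: automatic detection. A real system would use a trained model;
    this stand-in returns 1.0 if a banned phrase appears, else 0.0."""
    return 1.0 if any(rule in post.text.lower() for rule in COMMUNITY_STANDARDS) else 0.0


def human_review(post: Post) -> bool:
    """Placeholder for a paid human moderator's judgment call."""
    return automated_detection(post) > 0.0


def moderate(post: Post, flag_threshold: int = 3) -> Post:
    # Clear violations are removed by the automated layer outright.
    if automated_detection(post) >= 1.0:
        post.removed = True
        return post
    # Part 3: community flagging queues borderline posts for human review.
    if post.flags >= flag_threshold:
        post.removed = human_review(post)
    return post


if __name__ == "__main__":
    p = Post(author="anonymous_burner", text="Not even close lmao", flags=4)
    print(moderate(p).removed)  # False: flagged and reviewed, but no rule matched
```

The point of the sketch is only the division of labor: the standards define the rules, automation catches the obvious cases, and flags from users route everything else toward a human.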
The technology and logistics required to effectively moderate online content have yet to be developed. In the meantime, social media companies lean on paid human moderators and users who volunteer to do free work.
“We’re free labor, right? Like when you ask users to flag content, it’s great for you. But I do think there should be a combination of AI and human content moderators,” Carlson said.
AI moderation struggles to understand context and is limited by the languages it supports. Because it does not operate in every language, it cannot be deployed effectively throughout the world, according to Carlson.
Each subreddit has its own rules, enforced by moderators, or “mods.” These are people with a vested interest in the community who police content themselves to keep the forum on topic and to prevent it from alienating members.
“Each individual subreddit will have its own rules, which it can choose when and how to enforce,” Carlson said. “Same thing on Facebook groups, like kind of these enclaves where unless the AI finds something considered a violation of the company’s overarching policy it is not necessarily a violation of the subreddit or Facebook group’s policy.”
Naomi Rodriguez, a 20-year-old pursuing an Interarts Performance BFA at the School of Music, Theatre, and Dance at the University of Michigan, is a moderator on Discord, another social media platform built around user-driven communities. Rodriguez moderates a fan-run touring server where people interested in live music can connect with one another.
“What I’m doing now is basically just monitoring who is entering the server and not essentially verifying, but checking if they’re a real person,” Rodriguez said. “And, also, just checking the participation within the server because you want to keep it really engaged, but also keep it to a point where it’s not too much, and people are being respectful of each other.”
The work she does daily is unpaid.
Rodriguez has been on social media since she was 10 years old and has a passion for helping people navigate the internet safely.
While the Discord server she currently moderates doesn’t often produce nasty or hateful content, she found herself having to moderate conversations about LGBTQ rights on a server she created for her high school.
“I saw a shift in the maturity of the conversation between the juniors and seniors, and the juniors are much more leaning towards a very hateful narrative,” Rodriguez said. “I remember having to be like, if you continue this, I have to essentially remove you and this is definitely gonna go to the school board.”
Rodriguez said she struggles to balance schoolwork and content moderation.
“It’s just having to really pay attention to the messages because I’m also a student. So, I’m also just like, I have classes and everything and then I get notifications and I can’t pay attention right now because I’m doing my classwork or something like that,” she said.
When the content produced by an online community becomes too volatile for moderators to police, there are other ways to restrict the content.
Reddit uses a quarantine status for communities it deems too inflammatory for most users.
“It makes them less visible or accessible to general users. So, they’re not necessarily shutting them down. They’re just not showing,” Carlson said. “The idea is to try and keep it from bleeding out into other spaces.”
r/TheRedPill is a quarantined community. Before allowing access to the forum, a warning pops up informing users that the community is quarantined because of shocking or highly offensive content. It asks, “Are you certain you want to continue?”
Perhaps this is a question we all should be asking ourselves. It’s easy to mark “yes” when asked if you agree to the terms of service when you create an account on Facebook or Twitter, but do you know exactly what you’re agreeing to? Are the community standards up to your standards?