In October 2021, I was home alone, in bed with the baby and still recovering from the C-section. It hurt to walk and I didn’t want to expose my newborn to COVID-19; I didn’t answer the door no matter how hard the maintenance guy knocked.
Then my phone rang. It was my downstairs neighbor. The person knocking wasn’t the maintenance guy, she explained. It was the police.
Painfully, I made my way to the door. Someone had called in a report of a rape in my apartment, the officers told me.
It wasn’t the first time I had been “swatted,” a tactic in which someone files a false police report to lure armed officers to a target’s home. In the worst cases, swatting can be fatal.
Since I started writing and organizing against the far right in 2018, I have been harassed by organized white supremacists. Threats and intimidation can start on the Internet, but they don’t stay there. Online harassment by the Philadelphia Proud Boys quickly escalated into a vandalism visit to my building in 2019 and a planned intimidation rally against me in Clark Park in 2020.
It was when I began organizing to shut down the explicitly pro-terrorist Nazi forums known as Terrorgram that the harassment became life-threatening. Terrorgram exists on Telegram, a messenger platform known for its hands-off approach to moderation. Telegram’s laissez-faire environment has allowed a universe of terrorist Nazi forums to proliferate, including channels specifically dedicated to doxxing (the publication of personal information such as addresses, phone numbers, and the names of family members) and harassment. Their targets included not only researchers like me, but also people whose only crime was belonging to a hated group. If you have a public profile and are trans, Jewish, Black, or a member of one of the many other marginalized identities, you could very well find yourself in their crosshairs.
My work studying the dynamics of Telegram has taught me time and time again that moderation is important. On Telegram, Proud Boys eagerly shared my personal details, including addresses for me and my family, with each other in their chats.
On other channels, Nazis photoshopped my severed head onto a meat hook, then shared those violent images alongside my personal information and exhortations to lethal violence against me. Shortly after, neighbors told us that strangers were knocking on the door of our building. Later, someone swatted my parents, telling the police that my father had a gun and intended to kill me. A SWAT team surrounded their house with guns drawn, ordering my terrified sister and mother out of the house and onto the lawn.
That’s what I think of when I hear Elon Musk denounce alleged Twitter censorship. This is what I consider when he proudly announces that he is a “free speech absolutist” and that this mindset will inform his approach to Twitter, the platform he is about to buy and control.
Musk’s understanding of free speech is fundamentally flawed. The First Amendment guarantees citizens the right to say what we want, not the right to be platformed. Twitter is no more obligated to let its users tweet the n-word or incite insurrection than Beyoncé is obligated to hand her microphone to any random audience member who wants a duet. Freedom of expression gives us the right to say what we want, but it also gives others the right to decide whom they do and don’t want to amplify.
Indeed, an anything-goes approach tends to silence the marginalized even as it empowers those who are already powerful. Platforms that forgo meaningful content moderation, such as Telegram and 8chan, inevitably become spaces where hateful people bully marginalized people into fear-instilled silence. When online forums default to the kind of “free speech absolutism” Musk espouses, there is a chilling effect: vulnerable users begin to censor themselves to avoid harassment, knowing that hate online can easily lead to violence offline.
Twitter’s current moderation model is far from perfect. Its clumsy automation often lets hateful content stay up while falsely flagging legitimate posts as banned content. Musk’s crusade for “free speech,” however, has nothing to do with protecting vulnerable people from censorship. If he succeeds, his quest will only fuel harassment and allow toxic actors to overrun the site.
If Musk gets his way, Twitter will still be a censored platform. The censorship will simply take the form of mass harassment and death threats aimed at silencing those who are already struggling hardest to be heard.
Gwen Snyder is a Philadelphia-based researcher, organizer, and writer who works to counter fascism and the far right. @gwensnyderPHL