Anthropology professor Kendra Calhoun is delving into the world of social media language to uncover how users creatively avoid censorship by algorithms that may flag their content as inappropriate or offensive. In a recent interview with News Bureau life sciences editor Diana Yates, Calhoun discusses the phenomenon she refers to as "linguistic self-censorship."
Calhoun, along with coauthor Alexia Fawcett from the University of California, Santa Barbara, highlighted in a recent report how platforms like TikTok are moderating content that touches on sensitive topics such as suicide, race, gender, and sexuality. This moderation has a disproportionate effect on users from marginalized communities, who fear having their content suppressed or removed.
The researchers found that users from all backgrounds engage in linguistic self-censorship, but those from marginalized groups express the most fear of content suppression. Communities most likely to have their posts removed or suppressed include Black, transgender, and queer individuals, as well as those with social or political beliefs that challenge those in power.
While social media platforms have public-facing community guidelines, moderators' interpretations of these guidelines can lead to unequal enforcement. Content moderation decisions are made internally by the companies, leaving users with little insight into how those decisions are reached.
To avoid censorship, users on TikTok creatively adjust their speech by manipulating spelling, sound, and meaning. They use emojis, homophones, and other linguistic resources to replace words or phrases that may trigger algorithmic filters. Examples include spelling "gay" as "ghey" or using the 🌽 emoji as shorthand for "corn," a homophone-based stand-in for "porn."
Some of the most playful examples of linguistic self-censorship involve creatively substituting words like "white" with references to white-colored objects, such as 🦷, 🧻, or 🚽. These new expressions not only help users avoid censorship but also build community by signaling shared knowledge, social connections, or specific viewpoints.
These linguistic innovations can have a lasting impact, migrating to other platforms or offline contexts. Words like "unalive" for "kill" or "accountant" for "sex worker" have found their way into everyday language, showing the influence of online language trends on offline interactions.
In conclusion, Calhoun's research sheds light on how users navigate content moderation on social media through linguistic self-censorship. Understanding these creative language strategies offers a fuller picture of online communication and of the ways marginalized communities adapt to protect their voices.