Linguistic Analysis of Toxic Language on Social Media
Abstract
The growing popularity of online communication platforms has spurred considerable interest in the automatic detection of toxic language, since user anonymity and gaps in content moderation can foster hostile environments. Linguistic analysis can be an important tool for discovering language patterns that discriminate between toxic and non-toxic language, leading to the development of more robust detection systems. In this paper, we investigate several linguistic features of Dutch toxic online comments in comparison with non-toxic comments. We focus on three main research questions concerning the differences between the two types of comments: average length, lexical diversity, and linguistic standardness. More specifically, we compared the average number of tokens per comment, the type-token ratio, (variants of) the content-to-function-word ratio, the propositional idea density, the use of emoji and emoticons, and the punctuation-to-non-punctuation ratio, and we measured the level of linguistic standardness by combining features such as word choice, character flooding, and unconventional capitalization. The analysis was performed on the LiLaH dataset, which contains over 36,000 Dutch Facebook comments related to the LGBT community and migrants. We conclude that toxic comments differ from their non-toxic counterparts with respect to all the investigated linguistic features. Additionally, we compared our results to Slovene and English. Our analysis suggests that there are commonalities, but also remarkable differences, in the linguistic landscape of toxic language across the three languages, which may motivate further research.
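To make the surface-level metrics named above concrete, the following is a minimal illustrative sketch, not the paper's actual pipeline: it computes the number of tokens, the type-token ratio, and the punctuation-to-non-punctuation ratio for a single comment. The naive regex tokenizer and all function names are assumptions for illustration only; the study's tokenization may differ.

```python
import re
import string

def tokenize(text):
    # Naive illustrative tokenizer: word sequences or single
    # punctuation characters, lowercased (an assumption, not
    # the tokenizer used in the paper).
    return re.findall(r"\w+|[^\w\s]", text.lower(), re.UNICODE)

def type_token_ratio(tokens):
    # Lexical diversity: number of unique types / number of tokens.
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def punctuation_ratio(tokens):
    # Punctuation tokens divided by non-punctuation tokens.
    punct = sum(1 for t in tokens
                if all(c in string.punctuation for c in t))
    non_punct = len(tokens) - punct
    return punct / non_punct if non_punct else 0.0

# Hypothetical Dutch comment used only to exercise the functions.
comment = "Wat een onzin!!! Echt niet normaal..."
toks = tokenize(comment)
print(len(toks), type_token_ratio(toks), punctuation_ratio(toks))
```

On this example the tokenizer yields 12 tokens (each `!` and `.` counted separately), so character flooding such as `!!!` inflates the punctuation ratio, which is one intuition behind comparing these measures across toxic and non-toxic comments.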