A review on abusive content automatic detection: approaches, challenges and opportunities

Bedour Alrashidi*, Amani Jamal, Imtiaz Khan, Ali Alkhathlan

*Corresponding author for this work

Research output: Contribution to journal › Review article › peer-review

7 Citations (Scopus)

Abstract

The increasing use of social media has led to the emergence of a new challenge in the form of abusive content. Abusive content takes many forms, such as hate speech, cyberbullying, offensive language, and abusive language. This article presents a review of approaches to automatic abusive content detection. Specifically, we focus on recent contributions that use natural language processing (NLP) technologies to detect abusive content in social media. We adopt the PRISMA flow chart to select related papers and filter them using inclusion and exclusion criteria. As a result, we selected 25 papers for meta-analysis, and another 87 papers published between 2017 and 2021 are cited in this article. In addition, we searched three repositories for available datasets related to abusive content categories and highlight key observations about the obtained results. Moreover, after a comprehensive review, this article proposes a new taxonomy of automatic abusive content detection covering five different aspects and tasks. The proposed taxonomy gives insights into and a holistic view of the automatic detection process. Finally, this article discusses and highlights the challenges and opportunities in automatic abusive content detection.

Original language: English
Article number: e1142
Journal: PeerJ Computer Science
Volume: 8
DOIs
Publication status: Published - 9 Nov 2022

Keywords

  • Abusive content
  • Hate speech
  • Machine learning
  • NLP
  • Offensive language
