PlumX Metrics

Improving the Identification of Abusive Language Through Careful Design of Pre-training Tasks

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), ISSN: 1611-3349, Vol: 13902 LNCS, Pages: 283-292
2023
  • Citations: 1
  • Usage: 0
  • Captures: 1
  • Mentions: 0
  • Social Media: 0


Conference Paper Description

Deep Learning-based solutions have become popular in Natural Language Processing due to their remarkable performance on a wide variety of tasks. In particular, Transformer-based models (e.g., BERT) have gained popularity in recent years because of their outstanding performance and their ease of adaptation (fine-tuning) to a large number of domains. Despite these results, fine-tuning such models in the presence of informal writing, especially text containing offensive words and expressions, remains challenging owing to limited vocabulary coverage and a lack of proper task contextual information. To overcome this issue, we propose adapting the BERT language model to the abusive language detection task. To achieve this, we constrain the language model by adapting its two default pre-training tasks and retraining the model parameters. The resulting configurations were evaluated on six abusive language datasets, showing encouraging results: the proposed approaches achieved a remarkable improvement over the base model. In addition, the results were competitive with state-of-the-art approaches, yielding a robust and easy-to-train model for the identification of abusive language.
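
As a rough illustration of this kind of domain adaptation (a minimal sketch, not the authors' exact procedure; the checkpoint name, example corpus, and hyperparameters below are assumptions), the following Python snippet continues pre-training a BERT model on in-domain text with the masked language modeling objective using the Hugging Face Transformers library, after which the adapted checkpoint could be fine-tuned for abusive language classification.

# Sketch: domain-adaptive pre-training of BERT with masked language modeling (MLM).
# Assumptions: bert-base-uncased as the starting checkpoint, a tiny placeholder
# corpus, and default MLM masking; the paper's actual setup may differ.
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

# Hypothetical in-domain corpus: raw posts from an abusive-language dataset.
texts = [
    "example post one from the target domain",
    "example post two from the target domain",
]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Tokenize the corpus; each item becomes a dict with input_ids / attention_mask.
encodings = [tokenizer(t, truncation=True, max_length=128) for t in texts]

# Randomly mask 15% of tokens (the standard BERT MLM setting).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm=True, mlm_probability=0.15)

args = TrainingArguments(output_dir="bert-abusive-dapt",
                         num_train_epochs=3,
                         per_device_train_batch_size=16)

trainer = Trainer(model=model, args=args,
                  train_dataset=encodings, data_collator=collator)
trainer.train()

# The adapted checkpoint can then be fine-tuned for abusive-language detection.
trainer.save_model("bert-abusive-dapt")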

