PlumX Metrics

Predatory Conversation Detection Using Transfer Learning Approach

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), ISSN: 1611-3349, Vol: 13163 LNCS, Page: 488-499
2022
  • Citations: 7
  • Usage: 0
  • Captures: 22
  • Mentions: 1
  • Social Media: 0

Metrics Details

  • Citations: 7
    • Citation Indexes: 7
  • Captures: 22
  • Mentions: 1
    • News Mentions: 1

Most Recent News

711 - Artificial intelligence and child sexual abuse: A rapid evidence assessment

Agarwal N, Ünlü T, Wani MA & Bours P 2022. Predatory conversation detection using transfer learning (included in review)

Conference Paper Description

Predatory conversation detection on social media can proactively protect netizens, including children and adolescents, from being exploited by sexual predators. Earlier studies have mostly employed machine learning approaches such as the Support Vector Machine (SVM) to detect such conversations. Since deep learning frameworks have shown significant improvements on various text classification tasks, this paper proposes a deep learning-based classifier for detecting predatory conversations. Furthermore, instead of designing the system from scratch, transfer learning is employed: the potential of the pre-trained BERT (Bidirectional Encoder Representations from Transformers) model is leveraged to solve the predator detection problem. BERT is mostly used to encode the textual information of a document into a context-aware mathematical representation. Including this pre-trained model addresses two major problems, namely feature extraction and Out of Vocabulary (OOV) terms. The proposed system comprises two components: a pre-trained BERT model and a feed-forward neural network. To design the classification system with the pre-trained BERT model, two approaches (feature-based and fine-tuning) are used. Based on these approaches, two solutions are proposed, namely BERT_frozen and BERT_tuned, with the latter performing better than existing classifiers in terms of F-scores.
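
The description above names the two BERT-based configurations but not their implementation. Below is a minimal sketch of how such a setup might look in PyTorch with the Hugging Face transformers library; it is not the authors' code. The class name PredatorClassifier, the freeze_bert flag, the hidden size, and the bert-base-uncased checkpoint are illustrative assumptions, with freeze_bert=True corresponding to the feature-based BERT_frozen variant and freeze_bert=False to the fine-tuned BERT_tuned variant.

    # Sketch only: pre-trained BERT encoder + feed-forward classification head.
    import torch
    import torch.nn as nn
    from transformers import BertModel, BertTokenizerFast


    class PredatorClassifier(nn.Module):
        """Pre-trained BERT followed by a feed-forward network (assumed layout)."""

        def __init__(self, freeze_bert: bool = True, hidden_dim: int = 256):
            super().__init__()
            self.bert = BertModel.from_pretrained("bert-base-uncased")
            if freeze_bert:
                # Feature-based approach (BERT_frozen): BERT weights stay fixed,
                # only the feed-forward head is trained.
                for param in self.bert.parameters():
                    param.requires_grad = False
            # Feed-forward head over the context-aware [CLS] representation.
            self.head = nn.Sequential(
                nn.Linear(self.bert.config.hidden_size, hidden_dim),
                nn.ReLU(),
                nn.Dropout(0.1),
                nn.Linear(hidden_dim, 2),  # predatory vs. non-predatory conversation
            )

        def forward(self, input_ids, attention_mask):
            outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
            cls_repr = outputs.last_hidden_state[:, 0]  # [CLS] token vector
            return self.head(cls_repr)


    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    batch = tokenizer(
        ["example chat message ..."], padding=True, truncation=True,
        max_length=128, return_tensors="pt",
    )
    model = PredatorClassifier(freeze_bert=False)  # fine-tuning, BERT_tuned-style
    logits = model(batch["input_ids"], batch["attention_mask"])

Under this reading, the only difference between the two variants is whether gradients flow into the BERT encoder during training; the feed-forward head is trained in both cases.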
