PlumX Metrics

Enhancing accuracy of compressed Convolutional Neural Networks through a transfer teacher and reinforcement guided training curriculum

Knowledge-Based Systems, ISSN: 0950-7051, Vol: 306, Page: 112719
2024
  • Citations: 0
  • Usage: 0
  • Captures: 0
  • Mentions: 1
  • Social Media: 0

Metrics Details

  • Mentions: 1
    • News Mentions: 1
      • News: 1

Most Recent News

New Networks Study Results Reported from Anna University (Enhancing Accuracy of Compressed Convolutional Neural Networks Through a Transfer Teacher and Reinforcement Guided Training Curriculum)

2024 DEC 19 (NewsRx) -- By a News Reporter-Staff News Editor at Network Daily News -- Research findings on Networks are discussed in a new …

Article Description

Model compression techniques, such as network pruning, quantization and knowledge distillation, are essential for deploying large Convolutional Neural Networks (CNNs) on resource-constrained devices. Nevertheless, these techniques frequently lead to an accuracy loss, which affects performance in applications where precision is crucial. To mitigate this accuracy loss, a novel method integrating Curriculum Learning (CL) with model compression is proposed. Curriculum learning is a training approach in machine learning that progressively trains a model on increasingly difficult samples. Existing CL approaches primarily rely on manually designed schemes for scoring the difficulty of samples and for pacing the progression from easy to difficult examples during training. This gives rise to limitations such as inflexibility, the need for expert domain knowledge and a decline in performance. We therefore propose a novel curriculum learning approach, TRACE-CNN (Transfer-teacher and Reinforcement-guided Adaptive Curriculum for Enhancing Convolutional Neural Networks), to address these limitations. Our semi-automated CL method consists of a pre-trained transfer teacher model whose performance serves as a measure of difficulty for the training examples. Furthermore, we employ a reinforcement learning technique to schedule training according to sample difficulty rather than establishing a fixed scheduler. Experiments on two benchmark datasets demonstrate that our method, when integrated into a model compression pipeline, effectively reduces the accuracy loss usually associated with such compression techniques.
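The core idea described above — a pre-trained transfer teacher scoring sample difficulty, with a schedule that exposes easy samples before hard ones — can be sketched in a few lines. This is a minimal illustration under assumptions not taken from the paper: the teacher's per-sample cross-entropy loss is used as the difficulty score, and a simple linear pacing function stands in for the reinforcement-learned scheduler that the authors actually propose. All names here (`teacher_difficulty`, `pace`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the transfer teacher: in the paper's setting this
# would be a trained CNN; here we take the teacher's probability for the true
# class and use its cross-entropy loss as the difficulty score
# (higher loss = harder sample).
def teacher_difficulty(true_class_probs):
    return -np.log(np.clip(true_class_probs, 1e-12, 1.0))

# Simulated teacher confidences for 10 training samples.
teacher_conf = rng.uniform(0.1, 0.99, size=10)
difficulty = teacher_difficulty(teacher_conf)

# Easy-to-difficult ordering: train first on samples the teacher finds easy.
curriculum_order = np.argsort(difficulty)

# A fixed linear pacing function exposing a growing prefix of the curriculum
# at each step. The paper replaces such a hand-designed scheduler with a
# reinforcement-learned one; the linear pace is only for illustration.
def pace(step, total_steps, n_samples, start_frac=0.2):
    frac = start_frac + (1.0 - start_frac) * step / total_steps
    return curriculum_order[: max(1, int(frac * n_samples))]

subset_at_start = pace(0, 100, len(difficulty))    # small, easy subset
subset_at_end = pace(100, 100, len(difficulty))    # full dataset
```

The scheduler's job, whether fixed or learned, is only to decide how fast the training set grows from the easy prefix to the full, teacher-ranked dataset.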
