PlumX Metrics

MFCEN: A lightweight multi-scale feature cooperative enhancement network for single-image super-resolution

Electronic Research Archive, ISSN: 2688-1594, Vol: 32, Issue: 10, Page: 5783-5803
2024
  • Citations: 0
  • Usage: 0
  • Captures: 0
  • Mentions: 1
  • Social Media: 0

Metrics Details

  • Mentions: 1
    • News Mentions: 1
      • News: 1

Most Recent News

Study Data from Hohai University Update Knowledge of Technology (Mfcen: a Lightweight Multi-scale Feature Cooperative Enhancement Network for Single-image Super-resolution)

2024 DEC 23 (NewsRx) -- By a News Reporter-Staff News Editor at Tech Daily News -- Fresh data on Technology are presented in a new

Article Description

In recent years, significant progress has been made in single-image super-resolution, driven by advances in deep convolutional neural networks (CNNs) and transformer-based architectures, which have led research in this field. However, these performance gains often come at the cost of a substantial increase in the number of parameters, limiting the practical application of super-resolution methods. Existing lightweight super-resolution methods focus primarily on single-scale feature extraction, so multi-scale features are missed, feature acquisition is incomplete, and reconstruction quality suffers. To address these challenges, this paper proposed a lightweight multi-scale feature cooperative enhancement network (MFCEN). The network consists of three parts: shallow feature extraction, deep feature extraction, and image reconstruction. In the deep feature extraction part, a novel integrated multi-level feature module was introduced that leverages the strong local perceptual capabilities of CNNs and the superior global information processing of transformers: depthwise separable convolutions extract local information, while a block-scale and global feature extraction module based on vision transformers (ViTs) captures larger-scale context. While extracting features at these three scales, a satisfiability attention mechanism paired with a feed-forward network that controls the information flow keeps the network lightweight. Compared with existing CNN-transformer hybrid super-resolution networks, MFCEN significantly reduced the number of parameters while maintaining performance, an improvement particularly evident at a scale factor of 3. Experiments demonstrated that the proposed model surpasses the reconstruction performance of the 498K-parameter SPAN model with a mere 488K parameters, and extensive experiments on commonly used image super-resolution datasets further validated the effectiveness of the network.
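The three-part layout described in the abstract (shallow feature extraction, hybrid local/global deep feature extraction, and image reconstruction) can be illustrated with a minimal PyTorch sketch. This is an illustrative approximation only, not the authors' MFCEN implementation: the module names, channel widths, block count, residual summation of the two branches, and the PixelShuffle-based reconstruction head are all assumptions made for the example.

```python
# Minimal sketch of a lightweight CNN + attention super-resolution network,
# loosely following the abstract's shallow / deep / reconstruction split.
# All names, widths, and layer counts are illustrative assumptions.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Local branch: depthwise conv followed by a pointwise conv."""
    def __init__(self, channels):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


class GlobalAttentionBlock(nn.Module):
    """Global branch: ViT-style multi-head self-attention over flattened pixels."""
    def __init__(self, channels, heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))   # (B, H*W, C)
        out, _ = self.attn(tokens, tokens, tokens)
        return out.transpose(1, 2).reshape(b, c, h, w)


class TinySRNet(nn.Module):
    """Shallow conv -> hybrid local/global blocks -> pixel-shuffle upsampling."""
    def __init__(self, channels=48, scale=3, blocks=4):
        super().__init__()
        self.shallow = nn.Conv2d(3, channels, 3, padding=1)
        self.deep = nn.ModuleList(
            nn.ModuleDict({"local": DepthwiseSeparableConv(channels),
                           "global": GlobalAttentionBlock(channels)})
            for _ in range(blocks)
        )
        self.reconstruct = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        feat = self.shallow(x)
        for block in self.deep:
            # Residual fusion of the local (CNN) and global (attention) branches.
            feat = feat + block["local"](feat) + block["global"](feat)
        return self.reconstruct(feat)


if __name__ == "__main__":
    lr = torch.rand(1, 3, 48, 48)      # low-resolution input patch
    sr = TinySRNet(scale=3)(lr)
    print(sr.shape)                    # torch.Size([1, 3, 144, 144])
```

Here the residual sum of the two branches merely stands in for the abstract's "cooperative enhancement" idea; the actual MFCEN module integrates features at three scales and adds an attention-controlled feed-forward path, which this sketch does not reproduce.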
