PlumX Metrics

LFSCA-UNet: liver fibrosis region segmentation network based on spatial and channel attention mechanisms

Journal of Image and Graphics, ISSN: 1006-8961, Vol: 26, Issue: 9, Page: 2121-2134
2021
  • Citations: 5
  • Usage: 0
  • Captures: 4
  • Mentions: 0
  • Social Media: 0

Metrics Details

Article Description

Objective: Liver fibrosis is a common manifestation of many chronic liver diseases. If not treated in time, it can progress to cirrhosis and even lead to liver cancer. Early diagnosis of liver fibrosis helps prevent the development of severe liver disease, and studies have shown that timely, correct treatment can reverse liver fibrosis and even cirrhosis. Accurate assessment of liver fibrosis is therefore essential for clinical treatment and prognosis. At present, liver fibrosis is diagnosed in the medical field by liver biopsy, a generally safe but invasive procedure whose complications are rare yet potentially lethal, so noninvasive, imaging-based diagnostic methods have attracted considerable interest.

Method: This paper proposes a network for the segmentation of liver fibrosis regions, called LFSCA-UNet (liver fibrosis region segmentation network based on spatial and channel attention mechanisms). It improves U-Net with two different attention mechanisms. U-Net is a convolutional neural network used for semantic image segmentation. Attention U-Net is an improved version of U-Net that adds an attention gate module to each skip connection of the original U-Net; these attention gates implement a spatial attention mechanism. LFSCA-UNet additionally adds a channel attention mechanism to each skip connection. In this study, efficient channel attention (ECA), a channel attention mechanism based on the squeeze-and-excitation network, was used to implement the added mechanism. The core idea of the squeeze-and-excitation network is to let the network automatically learn dependencies between channels: a conventional convolution layer is replaced by a convolution layer followed by a squeeze-and-excitation block, which consists of two parts, squeeze and excitation. The squeeze part uses global pooling to obtain a feature vector from the current convolutional layer's feature map, whereas the excitation part uses two fully connected layers of different sizes, the first reducing the dimension and the second restoring it; a weight for each channel is then obtained through sigmoid activation and multiplied with the original feature map, which serves as the input to the subsequent layer of the network. The efficient channel attention block is an improvement on the squeeze-and-excitation block: it removes the dimension-reduction step and uses a 1D convolution instead of the fully connected layers, achieving better performance with fewer parameters. The CT (computed tomography) images used in this study were obtained from 88 patients with liver fibrosis and provided by the Department of Liver Surgery, Renji Hospital, Shanghai Jiao Tong University School of Medicine. One Nvidia Tesla P100 graphics card with 16 GB of memory was used to train the networks, with Python 3.8.5 and PyTorch 1.7.1.

Result: Five experimental networks were compared across five indicators: Dice coefficient, Jaccard index, precision, recall (sensitivity), and specificity. LFSCA-UNet achieves the highest mean Dice coefficient (0.933 3), which is 0.539 6% higher than that of the original U-Net.

Conclusion: This paper verifies that combining spatial and channel attention mechanisms can effectively improve the segmentation of liver fibrosis regions. For the spatial attention module, using the channel attention module to optimize its inputs can increase network stability, while using it to optimize its outputs can improve the overall performance of the network.
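To make the attention blocks described above concrete, the following is a minimal PyTorch sketch (the framework used in the study) of a squeeze-and-excitation block, an efficient channel attention block, and an LFSCA-UNet-style skip connection that combines a spatial attention gate with ECA. This is an illustrative reconstruction based only on the description above, not the authors' released code; the module names, the reduction ratio, the 1D-convolution kernel size, and the exact placement of the ECA blocks around the attention gate are assumptions.

import torch
import torch.nn as nn


class SEBlock(nn.Module):
    # Squeeze-and-excitation: global pooling, two fully connected layers
    # (the first reduces the channel dimension, the second restores it),
    # then a sigmoid that yields one weight per channel.
    def __init__(self, channels, reduction=16):   # reduction ratio is an assumed value
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # squeeze: one value per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                               # reweight the original feature map


class ECABlock(nn.Module):
    # Efficient channel attention: no dimension reduction; a 1D convolution
    # over the pooled channel vector replaces the fully connected layers.
    def __init__(self, channels, k=3):             # kernel size k is an assumed value
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        b, c, _, _ = x.shape
        y = self.pool(x).view(b, 1, c)             # treat the channels as a 1D sequence
        w = self.sigmoid(self.conv(y)).view(b, c, 1, 1)
        return x * w


class AttentionGate(nn.Module):
    # Additive attention gate (spatial attention) as in Attention U-Net.
    # g and x are assumed to have the same spatial size here; the original
    # gate resamples them when they differ.
    def __init__(self, gate_ch, skip_ch, inter_ch):
        super().__init__()
        self.wg = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)   # gating signal (decoder)
        self.wx = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)   # skip features (encoder)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, kernel_size=1), nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g, x):
        a = self.psi(self.relu(self.wg(g) + self.wx(x)))        # spatial attention map
        return x * a                                            # suppress irrelevant regions


class LFSCASkip(nn.Module):
    # One LFSCA-UNet-style skip connection: channel attention (ECA) applied to
    # the input and to the output of the spatial attention gate. The exact
    # placement follows the conclusion above but is still an assumption.
    def __init__(self, gate_ch, skip_ch, inter_ch):
        super().__init__()
        self.eca_in = ECABlock(skip_ch)
        self.gate = AttentionGate(gate_ch, skip_ch, inter_ch)
        self.eca_out = ECABlock(skip_ch)

    def forward(self, g, x):
        return self.eca_out(self.gate(g, self.eca_in(x)))

In this sketch the ECA blocks keep the channel count unchanged, so such a module can be dropped into an existing U-Net skip connection without altering the decoder; for example, LFSCASkip(gate_ch=256, skip_ch=128, inter_ch=64) would gate 128-channel encoder features with a 256-channel decoder signal (hypothetical channel sizes for illustration).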

Bibliographic Details

Hongyang Chen; Jingyang Gao; Di Zhao; Ji Wu; Feng Xue; Jinjun Chen; Xianyue Quan; Xinming Li; Muyao Zhou; Bingbing Bai

Aerospace Information Research Institute, Chinese Academy of Sciences

Computer Science
