PlumX Metrics

HookNet: Multi-resolution convolutional neural networks for semantic segmentation in histopathology whole-slide images

Medical Image Analysis, ISSN: 1361-8415, Vol: 68, Page: 101890
2021
  • 123 Citations
  • 0 Usage
  • 160 Captures
  • 0 Mentions
  • 0 Social Media


Article Description

We propose HookNet, a semantic segmentation model for histopathology whole-slide images, which combines context and details via multiple branches of encoder-decoder convolutional neural networks. Concentric patches at multiple resolutions with different fields of view feed different branches of HookNet, and intermediate representations are combined via a hooking mechanism. We describe a framework to design and train HookNet for achieving high-resolution semantic segmentation and introduce constraints to guarantee pixel-wise alignment in feature maps during hooking. We show the advantages of using HookNet in two histopathology image segmentation tasks where tissue type prediction accuracy strongly depends on contextual information, namely (1) multi-class tissue segmentation in breast cancer and (2) segmentation of tertiary lymphoid structures and germinal centers in lung cancer. We show the superiority of HookNet when compared with single-resolution U-Net models working at different resolutions, as well as with a recently published multi-resolution model for histopathology image segmentation. We have made HookNet publicly available by releasing the source code (https://github.com/computationalpathologygroup/hooknet) as well as web-based applications (https://grand-challenge.org/algorithms/hooknet-breast/ and https://grand-challenge.org/algorithms/hooknet-lung/) based on the grand-challenge.org platform.
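The core idea of the hooking mechanism described above can be sketched in a few lines: because the context branch sees a larger field of view at lower resolution, its feature maps must be center-cropped so they align pixel-wise with the target branch before the two representations are concatenated. The sketch below is a hypothetical simplification using NumPy arrays; the function name `hook` and the shapes are illustrative assumptions, not the authors' actual implementation (see the released source code for that).

```python
import numpy as np

def hook(context_feat, target_feat):
    """Center-crop the context-branch feature map to the spatial size of
    the target-branch feature map, then concatenate along the channel axis.

    Both inputs have shape (channels, height, width). This is a simplified
    illustration of HookNet's hooking step, not the reference implementation.
    """
    _, ch, cw = context_feat.shape
    _, th, tw = target_feat.shape
    # Pixel-wise alignment requires integer crop offsets, i.e. the
    # spatial size difference between the two branches must be even.
    assert (ch - th) % 2 == 0 and (cw - tw) % 2 == 0, "misaligned feature maps"
    dy, dx = (ch - th) // 2, (cw - tw) // 2
    cropped = context_feat[:, dy:dy + th, dx:dx + tw]
    return np.concatenate([cropped, target_feat], axis=0)
```

For example, hooking a 32-channel 70x70 context feature map into a 32-channel 66x66 target feature map yields a 64-channel 66x66 tensor, which the target branch's decoder can then process further.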
