Infrared and visible image fusion via salient object extraction and low-light region enhancement
Infrared Physics & Technology, ISSN: 1350-4495, Vol: 124, Page: 104223
2022
- 17 Citations
- 5 Captures
Article Description
Infrared and visible image fusion aims to integrate complementary information from the input images into a single information-rich fused image, and it has been widely used to improve the performance of surveillance systems and high-level vision tasks. In this paper, we propose a novel infrared and visible image fusion method based on salient object extraction and low-light region enhancement. The proposed method accurately extracts salient objects from the source images while preserving the visual background. For infrared images, the most widely distributed pixels are used as seed points to measure intensity saliency. In addition, since the spatial distribution of objects in an image also affects visual attention, we design a central deviation model to measure spatial distribution saliency. The infrared salient object is extracted by combining the intensity and spatial distribution saliency. For visible images, unlike existing fusion methods, direction uniformity rather than gradient magnitude is used to extract salient objects. Finally, because fusion is often performed under low-light conditions or in complex environments, we enhance low-light regions to improve the visual quality of the visible image and then use the enhanced visible image as the background to reconstruct the fused image. Experimental results demonstrate that the proposed method outperforms nine state-of-the-art image fusion methods in a series of qualitative and quantitative evaluations and produces excellent visual results.
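The abstract describes a pipeline of saliency-based object extraction followed by compositing onto an enhanced visible background. The Python sketch below illustrates one way such a pipeline could be wired together; it is not the authors' implementation. The histogram-mode seed for intensity saliency, the centre-weighted map standing in for the central deviation model, the fixed mask threshold, and the gamma correction used as low-light enhancement are all simplifying assumptions, and the direction-uniformity measure for visible images is omitted.

```python
# Minimal illustrative sketch of the fusion pipeline outlined in the abstract.
# NOT the authors' method: all saliency measures, thresholds, and the gamma-based
# low-light enhancement below are assumptions made for illustration only.
import numpy as np

def intensity_saliency(ir):
    """Assumed proxy: distance of each pixel from the most frequent (seed) intensity."""
    hist, _ = np.histogram(ir, bins=256, range=(0, 256))
    seed = np.argmax(hist)  # most widely distributed intensity used as the seed point
    return np.abs(ir.astype(np.float32) - seed) / 255.0

def spatial_saliency(shape):
    """Assumed 'central deviation' model: pixels closer to the image centre weigh more."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    d = np.sqrt(((y - h / 2) / (h / 2)) ** 2 + ((x - w / 2) / (w / 2)) ** 2)
    return 1.0 - np.clip(d, 0.0, 1.0)

def fuse(ir, vis, thresh=0.5, gamma=0.6):
    """Extract an IR salient-object mask and paste it onto an enhanced visible background."""
    sal = intensity_saliency(ir) * spatial_saliency(ir.shape)
    mask = (sal > thresh).astype(np.float32)  # binary salient-object mask (assumed threshold)
    vis_enh = 255.0 * (vis.astype(np.float32) / 255.0) ** gamma  # assumed low-light enhancement
    fused = mask * ir.astype(np.float32) + (1.0 - mask) * vis_enh
    return np.clip(fused, 0, 255).astype(np.uint8)

# Usage (grayscale uint8 source images of equal size):
# fused = fuse(ir_image, visible_image)
```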
Bibliographic Details
- http://www.sciencedirect.com/science/article/pii/S1350449522002043
- http://dx.doi.org/10.1016/j.infrared.2022.104223
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=85131083337&origin=inward
- https://linkinghub.elsevier.com/retrieve/pii/S1350449522002043
Elsevier BV