Fusion of Depth and Thermal Imaging for People Detection
Journal of Telecommunications and Information Technology, ISSN: 1899-8852, Vol: 4, Issue: 4, Page: 53-60
2021
- Citations: 5
- Captures: 1
Article Description
The methodology presented in this paper addresses the automatic detection of people using two imaging modalities that do not rely on the visible light spectrum: thermal and depth images. Various scenarios are considered using deep neural networks that extend Faster R-CNN models. Apart from detecting people independently in depth and thermal images, we propose two data fusion methods. The first is an early fusion method with a 2-channel compound input; its performance surpassed that of all other methods tested. However, this approach requires that the model be trained on a dataset containing both types of spatially and temporally synchronized imaging sources. If such a training setup cannot be arranged, or if the available dataset is not sufficiently large, we recommend the late fusion scenario, i.e. the other approach explored in this paper, since late fusion models can be trained with single-source data. For fusing the depth- and thermal-based detections, we introduce the dual-NMS method, whose results are better than those achieved by the common NMS.
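To illustrate the two fusion strategies described above, the sketch below shows (a) how a co-registered depth and thermal frame pair could be stacked into a 2-channel early-fusion input, and (b) a baseline late-fusion step that pools detections from two single-modality detectors and applies ordinary greedy NMS. This is a hypothetical sketch, not the authors' implementation: the dual-NMS method proposed in the paper is not reproduced here, and the function names, normalization, and thresholds are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's code): early-fusion input
# construction and a plain-NMS baseline for late fusion of two detectors.
import numpy as np

def make_early_fusion_input(depth, thermal):
    """Stack spatially aligned depth and thermal frames into one
    2-channel array, mimicking a 2-channel compound model input."""
    assert depth.shape == thermal.shape, "frames must be co-registered"
    # Min-max normalize each modality to [0, 1] before stacking
    # (an illustrative choice, not necessarily the paper's preprocessing).
    d = (depth - depth.min()) / max(np.ptp(depth), 1e-6)
    t = (thermal - thermal.min()) / max(np.ptp(thermal), 1e-6)
    return np.stack([d, t], axis=0)  # shape: (2, H, W)

def iou(box, boxes):
    """IoU of one box against an array of boxes; format [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def late_fusion_common_nms(boxes_depth, scores_depth,
                           boxes_thermal, scores_thermal, iou_thr=0.5):
    """Baseline late fusion: pool detections from the depth-only and
    thermal-only models, then run ordinary greedy NMS on the union."""
    boxes = np.vstack([boxes_depth, boxes_thermal]).astype(float)
    scores = np.concatenate([scores_depth, scores_thermal])
    order = np.argsort(-scores)  # highest-scoring detections first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        # Suppress remaining boxes that overlap the kept box too much.
        order = rest[iou(boxes[i], boxes[rest]) < iou_thr]
    return boxes[keep], scores[keep]
```

In this sketch, the pooled-detections step uses standard greedy NMS purely as a baseline; the dual-NMS fusion reported in the paper would replace that step and is described in the full text.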