Semantic segmentation of large-scale point cloud scenes via dual neighborhood feature and global spatial-aware
International Journal of Applied Earth Observation and Geoinformation, ISSN: 1569-8432, Vol: 129, Page: 103862
2024
- 3 Citations
- 7 Captures
Article Description
As a core task in 3D scene information extraction, point cloud semantic segmentation is crucial for 3D scene understanding and environmental perception. While extracting local geometric structural features from point clouds, existing research often overlooks the long-range dependencies present in the scene, making it difficult to fully uncover the long-range contextual features hidden within point clouds. To address this, we propose a segmentation algorithm (DG-Net) that integrates dual neighborhood features with global spatial awareness. First, a local structure information encoding module learns local geometric shapes by encoding spatial position and directional features, supplementing structural information. Next, a dual neighborhood feature complementary module merges the geometric structural and semantic features within local neighborhoods, learning local dependencies and capturing distinguishable local contextual features. Finally, these features are relayed to a global spatial-aware module equipped with a gated unit, which dynamically adjusts the weights of features from different stages, effectively modeling long-range dependencies between local structures and extracting fine-grained long-range contextual features. We conducted experiments on benchmark point cloud scene datasets, and both quantitative and qualitative results demonstrate that our algorithm accurately identifies small-scale objects with complex geometric structures, surpassing other mainstream networks in segmentation performance. The mIoU scores on the S3DIS, Toronto3D, and SensatUrban datasets are 71.9%, 82.1%, and 59.8%, respectively.
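The gated unit in the global spatial-aware module can be pictured as a learned, per-channel convex blend of features from different stages. The following NumPy sketch is an assumption about that mechanism, not the paper's actual code: the sigmoid gating form, the feature shapes, and all variable names (`f_local`, `f_global`, `W`, `b`) are hypothetical illustrations of how such dynamic weighting might work.

```python
# Hypothetical sketch of a gated fusion unit: a gate computed from both
# inputs dynamically weights local-stage vs. global-stage features.
# All shapes and the sigmoid gating form are assumptions for illustration.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(f_local, f_global, W, b):
    """Fuse per-point features of shape (N, C) with a learned gate in (0, 1)."""
    # Gate depends on both feature streams: (N, 2C) @ (2C, C) -> (N, C)
    gate = sigmoid(np.concatenate([f_local, f_global], axis=-1) @ W + b)
    # Elementwise convex combination of the two stages.
    return gate * f_local + (1.0 - gate) * f_global

rng = np.random.default_rng(0)
N, C = 4, 8                       # 4 points, 8 channels (toy sizes)
f_local = rng.standard_normal((N, C))
f_global = rng.standard_normal((N, C))
W = rng.standard_normal((2 * C, C)) * 0.1
b = np.zeros(C)

fused = gated_fusion(f_local, f_global, W, b)
print(fused.shape)  # (4, 8)
```

Because the gate lies in (0, 1), each fused channel stays between the corresponding local and global feature values, so neither stage can be entirely discarded; the network instead learns how much each stage should contribute per point and per channel.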
Bibliographic Details
- http://www.sciencedirect.com/science/article/pii/S1569843224002164
- http://dx.doi.org/10.1016/j.jag.2024.103862
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=85191330161&origin=inward
- https://linkinghub.elsevier.com/retrieve/pii/S1569843224002164
- https://dx.doi.org/10.1016/j.jag.2024.103862
Elsevier BV