Enhancing Human Key Point Identification: A Comparative Study of High-Resolution VICON Dataset and COCO Dataset Using BPNET
2023
Metrics Details
- Usage: 174
- Downloads: 144
- Abstract Views: 30
Thesis / Dissertation Description
Accurately identifying human key points is crucial for various applications, including activity recognition, pose estimation, and gait analysis. This study presents a high-resolution dataset created using the VICON motion capture system and three differently oriented 2D cameras, which can be used to train neural networks to estimate a person's 2D key joint positions from 2D images or videos. The participants in the study were 25 healthy adults (17 males and 8 females) performing normal gait movements for about 2 to 3 seconds. The VICON system captured 3D ground-truth data, while the three 2D cameras collected images from different perspectives (0°, 45°, and 90°). The dataset was used to train the Body Pose Network (BPNET), a popular neural network model developed by NVIDIA TAO. For comparison, another BPNET model was trained on the COCO 2017 (Common Objects in Context) dataset, a state-of-the-art dataset with more than 118,000 annotated images. Results demonstrate that the proposed dataset achieved significantly higher accuracy than the COCO 2017 dataset, despite containing only one-fourth as many images. This reduction in data size also improved computational efficiency during model training. Moreover, the proposed dataset's unique focus on gait, and its precise prediction of key joint positions during normal gait movements, set it apart from other existing datasets. Potential applications of this study include person identification based on gait features, non-invasive detection of player concussions through temporal analysis in sports activities, and identification of pathologic gait patterns. The proposed dataset shows promise for further accuracy enhancements with the incorporation of additional data.
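The thesis description does not spell out how the VICON 3D ground truth was converted into 2D keypoint labels for the three camera views. The sketch below shows one plausible way to do this with a standard pinhole camera model and COCO-style keypoint annotations; the camera intrinsics, extrinsics, joint values, and field names are illustrative assumptions, not values taken from the study.

```python
# Minimal sketch (assumed pipeline, not the thesis' actual code): project
# VICON 3D joint positions into a 2D camera image and pack them in the
# COCO keypoint layout used to train pose networks such as BPNET.
import numpy as np

def project_points(points_3d, R, t, K):
    """Project Nx3 world-frame points into pixel coordinates (pinhole model)."""
    cam = points_3d @ R.T + t          # world frame -> camera frame
    uv = cam @ K.T                     # apply camera intrinsics
    return uv[:, :2] / uv[:, 2:3]      # perspective divide -> Nx2 pixels

# Hypothetical VICON joint centres for one frame (metres, world frame).
joints_3d = np.array([
    [0.10, 0.95, 2.50],   # e.g. right hip
    [0.12, 0.55, 2.48],   # e.g. right knee
    [0.13, 0.10, 2.46],   # e.g. right ankle
])

# Assumed intrinsics for a 1920x1080 camera and an identity extrinsic pose.
K = np.array([[1400.0, 0.0, 960.0],
              [0.0, 1400.0, 540.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.zeros(3)

keypoints_2d = project_points(joints_3d, R, t, K)

# Flatten into the COCO keypoint layout [x1, y1, v1, x2, y2, v2, ...],
# with visibility flag 2 ("labelled and visible") for every joint.
annotation = {
    "keypoints": [v for (x, y) in keypoints_2d for v in (float(x), float(y), 2)],
    "num_keypoints": len(joints_3d),
}
print(annotation)
```

Repeating this projection for each of the three camera orientations (0°, 45°, and 90°) would yield per-view 2D annotations aligned with the same 3D ground truth, which is consistent with how the dataset described above could feed a 2D pose estimator.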