Analysis and Impact of Training Set Size in Cross-Subject Human Activity Recognition
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), ISSN 1611-3349, Vol. 14469 LNCS, pp. 391-405
2024
- 1 Citation
- 1 Capture
Conference Paper Description
The ubiquity of consumer devices with sensing and computational capabilities, such as smartphones and smartwatches, has increased interest in their use for human activity recognition in healthcare monitoring applications, among others. When developing such a system, researchers rely on input data to train recognition models. In the absence of openly available datasets that meet the model requirements, researchers face a difficult and time-consuming decision about which sensing device to use and how much data to collect. In this paper, we explore the effect of the amount of training data on the performance (i.e., classification accuracy and activity-wise F1-scores) of a CNN model by performing an incremental cross-subject evaluation using data collected from a consumer smartphone and smartwatch. Systematically studying the incremental inclusion of subject data from a set of 22 training subjects, the results show that the model's performance initially improves significantly with each addition, yet the improvement diminishes as the number of included subjects grows. We compare the performance of models based on smartphone and smartwatch data. The latter is significantly better with smaller amounts of training data, while the former outperforms with larger amounts. In addition, gait-related activities show significantly better results with smartphone-collected data, while non-gait-related activities, such as standing up or sitting down, are better recognized with smartwatch-collected data.
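The incremental cross-subject protocol described above can be sketched as follows. This is a hypothetical illustration, not the paper's code: a nearest-centroid classifier on synthetic data stands in for the CNN on sensor data, and all names, sizes, and values are assumptions chosen only to show the evaluation loop (train on the first k of 22 subjects, test on held-out subjects, record performance per k).

```python
# Hypothetical sketch of an incremental cross-subject evaluation:
# train on data from the first k subjects (k = 1..22), evaluate on
# held-out test subjects, and record accuracy at each k.
# A nearest-centroid classifier replaces the CNN so the sketch stays
# self-contained; the data is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
N_TRAIN_SUBJECTS, N_TEST_SUBJECTS, N_CLASSES, DIM = 22, 5, 4, 16

def make_subject(n_samples=40):
    """Synthetic (features, labels) for one subject, all classes present."""
    y = rng.permutation(np.repeat(np.arange(N_CLASSES), n_samples // N_CLASSES))
    # Class-dependent mean plus noise mimics inter-subject variation.
    x = y[:, None] + rng.normal(scale=2.0, size=(n_samples, DIM))
    return x, y

train_subjects = [make_subject() for _ in range(N_TRAIN_SUBJECTS)]
test_x, test_y = map(np.concatenate,
                     zip(*[make_subject() for _ in range(N_TEST_SUBJECTS)]))

def fit_centroids(x, y):
    """Per-class mean feature vectors (the 'trained model')."""
    return np.stack([x[y == c].mean(axis=0) for c in range(N_CLASSES)])

accuracies = []
for k in range(1, N_TRAIN_SUBJECTS + 1):
    # Pool data from the first k training subjects, never from test subjects.
    x, y = map(np.concatenate, zip(*train_subjects[:k]))
    centroids = fit_centroids(x, y)
    # Predict by nearest centroid on the fixed held-out test set.
    pred = np.argmin(((test_x[:, None, :] - centroids) ** 2).sum(-1), axis=1)
    accuracies.append((pred == test_y).mean())

print([round(a, 3) for a in accuracies])
```

Plotting `accuracies` against k would reproduce the kind of learning curve the paper analyzes, where gains from each added subject shrink as k grows.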
Bibliographic Details
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=85178597153&origin=inward
- http://dx.doi.org/10.1007/978-3-031-49018-7_28
- https://link.springer.com/10.1007/978-3-031-49018-7_28
- https://dx.doi.org/10.1007/978-3-031-49018-7_28
- https://link.springer.com/chapter/10.1007/978-3-031-49018-7_28
Springer Science and Business Media LLC