Real-time multimodal interaction in virtual reality - a case study with a large virtual interface
Multimedia Tools and Applications, ISSN: 1573-7721, Vol: 82, Issue: 16, Page: 25427-25448
2023
- 11 Citations
- 22 Captures
Article Description
VR and multimodal interaction technologies offer creative virtual alternatives for manipulating large data sets in a virtual environment. This work presents the design, implementation, and evaluation of a real-time multimodal interaction framework that enables users to navigate, select, and move data elements. The novel multimodal fusion method recognizes freehand gestures, voice commands, and a head-gaze pointer in real time and fuses them into meaningful actions for interacting with the virtual environment. We worked with imagery analysts who were defense and security experts to design and test the interface and interaction modalities. The framework was evaluated through a case study of photo-management tasks based on a real-world scenario: users select photos in a large virtual interface and move them to bins on the left and right sides of the main view. The evaluation focuses on performance, task completion time, and user experience across several combinations of input modalities. The results show that it is important to make multiple interaction modalities available to users, and interaction design implications are drawn from the evaluation.
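The paper does not publish its fusion algorithm, but the idea described above (gaze supplies the referent, while a concurrent gesture or voice command supplies the verb) can be illustrated with a minimal sketch. All event types, field names, and the precedence rule (voice over gesture) below are assumptions for illustration, not the authors' actual design:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical per-frame input events; the paper's real data structures differ.
@dataclass
class GazeEvent:
    target: Optional[str]   # id of the photo under the head-gaze pointer, if any

@dataclass
class GestureEvent:
    kind: str               # e.g. "pinch" (select), "swipe_left"/"swipe_right" (move)

@dataclass
class VoiceEvent:
    command: str            # e.g. "select", "move left", "move right"

def fuse(gaze: GazeEvent,
         gesture: Optional[GestureEvent],
         voice: Optional[VoiceEvent]) -> Optional[dict]:
    """Fuse events that arrive in the same time window into one action.

    Head gaze resolves *what* is being acted on; gesture or voice resolves
    *what to do* with it. Voice wins when both fire together (an assumed
    tie-breaking policy, not one stated in the paper).
    """
    if gaze.target is None:
        return None  # nothing is pointed at, so no action can be grounded
    if voice is not None:
        if voice.command == "select":
            return {"action": "select", "target": gaze.target}
        if voice.command in ("move left", "move right"):
            return {"action": "move", "target": gaze.target,
                    "bin": voice.command.split()[1]}
    if gesture is not None:
        if gesture.kind == "pinch":
            return {"action": "select", "target": gaze.target}
        if gesture.kind in ("swipe_left", "swipe_right"):
            return {"action": "move", "target": gaze.target,
                    "bin": gesture.kind.split("_")[1]}
    return None  # no recognized verb in this window
```

For example, a pinch while gazing at a photo selects it, and saying "move left" while gazing at it moves it to the left-hand bin, matching the photo-management task described above.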
Bibliographic Details
Springer Science and Business Media LLC