Multimodal zero-shot learning for tactile texture recognition
Robotics and Autonomous Systems, ISSN: 0921-8890, Vol. 176, Article 104688
2024
- 8 Citations
- 15 Captures
Article Description
Tactile sensing plays an irreplaceable role in robotic material recognition. It enables robots to distinguish material properties such as local geometry and texture, which is especially important for materials like textiles. However, most tactile recognition methods can only classify known materials that have been touched and trained on with tactile data; they cannot classify unknown materials for which no tactile training data exist. To solve this problem, we propose a tactile Zero-Shot Learning framework that recognises materials the first time they are touched, using their visual and semantic information, without requiring tactile training samples. The biggest challenge in tactile Zero-Shot Learning is recognising disjoint classes between training and test materials, i.e., test materials that are not among the training ones. To bridge this gap, the visual modality, which provides tactile cues from sight, and semantic attributes, which give high-level characteristics, are combined and act as a link to expose the model to these disjoint classes. Specifically, a generative model is learnt to synthesise tactile features from the corresponding visual images and semantic embeddings, and a classifier is then trained on the synthesised tactile features for zero-shot recognition. Extensive experiments demonstrate that our proposed multimodal generative model achieves a recognition accuracy of 83.06% in classifying materials that were never touched before. The robotic experiment demo and the FabricVST dataset are available at https://sites.google.com/view/multimodalzsl.
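The sketch below illustrates, in PyTorch, the general pipeline the abstract describes: a conditional generator synthesises tactile features from visual and semantic embeddings, and a classifier is then trained only on those synthesised features so it can cover classes with no real tactile data. This is not the authors' code; the architecture, feature dimensions, and training loop are illustrative assumptions, and the real pipeline would condition on the FabricVST visual images and attribute vectors rather than random placeholders.

```python
# Minimal sketch (not the authors' implementation): conditional generation of
# tactile features from visual + semantic embeddings, then classifier training
# on the synthetic features. All dimensions and hyperparameters are assumed.
import torch
import torch.nn as nn

VIS_DIM, SEM_DIM, NOISE_DIM, TAC_DIM, NUM_CLASSES = 512, 300, 64, 256, 20

class TactileGenerator(nn.Module):
    """Maps a visual embedding, a semantic embedding, and a noise vector
    to a synthetic tactile feature."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(VIS_DIM + SEM_DIM + NOISE_DIM, 512),
            nn.ReLU(),
            nn.Linear(512, TAC_DIM),
        )

    def forward(self, vis, sem, noise):
        return self.net(torch.cat([vis, sem, noise], dim=-1))

generator = TactileGenerator()  # assumed already trained on seen (touched) classes
classifier = nn.Sequential(
    nn.Linear(TAC_DIM, 128), nn.ReLU(), nn.Linear(128, NUM_CLASSES)
)
optimiser = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder conditioning data for unseen classes (stand-ins for real
# visual images and semantic attribute vectors).
vis = torch.randn(32, VIS_DIM)
sem = torch.randn(32, SEM_DIM)
labels = torch.randint(0, NUM_CLASSES, (32,))

for _ in range(10):  # toy training loop on synthesised tactile features
    noise = torch.randn(32, NOISE_DIM)
    with torch.no_grad():
        fake_tactile = generator(vis, sem, noise)
    logits = classifier(fake_tactile)
    loss = loss_fn(logits, labels)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```

Because the classifier never needs real tactile samples of the unseen classes, it can be prepared before the robot ever touches those materials, which is the core idea behind the zero-shot setting described above.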
Bibliographic Details
- http://www.sciencedirect.com/science/article/pii/S092188902400071X
- http://dx.doi.org/10.1016/j.robot.2024.104688
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=85189662549&origin=inward
- https://linkinghub.elsevier.com/retrieve/pii/S092188902400071X
- https://dx.doi.org/10.1016/j.robot.2024.104688
Elsevier BV