CPQNet: Contact Points Quality Network for Robotic Grasping
IEEE International Conference on Intelligent Robots and Systems, ISSN: 2153-0866, Vol: 2022-October, Pages: 5981-5986
2022
Metrics Details
- Captures: 11
- Readers: 11
Conference Paper Description
In typical data-driven grasping methods, a parallel-jaw grasp is parameterized by the gripper center, the rotation angle, and the gripper opening width, so that grasp quality and pose can be predicted at every pixel. In contrast, a contact-points-based representation describes a grasp with only two contact points, which fuses more naturally with tactile sensing. In this work, we propose a method that uses the contact-points-based representation to obtain a robust grasp from a single contact points quality map generated by a neural network, which significantly reduces network complexity and the number of parameters. We provide a synthetic dataset of depth images and contact points quality maps generated from thousands of 3D models, together with the data-generation method, which can also be used for contact-points-based multi-finger grasps. Experiments show that the contact points quality network can plan a feasible grasp in 0.15 seconds, achieving a grasping success rate of 94% on unknown household objects and 95% on deformable objects. The dataset and reference code can be found on the project website: https://sites.google.com/view/cpqnet.
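To make the two parameterizations mentioned in the description concrete, the sketch below shows how a pair of contact points can be converted into the conventional parallel-jaw parameters (center, rotation angle, opening width), and how a contact pair might be picked from a per-pixel quality map. The function names, the simple pairing rule, and the pixel-space units are illustrative assumptions, not the released CPQNet code.

```python
import numpy as np

def contact_points_to_grasp(p1, p2):
    """Map two contact points (pixel coordinates) to a parallel-jaw grasp.

    Returns (center, angle, width): the midpoint of the contacts, the
    gripper rotation about the image normal in radians, and the required
    opening width in pixels.
    """
    p1 = np.asarray(p1, dtype=float)
    p2 = np.asarray(p2, dtype=float)
    center = (p1 + p2) / 2.0                     # gripper center
    delta = p2 - p1
    angle = float(np.arctan2(delta[1], delta[0]))  # gripper closing direction
    width = float(np.linalg.norm(delta))           # gripper opening width
    return center, angle, width

def best_grasp_from_quality_map(quality_map, min_separation=5.0):
    """Pick a grasp from a per-pixel contact points quality map.

    Toy selection rule (an assumption, the paper's pairing strategy may
    differ): take the two highest-scoring pixels that are at least
    `min_separation` pixels apart and treat them as the contact pair.
    """
    order = np.argsort(quality_map, axis=None)[::-1]          # pixels by score, descending
    coords = np.column_stack(np.unravel_index(order, quality_map.shape)).astype(float)
    p1 = coords[0]
    for p2 in coords[1:]:
        if np.linalg.norm(p2 - p1) >= min_separation:
            return contact_points_to_grasp(p1, p2)
    return None

if __name__ == "__main__":
    # Toy quality map with two high-scoring pixels standing in for the
    # network output; real maps come from a depth image passed through CPQNet.
    q = np.zeros((48, 48))
    q[10, 12] = 0.9
    q[30, 36] = 0.8
    print(best_grasp_from_quality_map(q))
```

Because only the two contact points are predicted, the opening width and rotation fall out of the geometry rather than being separate network outputs, which is one way the representation keeps the network small.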
Bibliographic Details
Institute of Electrical and Electronics Engineers (IEEE)