TY - GEN
T1 - CMG-Net
T2 - 2023 IEEE International Conference on Robotics and Automation, ICRA 2023
AU - Wei, Mingze
AU - Huang, Yaomin
AU - Xu, Zhiyuan
AU - Liu, Ning
AU - Che, Zhengping
AU - Zhang, Xinyu
AU - Shen, Chaomin
AU - Feng, Feifei
AU - Shan, Chun
AU - Tang, Jian
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - In this paper, we propose a novel representation for grasping using contacts between multi-finger robotic hands and objects to be manipulated. This representation significantly reduces the prediction dimensions and accelerates the learning process. We present an effective end-to-end network, CMG-Net, for grasping unknown objects in a cluttered environment by efficiently predicting multi-finger grasp poses and hand configurations from a single-shot point cloud. Moreover, we create a synthetic grasp dataset that consists of five thousand cluttered scenes, 80 object categories, and 20 million annotations. We perform a comprehensive empirical study and demonstrate the effectiveness of our grasping representation and CMG-Net. Our work significantly outperforms the state-of-the-art for three-finger robotic hands. We also demonstrate that the model trained using synthetic data performs very well for real robots.
AB - In this paper, we propose a novel representation for grasping using contacts between multi-finger robotic hands and objects to be manipulated. This representation significantly reduces the prediction dimensions and accelerates the learning process. We present an effective end-to-end network, CMG-Net, for grasping unknown objects in a cluttered environment by efficiently predicting multi-finger grasp poses and hand configurations from a single-shot point cloud. Moreover, we create a synthetic grasp dataset that consists of five thousand cluttered scenes, 80 object categories, and 20 million annotations. We perform a comprehensive empirical study and demonstrate the effectiveness of our grasping representation and CMG-Net. Our work significantly outperforms the state-of-the-art for three-finger robotic hands. We also demonstrate that the model trained using synthetic data performs very well for real robots.
UR - https://www.scopus.com/pages/publications/85168694476
U2 - 10.1109/ICRA48891.2023.10161481
DO - 10.1109/ICRA48891.2023.10161481
M3 - Conference contribution
AN - SCOPUS:85168694476
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 9125
EP - 9131
BT - Proceedings - ICRA 2023
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 29 May 2023 through 2 June 2023
ER -