Robotic Grasp Pose Detection Using Deep Learning
Abstract
Recent advancements in deep learning have accelerated the capabilities of robotic systems in visual perception, object manipulation, automated navigation, and human-robot collaboration. This paper proposes the use of a transfer learning technique with deep convolutional neural networks to learn to visually identify grasping configurations for a parallel-plate gripper used to grasp various household objects. Red-Green-Blue-Depth (RGB-D) data from the Cornell Grasp Dataset is used to train the network model with an end-to-end learning method. With this method, we achieve a grasping configuration prediction accuracy of 93.91%.
RAS ID
28394
Document Type
Conference Proceeding
Date of Publication
2018
School
School of Engineering
Copyright
Subscription content
Publisher
Institute of Electrical and Electronics Engineers Inc.
Comments
Caldera, S., Rassau, A., & Chai, D. (2018, November). Robotic Grasp Pose Detection Using Deep Learning. In 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV) (pp. 1966-1972). IEEE.