6-PACK: Category-level 6D Pose Tracker with Anchor-Based Keypoints



Authors Chen Wang, Roberto Martín-Martín, Danfei Xu, Jun Lv, Cewu Lu, Yuke Zhu, Li Fei-Fei, Silvio Savarese
Journal/Conference Name 2020 IEEE International Conference on Robotics and Automation (ICRA)
Paper Category
Paper Abstract We present 6-PACK, a deep learning approach to category-level 6D object pose tracking on RGB-D data. Our method tracks in real time novel object instances of known object categories such as bowls, laptops, and mugs. 6-PACK learns to compactly represent an object by a handful of 3D keypoints, based on which the interframe motion of an object instance can be estimated through keypoint matching. These keypoints are learned end-to-end without manual supervision in order to be most effective for tracking. Our experiments show that our method substantially outperforms existing methods on the NOCS category-level 6D pose estimation benchmark and supports a physical robot to perform simple vision-based closed-loop manipulation tasks. Our code and video are available at https://sites.google.com/view/6packtracking.
Date of publication 2019
Code Programming Language Multiple
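The abstract notes that interframe motion is estimated from matched 3D keypoints. A standard way to recover a rigid transform from such correspondences is the Kabsch/Umeyama least-squares alignment; the sketch below (plain NumPy, not the paper's implementation, with a hypothetical function name) illustrates the idea on synthetic keypoints.

```python
import numpy as np

def rigid_transform_from_keypoints(src, dst):
    """Estimate (R, t) such that dst ≈ src @ R.T + t, using the
    Kabsch/Umeyama SVD-based least-squares alignment of matched 3D keypoints.
    Illustrative sketch only, not the 6-PACK implementation."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Toy check: recover a known rotation about z plus a translation.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.3])
kps = np.random.default_rng(0).normal(size=(8, 3))  # 8 matched 3D keypoints
kps_next = kps @ R_true.T + t_true                  # keypoints in next frame
R_est, t_est = rigid_transform_from_keypoints(kps, kps_next)
```

With noise-free correspondences as above, the recovered `R_est` and `t_est` match the ground-truth motion up to numerical precision; in a real tracker the keypoint predictions are noisy and the same closed-form solve acts as a least-squares fit.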

Copyright Researcher 2022