Matterport3D: Learning from RGB-D Data in Indoor Environments

Authors Andy Zeng, Thomas Funkhouser, Manolis Savva, Shuran Song, Matthias Nießner, Angela Dai, Yinda Zhang, Maciej Halber, Angel Chang
Journal/Conference Name Proceedings - 2017 International Conference on 3D Vision, 3DV 2017
Paper Category
Paper Abstract Access to large, diverse RGB-D datasets is critical for training RGB-D scene understanding algorithms. However, existing datasets still cover only a limited number of views or a restricted scale of spaces. In this paper, we introduce Matterport3D, a large-scale RGB-D dataset containing 10,800 panoramic views from 194,400 RGB-D images of 90 building-scale scenes. Annotations are provided with surface reconstructions, camera poses, and 2D and 3D semantic segmentations. The precise global alignment and comprehensive, diverse panoramic set of views over entire buildings enable a variety of supervised and self-supervised computer vision tasks, including keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and region classification.
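The abstract's headline numbers are internally consistent, which a quick sanity check makes explicit. The sketch below (an illustration, not code from the paper) derives the implied per-panorama and per-building counts; the interpretation that each panorama is stitched from a fixed set of RGB-D captures is an assumption based on the quoted totals.

```python
# Dataset statistics quoted in the Matterport3D abstract.
num_panoramas = 10_800
num_rgbd_images = 194_400
num_buildings = 90

# Implied counts (assumption: captures are evenly distributed).
images_per_panorama = num_rgbd_images // num_panoramas    # 18 RGB-D images per panorama
panoramas_per_building = num_panoramas // num_buildings   # 120 panoramas per building

print(images_per_panorama, panoramas_per_building)  # prints: 18 120
```

So each panoramic view corresponds to 18 RGB-D images, and each of the 90 building-scale scenes contributes 120 panoramas on average.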
Date of publication 2017
Code Programming Language C++
Comment

Copyright Researcher 2022