Deep Semantic Matching with Foreground Detection and Cycle-Consistency

Authors Yun-Chun Chen, Po-Hsiang Huang, Li-Yu Yu, Jia-Bin Huang, Ming-Hsuan Yang, and Yen-Yu Lin
Paper Abstract Establishing dense semantic correspondences between object instances remains a challenging problem due to background clutter, significant scale and pose differences, and large intra-class variations. In this paper, we present an end-to-end trainable network for learning semantic correspondences using only matching image pairs, without manual keypoint correspondence annotations. To facilitate network training with this weaker form of supervision, we 1) explicitly estimate the foreground regions to suppress the effect of background clutter and 2) develop cycle-consistent losses to enforce the predicted transformations across multiple images to be geometrically plausible and consistent. We train the proposed model using the PF-PASCAL dataset and evaluate the performance on the PF-PASCAL, PF-WILLOW, and TSS datasets. Extensive experimental results show that the proposed approach achieves favorable performance compared to the state-of-the-art.
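The cycle-consistency idea in the abstract can be illustrated with a small sketch: if the transformation predicted from image A to image B is composed with the one predicted from B back to A, the round trip should be close to the identity. The snippet below is not the authors' implementation (which learns transformations with a CNN); it is a minimal NumPy sketch using 2x3 affine matrices and a hypothetical `cycle_consistency_loss` helper to show what such a loss measures.

```python
import numpy as np

def compose_affine(t1, t2):
    """Compose two 2x3 affine transforms: apply t1 first, then t2."""
    a1 = np.vstack([t1, [0.0, 0.0, 1.0]])  # lift to 3x3 homogeneous form
    a2 = np.vstack([t2, [0.0, 0.0, 1.0]])
    return (a2 @ a1)[:2]                   # drop the homogeneous row

def cycle_consistency_loss(t_ab, t_ba, points):
    """Mean distance between points and their image under the
    round trip A -> B -> A; zero iff the cycle is the identity
    on the given points."""
    t_cycle = compose_affine(t_ab, t_ba)
    homog = np.hstack([points, np.ones((len(points), 1))])
    mapped = homog @ t_cycle.T
    return float(np.mean(np.linalg.norm(mapped - points, axis=1)))

# Toy example: a translation and its exact inverse give zero loss.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
t_ab = np.array([[1.0, 0.0, 0.5], [0.0, 1.0, -0.2]])  # shift (+0.5, -0.2)
t_ba = np.array([[1.0, 0.0, -0.5], [0.0, 1.0, 0.2]])  # exact inverse shift
print(cycle_consistency_loss(t_ab, t_ba, pts))  # → 0.0
```

In training, such a penalty would be added to the matching objective so that mutually inconsistent forward and backward transformations are discouraged.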
Date of publication 2018
Code Programming Language Python

Copyright Researcher 2022