SCNet: Learning Semantic Correspondence

Abstract: This paper addresses the problem of establishing semantic correspondences between images depicting different instances of the same object or scene category. Previous approaches focus on either combining a spatial regularizer with hand-crafted features, or learning a correspondence model for appearance only. We propose instead a convolutional neural network architecture, called SCNet, for learning a geometrically plausible model for semantic correspondence. SCNet uses region proposals as matching primitives, and explicitly incorporates geometric consistency in its loss function. It is trained on image pairs obtained from the PASCAL VOC 2007 keypoint dataset, and a comparative evaluation on several standard benchmarks demonstrates that the proposed approach substantially outperforms both recent deep learning architectures and previous methods based on hand-crafted features.
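As a rough, hypothetical illustration of the ingredients named in the abstract (region proposals as matching primitives, appearance features, and a geometric-consistency term), the Python sketch below scores candidate proposal matches by combining cosine similarity of proposal descriptors with a simple Hough-style vote over match offsets. The function name, coordinate conventions, and binning scheme are assumptions made for this example only; this is not the SCNet architecture, its training loss, or the authors' code.

```python
# Toy sketch (not the authors' code): score candidate matches between
# region proposals of two images by combining an appearance term with a
# simple geometric-consistency vote over translation offsets.
import numpy as np

def match_scores(feats_a, feats_b, centers_a, centers_b, num_bins=8):
    """feats_*: (N, D) L2-normalized proposal descriptors.
    centers_*: (N, 2) proposal centers in normalized [0, 1] image coordinates."""
    # Appearance similarity: cosine similarity between all proposal pairs.
    app = feats_a @ feats_b.T                                  # (Na, Nb)

    # Offset of every candidate match, quantized into a coarse 2-D histogram.
    offsets = centers_b[None, :, :] - centers_a[:, None, :]    # (Na, Nb, 2), in [-1, 1]
    bins = np.clip(((offsets + 1.0) / 2.0 * num_bins).astype(int), 0, num_bins - 1)
    bin_ids = bins[..., 0] * num_bins + bins[..., 1]           # (Na, Nb)

    # Hough-style vote: offsets supported by many appearance-similar matches
    # are treated as geometrically plausible.
    votes = np.zeros(num_bins * num_bins)
    np.add.at(votes, bin_ids.ravel(), np.maximum(app, 0.0).ravel())
    geom = votes[bin_ids] / (votes.max() + 1e-8)               # (Na, Nb)

    # Final score: appearance weighted by geometric plausibility.
    return app * geom

# Example usage with random data (shapes only; real descriptors would come
# from a CNN applied to object proposals).
rng = np.random.default_rng(0)
fa = rng.normal(size=(30, 64)); fa /= np.linalg.norm(fa, axis=1, keepdims=True)
fb = rng.normal(size=(40, 64)); fb /= np.linalg.norm(fb, axis=1, keepdims=True)
ca, cb = rng.uniform(size=(30, 2)), rng.uniform(size=(40, 2))
scores = match_scores(fa, fb, ca, cb)   # (30, 40) match-score matrix
```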
Document type: Conference papers

Cited literature: 49 references

https://hal.archives-ouvertes.fr/hal-01576117
Contributor: Rafael Sampaio de Rezende
Submitted on: Tuesday, August 22, 2017 - 1:18:45 PM
Last modification on: Thursday, February 7, 2019 - 2:42:21 PM

File

SCNet_ICCV.pdf
Files produced by the author(s)

Identifiers: HAL Id hal-01576117, DOI 10.1109/ICCV.2017.203

Citation

Kai Han, Rafael Rezende, Bumsub Ham, Kwan-Yee Wong, Minsu Cho, et al. SCNet: Learning Semantic Correspondence. ICCV 2017 - International Conference on Computer Vision, Oct 2017, Venice, Italy. pp. 1849-1858, ⟨10.1109/ICCV.2017.203⟩. ⟨hal-01576117⟩
