Unsupervised Learning of Generative Topic Saliency for Person Re-identification

Hanxiao Wang, Shaogang Gong and Tao Xiang

In Proceedings British Machine Vision Conference 2014


Existing approaches to person re-identification (re-id) are dominated by supervised learning methods which focus on learning optimal similarity distance metrics. However, supervised models require a large number of manually labelled pairs of person images across every pair of camera views, which limits their ability to scale to large camera networks. To overcome this problem, this paper proposes a novel unsupervised re-id modelling approach based on generative probabilistic topic modelling. Given abundant unlabelled data, our topic model learns simultaneously to (1) discover localised person foreground appearance saliency (salient image patches) that is more informative for re-id matching, and (2) remove busy background clutter surrounding a person. Extensive experiments demonstrate that the proposed model outperforms existing unsupervised learning re-id methods with significantly reduced model complexity, while retaining re-id accuracy comparable to state-of-the-art supervised methods without any need for pair-wise labelled training data.


Poster Session


Extended Abstract (PDF, 1 page, 615K)
Paper (PDF, 11 pages, 1.1M)
Bibtex File


Hanxiao Wang, Shaogang Gong, and Tao Xiang. Unsupervised Learning of Generative Topic Saliency for Person Re-identification. In Proceedings of the British Machine Vision Conference. BMVA Press, September 2014.


@inproceedings{Wang2014BMVC,
	title = {Unsupervised Learning of Generative Topic Saliency for Person Re-identification},
	author = {Wang, Hanxiao and Gong, Shaogang and Xiang, Tao},
	year = {2014},
	booktitle = {Proceedings of the British Machine Vision Conference},
	publisher = {BMVA Press},
	editor = {Valstar, Michel and French, Andrew and Pridmore, Tony},
	doi = {10.5244/C.28.48}
}