Adaptive Transductive Transfer Machine
In Proceedings British Machine Vision Conference 2014
Abstract
Classification methods traditionally work under the assumption that the training and test sets are sampled from similar distributions (domains). However, when such methods are deployed in practice, the conditions in which test data is acquired rarely match those of the training set exactly. In this paper, we exploit the fact that it is often possible to gather unlabeled samples from a test/target domain in order to improve the model built from the labeled source set. We propose Adaptive Transductive Transfer Machines, which approach this problem by combining four types of adaptation: a lower-dimensional space shared between the two domains, a set of local transformations that further increase the domain similarity, a classifier parameter adaptation method that modifies the learner for the new domain, and a set of class-conditional transformations that increase the similarity between the posterior probabilities of samples in the source and target sets. We show that our pipeline improves over the state of the art on cross-domain image classification datasets, using raw images or basic features.
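To make the first of the four adaptations concrete, the following is a minimal sketch of a shared lower-dimensional space between source and target, here approximated by PCA fitted on the pooled data followed by a nearest-class-mean classifier. This is an illustrative stand-in, not the paper's actual method; the function names, the toy data, and the choice of PCA are assumptions for the example only.

```python
import numpy as np

def shared_subspace(Xs, Xt, k=2):
    """Project source and target into a shared k-dim PCA subspace
    fitted on the pooled data (a stand-in for a shared
    lower-dimensional space; the paper's construction may differ)."""
    X = np.vstack([Xs, Xt])
    mu = X.mean(axis=0)
    # Top-k principal directions via SVD of the centered pooled data.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k].T
    return (Xs - mu) @ W, (Xt - mu) @ W

def nearest_mean_labels(Zs, ys, Zt):
    """Classify projected target samples by the nearest source class mean."""
    classes = np.unique(ys)
    means = np.array([Zs[ys == c].mean(axis=0) for c in classes])
    # Squared distances from each target sample to each class mean.
    d = ((Zt[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

# Toy example: two Gaussian classes; the target is a shifted copy
# of the source, simulating a global domain shift.
rng = np.random.default_rng(0)
Xs = np.vstack([rng.normal(0, 0.3, (50, 5)), rng.normal(2, 0.3, (50, 5))])
ys = np.array([0] * 50 + [1] * 50)
Xt = Xs + 0.5  # domain shift
Zs, Zt = shared_subspace(Xs, Xt, k=2)
pred = nearest_mean_labels(Zs, ys, Zt)
print("target accuracy:", (pred == ys).mean())
```

With a shift this mild, projecting both domains into the pooled subspace keeps the target samples closer to their own class means than to the other class, so accuracy stays high; the paper's remaining adaptation steps address the cases where a shared subspace alone is not enough.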
Files
Extended Abstract (PDF, 1 page, 332K)
Paper (PDF, 12 pages, 386K)