Image Cosegmentation via Multi-task Learning
In Proceedings British Machine Vision Conference 2014
Abstract
Image segmentation has been studied by computer vision researchers for decades and still remains a challenging task. One major difficulty is that the differences between the foreground object and the background can be very ambiguous, especially when prior knowledge is missing. To overcome this difficulty, cosegmentation has been proposed, in which a set of images assumed to share common foreground objects is segmented simultaneously. Different models have been proposed for exploiting the prior of the common foreground objects. In this paper, we propose to formulate the image cosegmentation problem under a multi-task learning framework, in which segmenting each image is viewed as one task and the prior that a common object is shared among the images is modeled as the intrinsic relatedness among the tasks. Compared with existing methods, the proposed method is able to simultaneously segment more than two images and has low computational cost. The proposed method is evaluated on two common datasets, the CMU iCoseg dataset and the MSRC dataset, with comparisons to existing methods. In addition, we analyze and compare three types of multi-task learning frameworks. The experimental results demonstrate the effectiveness of the proposed method.
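The abstract's core idea (each image's segmentation as one task, the shared-foreground prior as task relatedness) can be illustrated with a minimal sketch. This is not the paper's actual model; it is a hypothetical example using mean-regularized multi-task learning, where per-image linear foreground scorers are pulled toward a common mean:

```python
import numpy as np

# Hypothetical illustration (not the paper's actual formulation): each "task"
# is a per-image linear foreground/background scorer w_t, and the prior that
# the images share a common object is encoded by penalizing the deviation of
# each w_t from the mean scorer across tasks (mean-regularized MTL).

rng = np.random.default_rng(0)
T, n, d = 3, 50, 5  # number of tasks (images), samples per image, feature dim

# Synthetic per-image features/labels drawn around one common direction,
# standing in for a shared foreground appearance across the images.
w_true = rng.normal(size=d)
X = [rng.normal(size=(n, d)) for _ in range(T)]
y = [np.sign(Xt @ (w_true + 0.1 * rng.normal(size=d))) for Xt in X]

lam = 1.0            # strength of the task-relatedness penalty
lr = 0.01
W = np.zeros((T, d))
for _ in range(500):
    w_bar = W.mean(axis=0)
    for t in range(T):
        # squared-loss gradient plus a pull toward the task mean
        grad = X[t].T @ (X[t] @ W[t] - y[t]) / n + lam * (W[t] - w_bar)
        W[t] -= lr * grad

# Per-task scorers, trained jointly, recover the shared labeling well
acc = np.mean([np.mean(np.sign(X[t] @ W[t]) == y[t]) for t in range(T)])
```

With `lam = 0` the tasks decouple into independent per-image segmenters; increasing `lam` enforces the shared-object prior more strongly, which is the trade-off the multi-task view exposes.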
Files
Extended Abstract (PDF, 1 page, 675K)
Paper (PDF, 13 pages, 1.1M)