Unlabelled 3D Motion Examples Improve Cross-View Action Recognition
In Proceedings British Machine Vision Conference 2014
Abstract
We demonstrate a novel strategy for unsupervised cross-view action recognition using multi-view feature synthesis. We do not rely on cross-view video annotations to transfer knowledge across views; instead, we use local features generated from motion capture data to learn the feature transformation. Motion capture data allows us to build a correspondence between two synthesized views at the feature level. We learn a feature mapping scheme for each view change under the naive assumption that all features transform independently. This assumption, together with access to exact feature correspondences, dramatically simplifies learning. With the learned mapping we can "hallucinate" action descriptors corresponding to different viewpoints. This simple approach effectively models the transformation of BoW-based action descriptors under viewpoint change and outperforms the state of the art on the INRIA IXMAS dataset.
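The abstract's idea of mapping BoW descriptors across views via exact feature correspondences can be sketched as follows. This is a minimal toy illustration, not the authors' exact formulation: it assumes hard-assignment codebooks per view and estimates a codeword transfer matrix from corresponding feature pairs (standing in for features synthesized from the same mocap sequence under two cameras), then "hallucinates" a target-view histogram from a source-view one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for local motion features rendered from two camera views of
# the same mocap sequences; row i of each array is a corresponding pair.
n_pairs, dim, k = 500, 8, 16
src_feats = rng.normal(size=(n_pairs, dim))
# Target-view features: a linear view change plus noise (purely illustrative).
tgt_feats = src_feats @ rng.normal(size=(dim, dim)) + 0.1 * rng.normal(size=(n_pairs, dim))

def quantize(feats, codebook):
    """Assign each feature to its nearest codeword (hard BoW assignment)."""
    d = ((feats[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)

# Per-view codebooks (random feature subsets standing in for k-means centres).
src_cb = src_feats[rng.choice(n_pairs, k, replace=False)]
tgt_cb = tgt_feats[rng.choice(n_pairs, k, replace=False)]

src_words = quantize(src_feats, src_cb)
tgt_words = quantize(tgt_feats, tgt_cb)

# Transfer matrix T[j, i] ~ P(target word j | source word i), estimated from
# the exact feature correspondences the synthesized views provide. Treating
# each codeword independently mirrors the paper's independence assumption.
T = np.zeros((k, k))
for s, t in zip(src_words, tgt_words):
    T[t, s] += 1
T /= T.sum(axis=0, keepdims=True).clip(min=1)

def hallucinate(bow_src):
    """Map a source-view BoW histogram into the target view."""
    return T @ bow_src

# Hallucinate a target-view descriptor from a source-view clip's histogram.
bow = np.bincount(src_words[:100], minlength=k).astype(float)
bow /= bow.sum()
h = hallucinate(bow)
```

A classifier trained on target-view descriptors could then be applied to `h`, so no annotated video from the new viewpoint is needed.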
Files
Extended Abstract (PDF, 1 page, 282K)
Paper (PDF, 11 pages, 992K)