## BibTeX entry

```
@PHDTHESIS{200805Dacheng_Tao,
AUTHOR={Dacheng Tao},
TITLE={Discriminative Linear and Multilinear Subspace Methods},
SCHOOL={University of London},
MONTH=may,
YEAR=2008,
URL={http://www.bmva.org/theses/2008/2008-tao.pdf},
}
```

## Abstract

Linear discriminant analysis (LDA) is widely used for classification tasks in computer vision. However, classification based on LDA can perform poorly in practice because LDA suffers from: 1) the heteroscedastic problem, 2) the multimodal problem, 3) the class separation problem, and 4) the small sample size (SSS) problem. In this thesis, the first three are called the model-based problems because they arise from the definition of LDA; the fourth arises when there are too few training samples and is also known as the overfitting problem. To address the model-based problems, a new criterion is proposed for subspace selection when samples are drawn from Gaussian mixture models: maximization of the geometric mean of the Kullback–Leibler (KL) divergences and the normalized KL divergences between classes. A large number of empirical studies show that the new criterion significantly reduces all three model-based problems. To address the SSS problem in LDA, a general tensor discriminant analysis (GTDA) is developed. GTDA, a multilinear extension of a modified LDA, makes better use of the structural information of objects in vision research: it estimates a series of projection matrices that map an object, represented as a tensor, from a high-dimensional feature space to a low-dimensional feature space. Experiments on human gait recognition demonstrate that GTDA, combined with LDA and the nearest-neighbor rule, outperforms competing methods. Building on this work, the standard convex-optimization-based approach to machine learning is generalized to the supervised tensor learning (STL) framework, which accepts tensors as input. In practice, the solution to STL is obtained with an alternating projection algorithm. This generalization reduces overfitting when only a few training samples are available, as confirmed by an empirical study.
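To make the geometric-mean criterion concrete, the following is a minimal numerical sketch: it computes the closed-form KL divergence between two Gaussians and evaluates the geometric mean of the pairwise KL divergences between class-conditional Gaussians after projection onto a candidate subspace. This is an illustration under the Gaussian-class assumption only; the function names and the plain (unnormalized) pairwise objective are this sketch's simplifications, not the thesis code, which also incorporates normalized KL divergences.

```python
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """Closed-form KL divergence KL(N(mu0, cov0) || N(mu1, cov1))
    between two multivariate Gaussians of dimension d."""
    d = mu0.shape[0]
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0)
                  + diff @ inv1 @ diff
                  - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def geometric_mean_kl(means, covs, W):
    """Geometric mean of all pairwise KL divergences between the
    class-conditional Gaussians after projection onto the columns of W.
    Subspace selection seeks the W that maximizes this quantity."""
    proj_means = [W.T @ m for m in means]
    proj_covs = [W.T @ c @ W for c in covs]
    n = len(means)
    kls = [kl_gaussian(proj_means[i], proj_covs[i],
                       proj_means[j], proj_covs[j])
           for i in range(n) for j in range(n) if i != j]
    # Geometric mean computed in log space for numerical stability.
    return np.exp(np.mean(np.log(kls)))
```

Because the geometric mean penalizes any single near-zero divergence much more heavily than an arithmetic mean would, maximizing it discourages the projection from merging any pair of classes, which is the intuition behind its effect on the class separation problem.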