The British Machine Vision Association and Society for Pattern Recognition 

BibTeX entry

@PHDTHESIS{
  AUTHOR={David Cristinacce},
  TITLE={Automatic Detection of Facial Features in Grey Scale Images},
  SCHOOL={University of Manchester},
}


Abstract
Accurate localisation of faces and facial features within grey scale images is a challenging task due to the high variability, in both shape and texture, of the appearance of the human face. This thesis investigates methods of combining shape modelling techniques and texture based pattern recognition to reliably and accurately detect facial features, such as the eye pupils, nostrils and mouth corners. Individual feature detectors designed to find specific facial features, e.g. the right mouth corner, are found to be unreliable. The lack of distinctive local image structure around many facial features results in many false matches, and local variation in appearance due to expression, blinking or occlusion may mean the true feature is not detected at all. These problems are addressed in two ways. Firstly, a coarse-to-fine approach is adopted: the whole face is found first, which restricts the search region for individual features. Secondly, shape information is used to select the group of candidate features that together form the most plausible face shape. Three methods of combining shape and feature detection are presented. All three methods are found to give superior performance compared with simply selecting the best match from each feature detector. The best performing shape constrained feature detection method is compared with the well known Active Appearance Model (AAM) approach. Shape constrained feature detection is found to outperform the basic AAM algorithm; however, a recent variation of the AAM which is tuned to edge and corner features gives similar results to shape constrained feature detection. The most accurate feature detection performance is achieved using a hybrid approach, which uses shape constrained feature detection to predict initial feature points and then refines these points using edge/corner AAM search. The accuracy of this method is found to be comparable with that of human annotation.
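The core idea of shape constrained feature detection described above can be illustrated with a small sketch. This is not the thesis implementation: the candidate positions, detector scores, shape statistics and the exhaustive search below are all hypothetical, chosen only to show how a shape plausibility term (here a simple Mahalanobis distance from a mean shape) can override the individually best detector responses.

```python
import itertools
import numpy as np

def shape_cost(points, mean_shape, inv_cov):
    """Mahalanobis-style distance of a flattened point set from the mean shape."""
    d = points.flatten() - mean_shape
    return float(d @ inv_cov @ d)

def select_features(candidates, scores, mean_shape, inv_cov, alpha=1.0):
    """Pick one candidate per feature so that the combined detector response
    is high AND the resulting configuration forms a plausible face shape.

    candidates[i] is a list of (x, y) positions for feature i;
    scores[i][j] is the detector response for candidates[i][j].
    Exhaustive search over combinations is used here purely for clarity."""
    best_pts, best_obj = None, -np.inf
    for combo in itertools.product(*[range(len(c)) for c in candidates]):
        pts = np.array([candidates[i][j] for i, j in enumerate(combo)], float)
        obj = sum(scores[i][j] for i, j in enumerate(combo)) \
              - alpha * shape_cost(pts, mean_shape, inv_cov)
        if obj > best_obj:
            best_pts, best_obj = pts, obj
    return best_pts

# Toy example with two features, two candidates each (made-up numbers).
candidates = [[(0.0, 0.0), (5.0, 5.0)],     # feature 0 candidates
              [(10.0, 0.0), (2.0, 2.0)]]    # feature 1 candidates
scores = [[1.0, 0.9],
          [0.5, 0.8]]                       # detector responses
mean_shape = np.array([0.0, 0.0, 10.0, 0.0])  # features expected at (0,0), (10,0)
inv_cov = np.eye(4) * 0.1                     # loose shape model

result = select_features(candidates, scores, mean_shape, inv_cov)
```

Note that feature 1's strongest detector response (0.8) belongs to the candidate at (2, 2), but that choice would form an implausible shape, so the combined objective selects (10, 0) instead. This is the behaviour the abstract describes: shape information overrules unreliable individual detectors.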