We demonstrate a novel method for producing a synthetic talking head. The method builds on earlier work in which the behaviour of a synthetic individual is generated by reference to a probabilistic model of interactive behaviour in the visual domain; such models are learnt automatically from typical interactions. We extend this work to a combined visual and auditory domain and employ a state-of-the-art facial appearance model. The result is a synthetic talking head that responds appropriately, and with correct timing, to simple forms of greeting, with variations in facial expression and intonation.
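The abstract does not specify the form of the probabilistic model, but the idea of learning response behaviour automatically from typical interactions can be sketched in a minimal way: estimate a conditional distribution over responses given an observed stimulus from example interaction pairs, then sample from it at run time. This is an illustrative toy, not the paper's actual model; the stimulus and response labels below are hypothetical.

```python
import random
from collections import defaultdict

# Illustrative sketch (not the paper's method): learn P(response | stimulus)
# as empirical counts over observed interaction pairs, then sample a
# response to a new stimulus in proportion to how often it was seen.

def learn_model(interactions):
    counts = defaultdict(lambda: defaultdict(int))
    for stimulus, response in interactions:
        counts[stimulus][response] += 1
    return counts

def respond(model, stimulus, rng=random):
    options = model.get(stimulus)
    if not options:
        return None  # no behaviour learnt for this stimulus
    responses, weights = zip(*options.items())
    return rng.choices(responses, weights=weights, k=1)[0]

# Hypothetical training data: (observed greeting, typical reply) pairs.
examples = [("wave", "wave"), ("wave", "nod"), ("hello", "hello"), ("wave", "wave")]
model = learn_model(examples)
print(respond(model, "hello"))  # → hello (the only reply observed for "hello")
```

A full system along the lines the abstract describes would condition on joint audio-visual features and model response timing as well as response choice, rather than discrete labels.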
This document was produced for BMVC 2001.