Oral
A probabilistic model for generating realistic lip movements from speech
Gwenn Englebienne · Tim Cootes · Magnus Rattray

Wed Dec 05 04:40 PM -- 05:00 PM (PST)

The present work models the correspondence between facial motion and speech. The face and sound are modelled separately, with phonemes as the link between the two. We propose a sequential model and evaluate its suitability for generating facial animation from a sequence of phonemes, which we obtain from speech. We evaluate the results both by computing the error between generated sequences and real video and with a rigorous double-blind test with human subjects. Experiments show that our model compares favourably to existing methods and that the generated sequences are comparable to real video sequences.
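To give a flavour of what a phoneme-driven sequential generator looks like, here is a minimal sketch in Python. It is an illustrative assumption, not the authors' actual model: each phoneme is given an invented target point in a facial-parameter space (e.g. appearance-model coefficients), and frames are produced by smoothly tracking the target of the current phoneme with simple linear-Gaussian dynamics. All names, dimensions, and parameter values below are hypothetical.

```python
import numpy as np

# Hypothetical sketch of a phoneme-conditioned sequential generator.
# Each phoneme has an (invented) target mean in facial-parameter space;
# successive frames are pulled a fraction ALPHA toward the current
# phoneme's target, plus Gaussian noise (simple AR(1)-style dynamics).

rng = np.random.default_rng(0)

N_PARAMS = 4          # dimensionality of the facial parameter vector (assumed)
PHONEMES = ["sil", "b", "a", "p"]
# Per-phoneme target means, invented purely for illustration.
targets = {p: rng.normal(0.0, 1.0, N_PARAMS) for p in PHONEMES}
NOISE_STD = 0.05      # per-frame noise scale (assumed)
ALPHA = 0.4           # per-frame pull toward the phoneme target (assumed)

def generate_frames(phoneme_track):
    """Generate one facial-parameter vector per frame from a
    phoneme-per-frame track (e.g. obtained by aligning speech)."""
    x = np.zeros(N_PARAMS)
    frames = []
    for p in phoneme_track:
        # Move toward this phoneme's target, with Gaussian noise.
        x = x + ALPHA * (targets[p] - x) + rng.normal(0.0, NOISE_STD, N_PARAMS)
        frames.append(x.copy())
    return np.stack(frames)

# A toy phoneme track: 3 frames of silence, then /b/ /a/ /p/.
track = ["sil"] * 3 + ["b"] * 4 + ["a"] * 6 + ["p"] * 4
frames = generate_frames(track)
print(frames.shape)  # (17, 4): one parameter vector per video frame
```

In a real system the generated parameter vectors would be rendered back to face images, and the dynamics and emission distributions would be learned from aligned audio-video training data rather than fixed by hand.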

Author Information

Gwenn Englebienne (University of Amsterdam)
Tim Cootes
Magnus Rattray (The University of Sheffield)
