Details
Original language | English |
---|---|
Pages (from-to) | 37-44 |
Number of pages | 8 |
Journal | Journal on Multimodal User Interfaces |
Volume | 5 |
Publication status | Published - 29 Oct 2011 |
Abstract
In this paper, we present an image-based talking head system that is able to synthesize flexible head motion and realistic facial expressions accompanying speech, given arbitrary text input and control tags. The goal of facial animation synthesis is to generate lip-synchronized and natural animations. The talking head is evaluated both objectively and subjectively. The objective measurement assesses lip synchronization by matching the lip closures in the synthesized sequences against those in the real ones, since human viewers are very sensitive to closures, and getting the closures at the right time may be the most important objective criterion for creating the impression that lips and sound are synchronized. In the subjective tests, facial expression is evaluated by scoring the real and synthesized videos, and head movement is evaluated by scoring animations with flexible head motion against animations with repeated head motion. Experimental results show that the proposed objective measurement of lip closure is one of the most significant criteria for the subjective evaluation of animations. The animated facial expressions are subjectively indistinguishable from real ones. Furthermore, talking heads with flexible head motion are more realistic and lifelike than those with repeated head motion.
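The closure-matching idea described in the abstract can be sketched as follows. This is a minimal illustration only: the representation of closures as timestamps, the tolerance window, and the precision/recall-style scoring are assumptions for the example, not the paper's exact procedure.

```python
# Minimal sketch of matching lip closures between a synthesized and a real
# sequence. Closure times (ms), the tolerance window, and the scoring are
# illustrative assumptions, not the measurement defined in the paper.

def match_closures(real_closures, synth_closures, tolerance_ms=40.0):
    """Count synthesized closures that fall within a tolerance window of an
    unmatched real closure; return the match count and recall/precision."""
    matched = 0
    used = set()
    for t_synth in synth_closures:
        # Find the nearest still-unmatched real closure within the window.
        best = None
        for i, t_real in enumerate(real_closures):
            if i in used:
                continue
            if abs(t_real - t_synth) <= tolerance_ms and (
                best is None
                or abs(t_real - t_synth) < abs(real_closures[best] - t_synth)
            ):
                best = i
        if best is not None:
            used.add(best)
            matched += 1
    recall = matched / len(real_closures) if real_closures else 0.0
    precision = matched / len(synth_closures) if synth_closures else 0.0
    return matched, recall, precision


if __name__ == "__main__":
    real = [120.0, 480.0, 910.0]    # closure times (ms) in the recorded video
    synth = [115.0, 500.0, 1200.0]  # closure times (ms) in the animation
    print(match_closures(real, synth))  # -> (2, 0.666..., 0.666...)
```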
Keywords
- Facial expression
- Head motion
- Objective and subjective evaluation
- Talking head
ASJC Scopus subject areas
- Computer Science (all)
- Signal Processing
- Human-Computer Interaction
Cite this
In: Journal on Multimodal User Interfaces, Vol. 5, 29.10.2011, p. 37-44.
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - Evaluation of an image-based talking head with realistic facial expression and head motion
AU - Liu, Kang
AU - Ostermann, Joern
N1 - Funding information: This work was funded by the German Research Foundation (DFG Sachbeihilfe OS295/3-1). This work has been partially supported by the EC within FP6 under Grant 511568 with the acronym 3DTV.
PY - 2011/10/29
Y1 - 2011/10/29
N2 - In this paper, we present an image-based talking head system that is able to synthesize flexible head motion and realistic facial expressions accompanying speech, given arbitrary text input and control tags. The goal of facial animation synthesis is to generate lip-synchronized and natural animations. The talking head is evaluated both objectively and subjectively. The objective measurement assesses lip synchronization by matching the lip closures in the synthesized sequences against those in the real ones, since human viewers are very sensitive to closures, and getting the closures at the right time may be the most important objective criterion for creating the impression that lips and sound are synchronized. In the subjective tests, facial expression is evaluated by scoring the real and synthesized videos, and head movement is evaluated by scoring animations with flexible head motion against animations with repeated head motion. Experimental results show that the proposed objective measurement of lip closure is one of the most significant criteria for the subjective evaluation of animations. The animated facial expressions are subjectively indistinguishable from real ones. Furthermore, talking heads with flexible head motion are more realistic and lifelike than those with repeated head motion.
AB - In this paper, we present an image-based talking head system that is able to synthesize flexible head motion and realistic facial expressions accompanying speech, given arbitrary text input and control tags. The goal of facial animation synthesis is to generate lip-synchronized and natural animations. The talking head is evaluated both objectively and subjectively. The objective measurement assesses lip synchronization by matching the lip closures in the synthesized sequences against those in the real ones, since human viewers are very sensitive to closures, and getting the closures at the right time may be the most important objective criterion for creating the impression that lips and sound are synchronized. In the subjective tests, facial expression is evaluated by scoring the real and synthesized videos, and head movement is evaluated by scoring animations with flexible head motion against animations with repeated head motion. Experimental results show that the proposed objective measurement of lip closure is one of the most significant criteria for the subjective evaluation of animations. The animated facial expressions are subjectively indistinguishable from real ones. Furthermore, talking heads with flexible head motion are more realistic and lifelike than those with repeated head motion.
KW - Facial expression
KW - Head motion
KW - Objective and subjective evaluation
KW - Talking head
UR - http://www.scopus.com/inward/record.url?scp=84858439723&partnerID=8YFLogxK
U2 - 10.1007/s12193-011-0070-8
DO - 10.1007/s12193-011-0070-8
M3 - Article
AN - SCOPUS:84858439723
VL - 5
SP - 37
EP - 44
JO - Journal on Multimodal User Interfaces
JF - Journal on Multimodal User Interfaces
SN - 1783-7677
ER -