Details
Original language | English |
---|---|
Title of host publication | Proceedings - 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops |
Subtitle | CVPRW 2018 |
Publisher | IEEE Computer Society |
Pages | 1199-1207 |
Number of pages | 9 |
ISBN (electronic) | 9781538661000 |
Publication status | Published - 13 Dec 2018 |
Event | 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018 - Salt Lake City, United States Duration: 18 June 2018 → 23 June 2018 |
Publication series
Name | IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops |
---|---|
Volume | 2018-June |
ISSN (print) | 2160-7508 |
ISSN (electronic) | 2160-7516 |
Abstract
The diversity of facial shapes and motions among persons is one of the greatest challenges for automatic analysis of facial expressions. In this paper, we propose a feature describing expression intensity over time, while being invariant to person and the type of performed expression. Our feature is a weighted combination of the dynamics of multiple points adapted to the overall expression trajectory. We evaluate our method on several tasks all related to temporal analysis of facial expression. The proposed feature is compared to a state-of-the-art method for expression intensity estimation, which it outperforms. We use our proposed feature to temporally align multiple sequences of recorded 3D facial expressions. Furthermore, we show how our feature can be used to reveal person-specific differences in performances of facial expressions. Additionally, we apply our feature to identify the local changes in face video sequences based on action unit labels. For all the experiments our feature proves to be robust against noise and outliers, making it applicable to a variety of applications for analysis of facial movements.
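To make the abstract's core idea concrete, here is a toy sketch of an intensity curve built as a weighted combination of the motion of multiple tracked facial points. This is an illustrative assumption, not the authors' actual feature: the function name, the uniform default weighting, and the accumulation of per-frame speeds are all choices made for this example only.

```python
import numpy as np

def expression_intensity(landmarks, weights=None):
    """Toy per-frame intensity: weighted magnitude of landmark motion.

    landmarks: (T, N, 3) array of N tracked 3D facial points over T frames.
    weights:   optional (N,) per-point weights; uniform if None.
    """
    velocities = np.diff(landmarks, axis=0)        # (T-1, N, 3) frame-to-frame motion
    speeds = np.linalg.norm(velocities, axis=2)    # (T-1, N) per-point speed
    if weights is None:
        weights = np.full(landmarks.shape[1], 1.0 / landmarks.shape[1])
    per_frame = speeds @ weights                   # (T-1,) weighted combination
    # Accumulate so the curve rises while the expression is forming and
    # flattens once the face holds still.
    return np.concatenate([[0.0], np.cumsum(per_frame)])  # (T,)

# Synthetic "expression": landmarks ramp toward a target pose, then hold.
T, N = 10, 5
rng = np.random.default_rng(0)
base = rng.standard_normal((N, 3))
displacement = rng.standard_normal((N, 3))
progress = np.concatenate([np.linspace(0.0, 1.0, 6), np.ones(4)])
seq = base[None] + progress[:, None, None] * displacement[None]
curve = expression_intensity(seq)
print(curve.shape)  # (10,)
```

Because speeds and weights are non-negative, the resulting curve is monotonically non-decreasing, which is what makes it usable for temporally aligning sequences of the same expression performed at different speeds.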
ASJC Scopus subject areas
- Computer Science (general)
- Computer Vision and Pattern Recognition
- Engineering (general)
- Electrical and Electronic Engineering
Cite
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
Proceedings - 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops: CVPRW 2018. IEEE Computer Society, 2018. pp. 1199-1207 8575310 (IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops; Vol. 2018-June).
Publication: Chapter in book/report/conference proceeding › Conference contribution › Research › Peer-reviewed
TY - GEN
T1 - Unsupervised Features for Facial Expression Intensity Estimation over Time
AU - Awiszus, Maren
AU - Graßhof, Stella
AU - Kuhnke, Felix
AU - Ostermann, Jörn
PY - 2018/12/13
Y1 - 2018/12/13
N2 - The diversity of facial shapes and motions among persons is one of the greatest challenges for automatic analysis of facial expressions. In this paper, we propose a feature describing expression intensity over time, while being invariant to person and the type of performed expression. Our feature is a weighted combination of the dynamics of multiple points adapted to the overall expression trajectory. We evaluate our method on several tasks all related to temporal analysis of facial expression. The proposed feature is compared to a state-of-the-art method for expression intensity estimation, which it outperforms. We use our proposed feature to temporally align multiple sequences of recorded 3D facial expressions. Furthermore, we show how our feature can be used to reveal person-specific differences in performances of facial expressions. Additionally, we apply our feature to identify the local changes in face video sequences based on action unit labels. For all the experiments our feature proves to be robust against noise and outliers, making it applicable to a variety of applications for analysis of facial movements.
AB - The diversity of facial shapes and motions among persons is one of the greatest challenges for automatic analysis of facial expressions. In this paper, we propose a feature describing expression intensity over time, while being invariant to person and the type of performed expression. Our feature is a weighted combination of the dynamics of multiple points adapted to the overall expression trajectory. We evaluate our method on several tasks all related to temporal analysis of facial expression. The proposed feature is compared to a state-of-the-art method for expression intensity estimation, which it outperforms. We use our proposed feature to temporally align multiple sequences of recorded 3D facial expressions. Furthermore, we show how our feature can be used to reveal person-specific differences in performances of facial expressions. Additionally, we apply our feature to identify the local changes in face video sequences based on action unit labels. For all the experiments our feature proves to be robust against noise and outliers, making it applicable to a variety of applications for analysis of facial movements.
UR - http://www.scopus.com/inward/record.url?scp=85060868946&partnerID=8YFLogxK
U2 - arXiv:1805.00780v2
DO - arXiv:1805.00780v2
M3 - Conference contribution
AN - SCOPUS:85060868946
T3 - IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
SP - 1199
EP - 1207
BT - Proceedings - 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops
PB - IEEE Computer Society
T2 - 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2018
Y2 - 18 June 2018 through 23 June 2018
ER -