Unsupervised Features for Facial Expression Intensity Estimation over Time

Computer Vision and Pattern Recognition Workshops (CVPRW), June 2018.

Maren Awiszus, Stella Graßhof, Felix Kuhnke and Jörn Ostermann


The diversity of facial shapes and motions among persons is one of the greatest challenges for the automatic analysis of facial expressions. In this paper, we propose a feature describing expression intensity over time that is invariant to both the person and the type of expression performed. Our feature is a weighted combination of the dynamics of multiple points, adapted to the overall expression trajectory. We evaluate our method on several tasks, all related to the temporal analysis of facial expressions. The proposed feature is compared to a state-of-the-art method for expression intensity estimation, which it outperforms. We use our proposed feature to temporally align multiple sequences of recorded 3D facial expressions. Furthermore, we show how our feature can reveal person-specific differences in the performance of facial expressions. Additionally, we apply our feature to identify local changes in face video sequences based on action unit labels. In all experiments, our feature proves robust to noise and outliers, making it applicable to a variety of applications for the analysis of facial movements.
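To illustrate the general idea, the following is a minimal sketch, not the method from the paper: it computes a per-frame intensity curve as a weighted combination of landmark displacements, and then aligns two such curves with classic dynamic time warping. The uniform weights and the DTW step are simplifying assumptions for illustration; the paper adapts the per-point weights to the overall expression trajectory, which is not reproduced here.

```python
import numpy as np

def expression_intensity(landmarks, weights=None):
    """Per-frame intensity curve from landmark trajectories.

    landmarks: (T, N, D) array of N facial points over T frames.
    weights:   (N,) per-point weights. The paper adapts these to the
               overall expression trajectory; here we default to
               uniform weights as a simplification.
    """
    T, N, _ = landmarks.shape
    if weights is None:
        weights = np.full(N, 1.0 / N)
    # Displacement of each point relative to the first (neutral) frame.
    disp = np.linalg.norm(landmarks - landmarks[0], axis=2)  # (T, N)
    return disp @ weights                                    # (T,)

def dtw_path(a, b):
    """Align two 1-D intensity curves with standard dynamic time warping.

    Returns the list of matched frame-index pairs (i, j).
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1],
                                 cost[i - 1, j],
                                 cost[i, j - 1])
    # Backtrack from the end to recover the optimal warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1],
                              cost[i - 1, j],
                              cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```

For two sequences of the same expression performed at different speeds, the warping path of their intensity curves gives a frame-to-frame correspondence that can be used to temporally align the recordings.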

Alignment for BU4DFE

Temporal alignment information for the BU4DFE database will be uploaded soon. If you are interested in a preliminary version, please contact the authors.

Project page

See our project page for further details about our work in analysis and synthesis of faces.