Identifying basic emotions such as anger, disgust, fear, happiness, sadness, and surprise from pictures is not a new task; psychologists have studied photographs of facial expressions for more than 30 years. Today, information technology (IT) tools let researchers apply a variety of procedures to recognize emotions in both computer-generated images and natural photographs.
This paper is not about face identification in general, but about identifying basic emotions from synthetic and natural faces. The most important result is that there is no significant difference in the identification of static and dynamic expressions from natural faces. However, Kätsyri and Sams report improved accuracy for dynamic synthetic expressions compared with static ones:
[The] dynamics [do] not improve the identification of already distinctive static facial displays... [but have] an important role for identifying subtle emotional expressions.
Kätsyri and Sams support the reported results with a mixed-design analysis of variance (ANOVA) over three factors: display type (static versus dynamic); face type, comprising natural stimuli (the Cohn-Kanade facial expression collection, facial expressions recorded at the Helsinki University of Technology, and the Ekman-Friesen collection) and synthetic stimuli (talking-head computer images); and expression (the six emotions mentioned above). They also report the results of the Toronto Alexithymia Scale (TAS), a 20-item self-report questionnaire, administered to 54 participants with normal or corrected-to-normal vision. The Facial Action Coding System (FACS) is used throughout the experiment. The paper is presented in four well-organized sections.
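To make the experimental design concrete, here is a minimal sketch of how a mixed-design ANOVA of this kind might be run in Python. The data are randomly generated, and the column names, factor coding, and the pingouin dependency are my own illustrative assumptions, not taken from the paper; the authors' actual design crosses more factors within subjects.

```python
# Minimal sketch: a two-factor mixed-design ANOVA on hypothetical
# recognition-accuracy data (illustrative only; not the authors' data).
import numpy as np
import pandas as pd
import pingouin as pg  # assumed dependency for the mixed ANOVA

rng = np.random.default_rng(0)

n_participants = 54  # matches the sample size reported in the paper
# Within-subject factor: each participant sees static and dynamic displays.
participant = np.repeat(np.arange(n_participants), 2)
display = np.tile(["static", "dynamic"], n_participants)
# Between-subject factor (illustrative split): natural vs. synthetic faces.
face_type = np.repeat(
    np.where(np.arange(n_participants) % 2 == 0, "natural", "synthetic"), 2
)
# Hypothetical accuracy scores around 70%.
accuracy = rng.normal(loc=0.7, scale=0.1, size=2 * n_participants)

df = pd.DataFrame({
    "participant": participant,
    "display": display,
    "face_type": face_type,
    "accuracy": accuracy,
})

# Mixed ANOVA: 'display' varies within subjects, 'face_type' between them.
aov = pg.mixed_anova(data=df, dv="accuracy", within="display",
                     between="face_type", subject="participant")
print(aov.round(3))
```

The interaction term in the resulting table corresponds to the paper's key question: whether the benefit of dynamic over static displays depends on the type of face shown.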
The results of this investigation are important for improving the performance of various software-based applications, including real-time applications. I appreciated the clear presentation and the sound scientific methodology. The references are appropriate and reflect the state of the art in this field.