Classification of Facial Expressions Under Partial Occlusion for VR Games (Chapter, Conference Paper)

abstract

  • Facial expressions are one of the most common ways to externalize our emotions. However, the same emotion can be expressed differently by the same person at different times and differs between people. Based on this, we developed a system capable of detecting a person's facial expressions in real time with the eyes occluded (simulating the use of virtual reality glasses). To estimate the position of the eyes, so that they could be occluded, Multi-task Cascaded Convolutional Networks (MTCNN) were used. A residual network, a VGG, and a combination of both models were used to classify 7 types of facial expressions (Angry, Disgust, Fear, Happy, Sad, Surprise, Neutral) on both the occluded and non-occluded datasets. The combination of both models achieved an accuracy of 64.9% on the occluded dataset and 62.8% without occlusion, using the FER-2013 dataset. The primary goal of this work was to evaluate the influence of occlusion, and the results show that most of the classification relies on the mouth and chin regions. Nevertheless, the results fall short of the state of the art; we expect to improve them, mainly by adjusting the MTCNN.
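The occlusion step described above (black out the eye region located by MTCNN before classification) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `occlude_eyes` helper and its `pad` parameter are hypothetical, and the eye keypoints, which the paper obtains from MTCNN, are supplied directly here.

```python
import numpy as np

def occlude_eyes(image, left_eye, right_eye, pad=12):
    """Black out a rectangular band covering both eye keypoints.

    left_eye / right_eye are (x, y) coordinates, such as the
    'left_eye' / 'right_eye' keypoints an MTCNN face detector
    would return (hypothetical integration; keypoints are passed
    in directly in this sketch).
    """
    xs = [left_eye[0], right_eye[0]]
    ys = [left_eye[1], right_eye[1]]
    x0, x1 = max(min(xs) - pad, 0), min(max(xs) + pad, image.shape[1])
    y0, y1 = max(min(ys) - pad, 0), min(max(ys) + pad, image.shape[0])
    out = image.copy()
    out[y0:y1, x0:x1] = 0  # simulate the VR-glasses occlusion
    return out

# Toy 48x48 grayscale image (the FER-2013 resolution) with synthetic keypoints.
img = np.full((48, 48), 128, dtype=np.uint8)
occluded = occlude_eyes(img, left_eye=(15, 20), right_eye=(33, 20))
```

The occluded image would then be fed to the ResNet/VGG classifiers in place of the original, which is how the paper compares accuracy with and without occlusion.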

publication date

  • 2022