BioFace-3D: Continuous 3D Facial Reconstruction Through Lightweight Single-ear Biosensors

Abstract

Over the last decade, facial landmark tracking and 3D reconstruction have gained considerable attention due to their numerous applications, such as human-computer interaction, facial expression analysis, and emotion recognition. Traditional approaches require users to be confined to a particular location and to face a camera under constrained recording conditions (e.g., without occlusions and under good lighting). This highly restricted setting prevents them from being deployed in many application scenarios involving human motion. In this paper, we propose the first single-earpiece lightweight biosensing system, BioFace-3D, that can unobtrusively, continuously, and reliably sense entire facial movements, track 2D facial landmarks, and further render 3D facial animations. Our single-earpiece biosensing system takes advantage of cross-modal transfer learning to transfer the knowledge embodied in a high-grade visual facial landmark detection model to the low-grade biosignal domain. After training, BioFace-3D can directly perform continuous 3D facial reconstruction from the biosignals, without any visual input. By removing the need for a camera positioned in front of the user, this paradigm shift from visual sensing to biosensing introduces new opportunities in many emerging mobile and IoT applications. Extensive experiments involving 16 participants under various settings demonstrate that BioFace-3D can accurately track 53 major facial landmarks with only 1.85 mm average error and 3.38% normalized mean error, which is comparable with most state-of-the-art camera-based solutions. The rendered 3D facial animations, which are consistent with the real human facial movements, also validate the system's capability in continuous 3D facial reconstruction.
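To make the cross-modal transfer idea concrete, the sketch below shows one way such training could be structured: a frozen camera-based landmark detector acts as a teacher that labels synchronized video frames, while a biosignal encoder (the student) learns to regress the same landmarks from the paired biosignal window alone. This is an illustrative assumption, not the paper's implementation; the network sizes, channel counts, and helper names here are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions for illustration only: 53 landmarks x 2 coordinates,
# and an 8-channel biosignal window of 256 samples.
NUM_LANDMARKS, SIGNAL_CHANNELS, WINDOW = 53, 8, 256


class BiosignalLandmarkNet(nn.Module):
    """Student network: regresses 2D facial landmarks from a biosignal window."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(SIGNAL_CHANNELS, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, NUM_LANDMARKS * 2)

    def forward(self, x):                  # x: (batch, channels, window)
        z = self.encoder(x).squeeze(-1)    # (batch, 64)
        return self.head(z).view(-1, NUM_LANDMARKS, 2)


def distillation_step(student, teacher, frames, biosignals, optimizer):
    """One cross-modal training step: the frozen visual teacher produces
    landmark pseudo-labels from video frames; the student is trained to
    predict the same landmarks from the synchronized biosignals."""
    with torch.no_grad():
        target = teacher(frames)           # (batch, NUM_LANDMARKS, 2)
    pred = student(biosignals)
    loss = nn.functional.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time only the student would be kept, so landmark tracking and the downstream 3D rendering run from biosignals alone, with no visual input required.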

Publication
In Proceedings of the 27th Annual International Conference on Mobile Computing and Networking (MobiCom '21)