PlumX Metrics

3D Face Reconstruction and Emotion Analytics with Part-Based Morphable Models

2018
  • 0 Citations
  • 393 Usage
  • 0 Captures
  • 0 Mentions
  • 0 Social Media


Thesis / Dissertation Description

3D face reconstruction and facial expression analytics using 3D facial data are active research topics in computer graphics and computer vision. In this proposal, we first review the background knowledge for emotion analytics using 3D morphable face models, including geometry feature-based methods, statistical model-based methods, and more advanced deep learning-based methods. Then, we introduce a novel 3D face modeling and reconstruction solution that robustly and accurately acquires 3D face models from a couple of images captured by a single smartphone camera. Two selfie photos of a subject, taken from the front and the side, are used to guide our Non-Negative Matrix Factorization (NMF) induced part-based face model to iteratively reconstruct an initial 3D face of the subject. Then, an iterative detail updating method is applied to the initial 3D face to reconstruct facial details by optimizing lighting parameters and local depths. Our iterative 3D face reconstruction method permits fully automatic registration of a part-based face representation to the acquired face data and uses the detailed 2D/3D features to build a high-quality 3D face model. The NMF part-based face representation, learned from a 3D face database, facilitates effective global fitting and adaptive local detail fitting in alternation. Our system is flexible: it allows users to conduct the capture in any uncontrolled environment. We demonstrate the capability of our method by allowing users to capture and reconstruct their 3D faces by themselves.

Based on the reconstructed 3D face model, we can analyze the facial expression and the related emotion in 3D space. We present a novel approach to analyzing facial expressions from images and a quantitative information visualization scheme for exploring this type of visual data. From the result reconstructed with the NMF part-based morphable 3D face model, basis parameters and a displacement map are extracted as features for facial emotion analysis and visualization.
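The NMF part-based representation can be illustrated with a minimal sketch. This is not the thesis's model or data: the random "face" vectors, the component count, and the use of scikit-learn's NMF are all illustrative assumptions, showing only the general idea of factorizing non-negative face data into localized part bases and per-face coefficients.

```python
# Hedged sketch: learning a part-based face basis with Non-Negative
# Matrix Factorization. The data is a random stand-in for vectorized
# face geometry, not a real 3D face database.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Hypothetical training set: 50 faces, each a non-negative
# vectorized geometry representation with 300 entries.
faces = rng.random((50, 300))

# Factorize faces ~= W @ H: H holds localized "part" bases,
# W holds per-face coefficients.
model = NMF(n_components=10, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(faces)   # (50, 10) coefficients
H = model.components_            # (10, 300) part bases

# Reconstructing a new face amounts to projecting it onto the
# learned parts and recombining them.
new_face = rng.random((1, 300))
coeffs = model.transform(new_face)
reconstruction = coeffs @ H
print(W.shape, H.shape, reconstruction.shape)
```

Because NMF keeps both factors non-negative, each basis vector tends to describe a localized face region, which is what makes part-wise global/local fitting possible.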
Based upon these features, two Support Vector Regressions (SVRs) are trained to determine the fuzzy Valence-Arousal (VA) values that quantify the emotions. The continuously changing emotion status can be intuitively analyzed by visualizing the VA values in VA-space. Our emotion analysis and visualization system, based on the 3D NMF morphable face model, detects expressions robustly across various head poses, face sizes, and lighting conditions, and fully automatically computes the VA values from images or video sequences with various facial expressions. To evaluate our novel method, we test our system on publicly available databases and evaluate the emotion analysis and visualization results. We also apply our method to quantifying emotion changes during motivational interviews. These experiments and applications demonstrate the effectiveness and accuracy of our method.

To improve expression recognition accuracy, we present a facial expression recognition approach based on a 3D Mesh Convolutional Neural Network (3DMCNN), along with a visual-analytics-guided 3DMCNN design and optimization scheme. The geometric properties of the surface are computed from the 3D face model of a subject with facial expressions. Instead of using a regular Convolutional Neural Network (CNN) to learn intensities of the facial images, we convolve the geometric properties on the surface of the 3D model using the 3DMCNN. We design a geodesic distance-based convolution method to overcome the difficulties arising from the irregular sampling of the face surface mesh. We further present an interactive visual analytics approach for designing and modifying the network, analyzing the learned features, and clustering similar nodes in the 3DMCNN. By removing low-activity nodes in the network, the performance of the network is greatly improved. We compare our method with the regular CNN-based method by interactively visualizing each layer of the networks, and we analyze the effectiveness of our method by studying representative cases.
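The core idea of a geodesic distance-based convolution on an irregular mesh can be sketched as follows. Everything here is an assumption for illustration: the toy edge list, the inverse-distance kernel, and the neighborhood radius are stand-ins, not the 3DMCNN's actual operator; the sketch only shows how geodesic distances (rather than a pixel grid) can define each vertex's receptive field.

```python
# Hedged sketch: a geodesic-neighborhood "convolution" on a mesh.
# For each vertex, aggregate the features of all vertices within a
# geodesic radius, weighted by distance along the surface graph.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

# Toy mesh graph: 5 vertices, undirected weighted edges (i, j, length).
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 4, 1.0), (0, 4, 3.0)]
n = 5
rows = [e[0] for e in edges] + [e[1] for e in edges]
cols = [e[1] for e in edges] + [e[0] for e in edges]
data = [e[2] for e in edges] * 2
graph = csr_matrix((data, (rows, cols)), shape=(n, n))

# All-pairs geodesic distances via Dijkstra on the edge graph.
dist = dijkstra(graph, directed=False)

features = np.arange(n, dtype=float)  # one scalar feature per vertex
radius = 2.0

def geodesic_conv(features, dist, radius):
    """Weighted average of features over each vertex's geodesic
    neighborhood; the inverse-distance kernel is a stand-in for a
    learned filter."""
    mask = dist <= radius
    weights = np.where(mask, 1.0 / (1.0 + dist), 0.0)
    return (weights @ features) / weights.sum(axis=1)

out = geodesic_conv(features, dist, radius)
print(out.round(3))
```

In a learnable layer the fixed kernel would be replaced by trainable weights over distance bins, but the neighborhood construction from geodesic distances stays the same.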
Testing on public datasets, our method achieves higher recognition accuracy than traditional image-based CNNs and other 3D CNNs. The presented framework, including the 3DMCNN and the interactive visual analytics of the CNN, can be extended to other applications.
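The Valence-Arousal regression step described earlier, with one SVR per VA dimension, can also be sketched briefly. The synthetic features and labels below are stand-ins (the thesis uses NMF basis parameters and a displacement map); only the structure, two independent SVRs predicting valence and arousal, follows the description above.

```python
# Hedged sketch: two Support Vector Regressions, one for valence and
# one for arousal, trained on expression features. All data here is
# synthetic and purely illustrative.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 12))  # hypothetical expression features
# Synthetic VA labels in [-1, 1], loosely tied to two feature dimensions.
valence = np.tanh(X[:, 0] + 0.1 * rng.standard_normal(100))
arousal = np.tanh(X[:, 1] + 0.1 * rng.standard_normal(100))

# One SVR per VA dimension, as the description above suggests.
svr_valence = SVR(kernel="rbf", C=1.0).fit(X, valence)
svr_arousal = SVR(kernel="rbf", C=1.0).fit(X, arousal)

# Predicted (valence, arousal) pairs for a frame sequence can then be
# plotted as a trajectory in VA-space to follow emotion changes.
va = np.column_stack([svr_valence.predict(X), svr_arousal.predict(X)])
print(va.shape)
```

Treating valence and arousal as two separate regression targets keeps each model simple; the continuous VA outputs are what make the trajectory-style visualization possible.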
