
Face processing and applications to distance learning

By: Le, Vuong
Contributor(s): Khorrami, Pooya | Tariq, Usman | Tang, Hao | Huang, Thomas
Material type: Book
Publisher: New Jersey : World Scientific, c2016
Description: xi, 126 p. : ill. ; 24 cm.
ISBN: 9789814733021
Subject(s): Optical pattern recognition | Image processing | Face perception
DDC classification: 006.4/2
Item type: REGULAR
Home library: University of Wollongong in Dubai, Main Collection
Call number: 006.42 LE FA
Status: Available
Barcode: T0011190
Total holds: 0

Includes bibliographical references (pages 109-120) and index.

Machine generated contents note:

1.1. Motivation
1.2. Overview

2.1. Introduction
2.2. Related Work
2.3. Image Features
2.3.1. Mid-Level Features
2.4. Supervised Image Descriptor Encoding
2.4.1. Supervised Soft Vector Quantization (SSVQ)
2.4.2. Multi-class SSVQ
2.4.3. Supervised Super-Vector Encoding (SSE)
2.5. Databases
2.5.1. Binghamton University 3D Facial Expression (BU-3DFE) Database
2.5.2. CMU Multi-PIE
2.6. Experiments and Discussion
2.7. Concluding Remarks

3.1. Introduction
3.2. 3D Face Model Reconstruction from 2D Images
3.2.1. Related Work
3.2.2. 3D Reconstruction from a Single Image of Arbitrary View
3.2.3. 3D Reconstruction from Stereo Images
3.2.4. Experiments and Evaluation
3.3. 3D Face Model Tracking from 2D Videos
3.3.1. Introduction
3.3.2. Literature Review
3.3.3. Real Time 3D Face Tracking from 2D Videos
3.3.4. Experimental Results
3.4. 3D Face Modeling from RGB-D Images
3.4.1. Introduction
3.4.2. 3D Face Model Fitting with RGB-D Signal
3.4.3. 3D Face Model Tracking with RGB-D Signal
3.5. Conclusions

4.1. Introduction
4.2. Previous Work
4.2.1. Feature-based Methods
4.2.2. Appearance-based Methods
4.2.3. Geometric Model-based Methods
4.2.4. Visible Light Methods
4.3. Eye Gaze Estimation Using 3D Models
4.3.1. 3D Face Model Tracking
4.3.2. Gaze Direction Estimation
4.4. Experiments
4.5. Conclusions

5.1. Introduction
5.2. Related Work
5.2.1. Expressive Speech Synthesis
5.2.2. 3D Face Modeling and Animation
5.2.3. Co-articulation of Lip Motion and Facial Expressions
5.3. System Framework and Methods
5.3.1. System Framework
5.3.2. 3D Face Modeling
5.3.3. 3D Face Animation
5.3.4. Expressive Speech Synthesis
5.4. Evaluations
5.5. Conclusions and Future Work

6.1. Introduction
6.2. Animation Model Construction
6.3. Performance-driven Animations
6.4. 3D Model-based Video Coding
6.5. Experimental Results

7.1. Introduction
7.2. Related Work
7.3. System Overview
7.4. Engagement Estimation
7.4.1. Gaze Direction
7.4.2. Emotion Recognition
7.5. Experiments
7.5.1. Qualitative Evaluation
7.5.2. Quantitative Evaluation
7.6. Conclusions and Future Work

8.1. Robust Visual Tracking and Motion Model
8.2. Client-cloud Architecture
8.3. Residual Compensation for Photorealistic Video Coding
