July 2010 Theme:
Digital Human Faces
Guest Editors' Introduction by Catherine Pelachaud and Tamy Boubekeur

Faces are an important communication vector. Through facial expressions, gaze behaviors, and head movements, faces convey information not only about a person’s emotional state and attitudes but also about discursive, pragmatic, and syntactic elements. Expressions result from subtle muscular contractions and wrinkle formation, and we perceive them through a complex filter of subsurface scattering and other nontrivial light reflections. Modeling 3D faces and their expressions has lately generated a great deal of interest, with research covering automatic and interactive generation of 3D geometry as well as rendering and animation techniques.

This research has many applications. One type relates to the creation and animation of virtual actors for films and video games. New rendering techniques yield highly realistic skin, and animators use motion capture, with or without markers, to animate the body and the face. The captured data can be precise enough to reproduce a real actor’s performance, down to the slightest movements of an emotional expression.

Another type involves the creation of autonomous agents, particularly embodied conversational agents (ECAs)—autonomous entities with communicative and emotional capabilities. ECAs serve as Web assistants, pedagogical agents, or even companions. Researchers have proposed models to specify and control ECAs’ behaviors.

This month’s theme covers the broad field of 3D faces and how they’re created, rendered, and animated. Moreover, we aim to emphasize the excellent research coming out of the computer graphics and ECA communities.

Selected Articles for Digital Human Faces

In “The Digital Emily Project: Achieving a Photoreal Digital Actor,” Oleg Alexander and his colleagues describe a state-of-the-art facial performance capture system that produces astonishing visual output. Their system combines some of the most recent capture, rigging, and compositing techniques to deliver production-quality special effects. Based on the Light Stage 5 system, their method provides high-resolution animated face geometry together with accurate measurements of specular and subsurface albedo and normals. As a result, the system produces photorealistic animated faces that often cross the well-known “uncanny valley.”

Real-time applications such as games also need realistic faces to provide an immersive experience and convey characters’ emotions. In “Real-Time Realistic Skin Translucency,” Jorge Jimenez and his colleagues propose a scalable approximation of subsurface scattering. Their method produces realistic translucency effects at high frame rates, with minimal extra cost compared to conventional rendering. The final rendering quality approaches that of offline, precomputed images and is an exciting step toward photorealistic real-time rendering of natural shapes.
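As a rough illustration of the kind of approximation involved, the sketch below estimates how much light passes through a thin slab of skin by evaluating a sum-of-Gaussians diffusion profile at the slab’s thickness, with the thickness itself derived from a shadow-map depth comparison. The profile weights, variances, and function names are illustrative assumptions, not the authors’ implementation or measured data.

    import math

    # Illustrative diffusion profile: a weighted sum of Gaussians, loosely
    # inspired by published multilayer skin profiles. These numbers are
    # placeholders, not the paper's measured values.
    SKIN_PROFILE = [(0.233, 0.0064), (0.100, 0.0484), (0.118, 0.187),
                    (0.113, 0.567), (0.358, 1.99), (0.078, 7.41)]

    def transmittance(thickness_mm):
        """Fraction of light assumed to pass through a slab of the given thickness."""
        return sum(w * math.exp(-(thickness_mm ** 2) / (2.0 * v))
                   for w, v in SKIN_PROFILE)

    def estimate_thickness(distance_to_light, shadow_map_depth):
        """Thickness seen from the light: how far the shaded point lies behind
        the surface recorded in the light's shadow map."""
        return max(distance_to_light - shadow_map_depth, 0.0)

    # Example: a roughly 2-mm-thick ear lobe lit from behind.
    print(transmittance(estimate_thickness(52.0, 50.0)))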

Also targeting real-time applications, but geared toward a much broader range of content, Liang Hu and his colleagues propose an efficient GPU implementation of the popular progressive mesh representation in “Parallel View-Dependent Level-of-Detail Control.” Their algorithm updates the displayed level of detail in a fine-grained, parallel fashion, enabling adaptive rendering of high-resolution organic objects such as faces and other detailed shapes.
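The core decision in any view-dependent scheme of this kind is whether a vertex split is worth refining for the current viewpoint. The sequential sketch below shows that test in its simplest form: project each node’s object-space error to an approximate screen-space size and refine while it exceeds a pixel tolerance. The data structure and thresholds are assumptions for illustration; the paper’s actual contribution, the fine-grained parallel GPU update, is not reproduced here.

    from dataclasses import dataclass, field

    @dataclass
    class SplitNode:
        position: tuple          # representative vertex position (x, y, z)
        error: float             # object-space error removed by this split
        children: list = field(default_factory=list)  # finer refinements

    def screen_space_error(node, eye, fov_scale):
        """Project the node's object-space error to an approximate pixel size."""
        d = sum((p - e) ** 2 for p, e in zip(node.position, eye)) ** 0.5
        return fov_scale * node.error / max(d, 1e-6)

    def select_lod(node, eye, fov_scale, tolerance, active):
        """Refine while the projected error is still visible; otherwise stop."""
        if node.children and screen_space_error(node, eye, fov_scale) > tolerance:
            for child in node.children:
                select_lod(child, eye, fov_scale, tolerance, active)
        else:
            active.append(node)  # coarse enough for this viewpoint

    # Example: a two-level hierarchy viewed from 10 units away.
    root = SplitNode((0.0, 0.0, 0.0), 0.5,
                     [SplitNode((0.1, 0.0, 0.0), 0.1),
                      SplitNode((-0.1, 0.0, 0.0), 0.1)])
    selected = []
    select_lod(root, (0.0, 0.0, 10.0), fov_scale=800.0, tolerance=2.0, active=selected)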

In “The Expressive Gaze Model: Using Gaze to Express Emotion,” Brent Lance and Stacy Marsella present a model encompassing head, torso, and eye movement in a hierarchical fashion. Their Expressive Gaze Model has two main components: a library of Gaze Warping Transformations and a procedural model of eye movement. These components combine motion capture data, procedural animation, physical animation, and even hand-crafted animation. Lance and Marsella conducted an empirical study to determine the mapping between gaze animation models and emotional states.
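The intuition behind a gaze warping transformation can be sketched in a few lines: measure how an expressive gaze shift differs from a neutral one in keyframe timing and amplitude, then reapply those differences to a new neutral movement. The (time, angle) keyframe representation and function names below are assumptions for illustration, not the authors’ data format.

    def learn_warp(neutral_keys, expressive_keys):
        """Per-keyframe timing and amplitude ratios between two gaze shifts,
        each given as a list of (time, angle) keyframes."""
        return [(te / tn if tn else 1.0, ae / an if an else 1.0)
                for (tn, an), (te, ae) in zip(neutral_keys, expressive_keys)]

    def apply_warp(neutral_keys, warp):
        """Retarget the learned timing/amplitude differences onto a new gaze shift."""
        return [(t * wt, a * wa) for (t, a), (wt, wa) in zip(neutral_keys, warp)]

    # Example: a warp learned from one pair of motions, reused on another shift.
    warp = learn_warp([(0.0, 0.0), (0.3, 20.0), (0.6, 25.0)],
                      [(0.0, 0.0), (0.5, 30.0), (1.0, 35.0)])
    slower_wider_shift = apply_warp([(0.0, 0.0), (0.2, 10.0), (0.4, 12.0)], warp)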

Nicolas Stoiber and his colleagues discuss a real-time animation system that reproduces the dynamics of facial expressions in “Modeling Short-Term Dynamics and Variability for Realistic Interactive Facial Animation.” Their system uses motion capture data of expressive facial animations. Rather than treating the face as a single object, the authors develop several motion models, each controlling a given part of the face. These models are trained on the motion capture data and learn the dynamic characteristics of the various facial regions. Furthermore, a stochastic component reproduces the variability of human expressions. The resulting animation looks very natural.
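A schematic of the per-region idea: drive each facial region with its own simple dynamic model and add a small random term so repeated expressions never come out exactly alike. The regions, the first-order dynamics, and the noise scale below are placeholders, not the trained models described in the article.

    import random

    REGIONS = {"brows": 0.3, "eyelids": 0.6, "mouth": 0.4}  # per-region responsiveness

    def animate_frame(targets, previous, jitter=0.01):
        """One animation step: each region moves part of the way toward its
        target, plus a small stochastic term for natural variability."""
        frame = {}
        for region, gain in REGIONS.items():
            value = previous[region] + gain * (targets[region] - previous[region])
            frame[region] = value + random.gauss(0.0, jitter)
        return frame

    # Example: easing from a neutral pose toward a smile over a few frames.
    pose = {"brows": 0.0, "eyelids": 0.0, "mouth": 0.0}
    smile = {"brows": 0.2, "eyelids": 0.1, "mouth": 0.8}
    for _ in range(5):
        pose = animate_frame(smile, pose)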

Zhigang Deng and his colleagues discuss another example of a learning algorithm applied to motion capture data in “Expressive Facial Animation Synthesis by Learning Speech Coarticulation and Expression Spaces.” The algorithm learns expressive speech coarticulation from audiovisual recordings of an actor speaking sentences with various emotions. The final animation blends the lip movements produced by the learned coarticulation model with facial expressions drawn from the learned expression space.
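That final blending step lends itself to a simple sketch: per frame, combine mouth motion coming from the speech model with an expression pose, trusting the speech model more on lip-related controls. The control names, weights, and the linear blend below are illustrative assumptions rather than the paper’s actual parameterization.

    def blend_frame(lip_pose, expression_pose, lip_weights):
        """Blend two facial poses, trusting the speech-driven pose on mouth
        controls and the expression pose everywhere else."""
        blended = {}
        for control, lip_value in lip_pose.items():
            w = lip_weights.get(control, 0.0)   # 1.0 near the mouth, 0.0 elsewhere
            blended[control] = w * lip_value + (1.0 - w) * expression_pose[control]
        return blended

    # Example: an open-jaw viseme layered over a raised-brow expression.
    frame = blend_frame({"jaw_open": 0.6, "brow_raise": 0.0},
                        {"jaw_open": 0.1, "brow_raise": 0.7},
                        {"jaw_open": 1.0, "brow_raise": 0.0})
    # frame -> {"jaw_open": 0.6, "brow_raise": 0.7}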

We hope you enjoy reading these articles as much as we did. For further information, take a look at the Related Resources below.


Guest Editors

Catherine Pelachaud is a French National Center for Scientific Research (CNRS) director of research in the Multimedia group of the Signal and Image Processing department at the CNRS Laboratory for Information Processing and Communication, located at Telecom ParisTech. Contact her at catherine.pelachaud@telecom-paristech.fr.

Tamy Boubekeur is an associate professor of computer science leading the Computer Graphics group in the Signal and Image Processing department at the CNRS Laboratory for Information Processing and Communication, located at Telecom ParisTech. Contact him at tamy.boubekeur@telecom-paristech.fr.


Related Resources

Articles and research

(Note: Access to the full text of some articles may require login.)

Tools

Open source Embodied Conversational Agent systems