Results 11 - 20 of 362
DyRT: Dynamic Response Textures for Real Time Deformation Simulation with Graphics Hardware
2002
"... In this paper we describe how to simulate geometrically complex, interactive, physically-based, volumetric, dynamic deformation models with negligible main CPU costs. This is achieved using a Dynamic Response Texture, or DyRT, that can be mapped onto any conventional animation as an optional renderi ..."
Abstract
-
Cited by 96 (13 self)
- Add to MetaCart
(Show Context)
In this paper we describe how to simulate geometrically complex, interactive, physically-based, volumetric, dynamic deformation models with negligible main CPU costs. This is achieved using a Dynamic Response Texture, or DyRT, that can be mapped onto any conventional animation as an optional rendering stage using commodity graphics hardware. The DyRT simulation process employs precomputed modal vibration models excited by rigid body motions. We present several examples, with an emphasis on bone-based character animation for interactive applications.
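The modal-dynamics core of this approach is easy to prototype on the CPU. Below is a minimal Python sketch, not the paper's GPU/texture implementation: each precomputed vibration mode is treated as an independent damped oscillator excited by rigid-body motion, and vertex displacements are reconstructed from the modal amplitudes. All sizes, frequencies, and the random stand-in mode shapes are illustrative assumptions.

import numpy as np

# Minimal CPU sketch of precomputed modal dynamics (not the paper's GPU/DyRT
# texture implementation): each vibration mode i is an independent damped
# oscillator  q_i'' + 2*xi_i*w_i*q_i' + w_i^2*q_i = f_i(t), excited by rigid
# body motion, and vertex displacements are reconstructed as u = U @ q.

rng = np.random.default_rng(0)
n_verts, n_modes = 500, 8                        # hypothetical sizes
U = rng.standard_normal((3 * n_verts, n_modes))  # stand-in mode shapes
w = np.linspace(20.0, 120.0, n_modes)            # modal frequencies (rad/s), assumed
xi = np.full(n_modes, 0.05)                      # modal damping ratios, assumed

def step_modes(q, qd, f, dt):
    """Advance the decoupled modal oscillators one semi-implicit Euler step."""
    qdd = f - 2.0 * xi * w * qd - w ** 2 * q
    qd = qd + dt * qdd
    return q + dt * qd, qd

q, qd = np.zeros(n_modes), np.zeros(n_modes)
for frame in range(240):
    # Rigid-body acceleration of the driving bone projects onto each mode;
    # here a single impulsive kick at frame 0 stands in for that excitation.
    f = (50.0 if frame == 0 else 0.0) * np.ones(n_modes)
    q, qd = step_modes(q, qd, f, dt=1.0 / 60.0)
    displacements = U @ q                        # per-vertex offsets, shape (3n,)

In DyRT proper, per the abstract, this runs as an optional rendering stage on commodity graphics hardware, so the per-vertex reconstruction never touches the main CPU.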
Automatic determination of facial muscle activations from sparse motion capture marker data
ACM Trans. Graph. (SIGGRAPH Proc.), 2005
"... We built an anatomically accurate model of facial musculature, passive tissue and underlying skeletal structure using volumetric data acquired from a living male subject. The tissues are endowed with a highly nonlinear constitutive model including controllable anisotropic muscle activations based on ..."
Abstract
-
Cited by 94 (7 self)
- Add to MetaCart
We built an anatomically accurate model of facial musculature, passive tissue and underlying skeletal structure using volumetric data acquired from a living male subject. The tissues are endowed with a highly nonlinear constitutive model including controllable anisotropic muscle activations based on fiber directions. Detailed models of this sort can be difficult to animate, requiring complex coordinated stimulation of the underlying musculature. We propose a solution to this problem: automatically determining muscle activations that track a sparse set of surface landmarks, e.g., acquired from motion capture marker data. Since the resulting animation is obtained via a three-dimensional nonlinear finite element method, we obtain visually plausible and anatomically correct deformations with spatial and temporal coherence that provides robustness against outliers in the motion capture data. Moreover, the obtained muscle activations can be used in a robust simulation framework including contact and collision of the face with external objects.
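The inverse problem described here, activations from markers, can be sketched as a bounded least-squares fit. The snippet below is a toy stand-in, not the authors' nonlinear FEM pipeline: a made-up smooth forward model plays the role of the simulator, and scipy solves for activations in [0, 1] that track a synthetic marker frame. All names and sizes are hypothetical.

import numpy as np
from scipy.optimize import least_squares

# Sketch of the inverse-activation idea: find muscle activations a in [0, 1]
# whose predicted landmark positions best match motion-capture markers.
# The paper drives a nonlinear finite element face model; a made-up smooth
# forward model landmarks(a) stands in here so the script runs.

rng = np.random.default_rng(1)
n_muscles, n_markers = 12, 30
B = rng.standard_normal((3 * n_markers, n_muscles))  # hypothetical basis
rest = rng.standard_normal(3 * n_markers)

def landmarks(a):
    # Stand-in nonlinear forward model (NOT the paper's FEM simulator).
    return rest + B @ np.tanh(a)

target = landmarks(rng.uniform(0.0, 1.0, n_muscles))  # synthetic mocap frame

def residual(a):
    return landmarks(a) - target

fit = least_squares(residual, x0=np.full(n_muscles, 0.5), bounds=(0.0, 1.0))
print("residual norm:", np.linalg.norm(fit.fun))

Because the fit is over residuals of all markers jointly, a few outlier markers perturb the solution less than they would a per-marker fit, which is consistent with the robustness claim in the abstract.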
Multi-Weight Enveloping: Least-Squares Approximation Techniques for Skin Animation
2002
"... We present a process called multi-weight enveloping for deforming the skin geometry of the body of a digital creature around its skeleton. It is based on a deformation equation whose coefficients we compute using a statistical fit to an input training exercise. In this input, the skeleton and the sk ..."
Abstract
-
Cited by 85 (0 self)
- Add to MetaCart
We present a process called multi-weight enveloping for deforming the skin geometry of the body of a digital creature around its skeleton. It is based on a deformation equation whose coefficients we compute using a statistical fit to an input training exercise. In this input, the skeleton and the skin move together, by arbitrary external means, through a range of motion representative of what the creature is expected to achieve in practice. The input can also come from existing pieces of handcrafted skin animation. Using a modified least-squares fitting technique, we compute the coefficients, or “weights”, of the deformation equation. The result is that the equation generalizes the skin movement so that it applies well to other sequences of animation. The multi-weight deformation equation is computationally efficient to evaluate; once the training process is complete, even creatures with high levels of geometric detail can move at interactive frame rates with a look that approximates that of anatomical, physically-based models. We demonstrate the technique in a feature film production environment, on a human model whose input poses are sculpted by hand and an animal model whose input poses come from the output of an anatomically-based dynamic simulation.
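As a rough illustration of the fitting stage, the sketch below builds a per-vertex linear model over the entries of the bone matrices and solves for its weights with ridge-regularized least squares over synthetic training poses. This is an assumed simplification of the paper's modified least-squares technique; the feature layout and constants are illustrative.

import numpy as np

# Sketch of a multi-weight enveloping fit: each deformed vertex position is
# a linear function of the entries of nearby bone matrices, and the weights
# are found by regularized least squares over training poses. Names and
# sizes here are illustrative, not from the paper.

rng = np.random.default_rng(2)
n_poses, n_bones = 40, 3
# Feature vector per pose: the 12 affine entries of each bone matrix, plus 1.
feats = np.concatenate(
    [rng.standard_normal((n_poses, 12 * n_bones)),
     np.ones((n_poses, 1))], axis=1)
# Training positions of one vertex (x coordinate only, for brevity).
x_train = (feats @ rng.standard_normal(feats.shape[1])
           + 0.01 * rng.standard_normal(n_poses))

lam = 1e-2  # ridge term; plain ridge regression stands in for the paper's
            # modified least-squares fit that keeps weights well-behaved
A = feats.T @ feats + lam * np.eye(feats.shape[1])
weights = np.linalg.solve(A, feats.T @ x_train)

# At runtime, a new pose's bone matrices are flattened the same way and the
# vertex position is a single dot product, cheap enough for interactive rates.
new_feat = np.concatenate([rng.standard_normal(12 * n_bones), [1.0]])
x_new = new_feat @ weights

The dot-product evaluation is why, as the abstract notes, detailed creatures can still deform at interactive frame rates once training is done.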
Construction and Animation of Anatomically Based Human Hand Models
2003
"... The human hand is a masterpiece of mechanical complexity, able to perform fine motor manipulations and powerful work alike. Designing an animatable human hand model that features the abilities of the archetype created by Nature requires a great deal of anatomical detail to be modeled. In this pape ..."
Abstract
-
Cited by 81 (2 self)
- Add to MetaCart
The human hand is a masterpiece of mechanical complexity, able to perform fine motor manipulations and powerful work alike. Designing an animatable human hand model that features the abilities of the archetype created by Nature requires a great deal of anatomical detail to be modeled. In this paper, we present a human hand model with underlying anatomical structure. Animation of the hand model is controlled by muscle contraction values. We employ a ...
Modeling, Tracking and Interactive Animation of Faces and Heads using Input from Video
In Proceedings of the Computer Animation Conference, 1996
"... We describe tools that use measurements from video for the extraction of facial modeling and animation parameters, head tracking, and real-time interactive facial animation. These tools share common goals but rely on varying details of physical and geometric modeling and in their input measurement ..."
Abstract
-
Cited by 81 (7 self)
- Add to MetaCart
We describe tools that use measurements from video for the extraction of facial modeling and animation parameters, head tracking, and real-time interactive facial animation. These tools share common goals but differ in the details of their physical and geometric modeling and in their input measurement systems. Accurate facial ...
3D ChainMail: a Fast Algorithm for Deforming Volumetric Objects
1996
"... An algorithm is presented that enables fast deformation of volumetric objects. Using this algorithm,... ..."
Abstract
-
Cited by 80 (11 self)
- Add to MetaCart
An algorithm is presented that enables fast deformation of volumetric objects. Using this algorithm,...
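Although the abstract is truncated, the published ChainMail idea is well known: each element keeps its distance to its neighbors within fixed bounds, and moving one element propagates minimal corrections outward in a single pass. Below is a 1-D Python sketch of that propagation; the paper operates on 3-D volumes, and the bounds here are made up.

# Sketch of the ChainMail idea in 1-D: elements keep their spacing to each
# neighbor within [d_min, d_max]; when one element is moved, violations
# propagate outward in a single fast pass with no iterative relaxation.
# Constants are illustrative.

d_min, d_max = 0.5, 1.5

def chainmail_move(x, i, new_pos):
    """Move element i of chain x and propagate constraints outward."""
    x = list(x)
    x[i] = new_pos
    for j in range(i + 1, len(x)):           # sweep right
        gap = x[j] - x[j - 1]
        if gap > d_max:
            x[j] = x[j - 1] + d_max
        elif gap < d_min:
            x[j] = x[j - 1] + d_min
        else:
            break                             # constraint satisfied: stop early
    for j in range(i - 1, -1, -1):            # sweep left
        gap = x[j + 1] - x[j]
        if gap > d_max:
            x[j] = x[j + 1] - d_max
        elif gap < d_min:
            x[j] = x[j + 1] - d_min
        else:
            break
    return x

chain = [0.0, 1.0, 2.0, 3.0, 4.0]
print(chainmail_move(chain, 2, 4.2))   # dragging the middle pulls neighbors

The early break is the source of the speed: deformation cost is proportional to the number of elements actually disturbed, not the size of the volume.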
MikeTalk: A Talking Facial Display Based on Morphing Visemes
In Proceedings of the Computer Animation Conference, 1998
"... We present MikeTalk, a text-to-audiovisual speech synthesizer which converts input text into an audiovisual speech stream. MikeTalk is built using visemes, which are a set of images spanning a large range of mouth shapes. The visemes are acquired from a recorded visual corpus of a human subject whi ..."
Abstract
-
Cited by 76 (8 self)
- Add to MetaCart
(Show Context)
We present MikeTalk, a text-to-audiovisual speech synthesizer which converts input text into an audiovisual speech stream. MikeTalk is built using visemes, which are a set of images spanning a large range of mouth shapes. The visemes are acquired from a recorded visual corpus of a human subject which is specifically designed to elicit one instantiation of each viseme. Using optical flow methods, correspondence from every viseme to every other viseme is computed automatically. By morphing along this correspondence, a smooth transition between viseme images may be generated. A complete visual utterance is constructed by concatenating viseme transitions. Finally, phoneme and timing information extracted from a text-to-speech synthesizer is exploited to determine which viseme transitions to use, and the rate at which the morphing process should occur. In this manner, we are able to synchronize the visual speech stream with the audio speech stream, and hence give the impression of a photorealistic talking face.
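A minimal sketch of the morphing step, under heavy assumptions: given two viseme images and a dense flow field between them, an intermediate frame is produced by blending flow-warped copies of the endpoints. The backward-warping shortcut and the synthetic images and flow below are stand-ins, not the paper's optical-flow correspondence machinery.

import numpy as np
from scipy.ndimage import map_coordinates

# Sketch of viseme morphing: blend flow-warped copies of two mouth images to
# get an intermediate frame. This is a crude backward-warp approximation of
# the morph described in the abstract; the images and flow are synthetic.

def warp(img, flow, t):
    """Sample img at positions displaced by -t * flow (backward warping)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    coords = np.stack([ys - t * flow[..., 1], xs - t * flow[..., 0]])
    return map_coordinates(img, coords, order=1, mode="nearest")

def morph(img0, img1, flow01, alpha):
    """Intermediate frame at alpha in [0, 1] between two visemes."""
    a = warp(img0, flow01, alpha)          # img0 pushed toward img1
    b = warp(img1, -flow01, 1.0 - alpha)   # img1 pulled back toward img0
    return (1.0 - alpha) * a + alpha * b

h, w = 64, 64
img0 = np.zeros((h, w)); img0[28:36, 20:44] = 1.0   # stand-in "closed mouth"
img1 = np.zeros((h, w)); img1[20:44, 20:44] = 1.0   # stand-in "open mouth"
flow01 = np.zeros((h, w, 2))                        # toy flow field
frame = morph(img0, img1, flow01, alpha=0.5)

Sweeping alpha from 0 to 1 at the rate dictated by the phoneme timing yields one viseme-to-viseme transition; concatenating such transitions gives the full utterance.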
Eyes Alive
"... For an animated human face model to appear natural it should produce eye movements consistent with human ocular behavior. During face-to-face conversational interactions, eyes exhibit conversational turn-taking and agent thought processes through gaze direction, saccades, and scan patterns. We have ..."
Abstract
-
Cited by 75 (1 self)
- Add to MetaCart
For an animated human face model to appear natural, it should produce eye movements consistent with human ocular behavior. During face-to-face conversational interactions, eyes exhibit conversational turn-taking and agent thought processes through gaze direction, saccades, and scan patterns. We have implemented an eye movement model based on empirical models of saccades and statistical models of eye-tracking data. Face animations using stationary eyes, eyes with random saccades only, and eyes with statistically derived saccades are compared to evaluate whether they appear natural and effective while communicating.
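A hedged sketch of what a statistically driven saccade generator might look like: inter-saccade intervals and magnitudes are drawn from simple distributions and converted to durations with the roughly linear magnitude-to-duration rule reported in the saccade literature. The distributions and constants are illustrative stand-ins, not the paper's fitted eye-tracking statistics.

import numpy as np

# Sketch of a statistically driven saccade generator in the spirit of the
# abstract. Distributions and constants below are assumed, not the paper's.

rng = np.random.default_rng(3)

def sample_saccades(total_time_s):
    events, t = [], 0.0
    while True:
        t += rng.exponential(scale=1.2)            # assumed mean gap ~1.2 s
        if t >= total_time_s:
            return events
        magnitude = min(rng.gamma(2.0, 4.0), 30.0)  # degrees, assumed
        # A common empirical rule of thumb: duration grows roughly linearly
        # with magnitude (about 2.2 ms per degree plus a ~21 ms intercept).
        duration = 0.021 + 0.0022 * magnitude
        direction = rng.uniform(0.0, 360.0)         # degrees
        events.append((t, magnitude, direction, duration))

for ev in sample_saccades(10.0):
    print("t=%.2fs  mag=%.1fdeg  dir=%.0fdeg  dur=%.0fms"
          % (ev[0], ev[1], ev[2], ev[3] * 1000))

Each sampled event would then drive the eye rig of the face model between fixations, replacing both the stationary and purely random baselines the abstract compares against.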
Regularized Bundle-Adjustment to Model Heads from Image Sequences without Calibration Data
International Journal of Computer Vision, 2000
"... We address the structure-from-motion problem in the context of head modeling from video sequences for which calibration data is not available. This task is made challenging by the fact that correspondences are difficult to establish due to lack of texture and that a quasi-euclidean representation ..."
Abstract
-
Cited by 74 (13 self)
- Add to MetaCart
We address the structure-from-motion problem in the context of head modeling from video sequences for which calibration data is not available. This task is made challenging by the fact that correspondences are difficult to establish due to lack of texture and that a quasi-Euclidean representation is required for realism.
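To make the "regularized" part concrete, the sketch below runs a toy bundle adjustment in which reprojection residuals are augmented with a term pulling the recovered points toward a generic head shape. Rotations and intrinsics are frozen and all data is synthetic, so this is an assumed simplification rather than the paper's formulation.

import numpy as np
from scipy.optimize import least_squares

# Toy regularized bundle adjustment: jointly refine 3-D points and per-frame
# translations so reprojections match image measurements, with an extra
# residual pulling points toward a generic head prior when image evidence
# is weak. Everything below is synthetic so the script runs.

rng = np.random.default_rng(4)
n_frames, n_pts, f_len = 4, 25, 800.0        # focal length (pixels), assumed

pts_true = rng.standard_normal((n_pts, 3)) + [0, 0, 10]
trans_true = 0.2 * rng.standard_normal((n_frames, 3))
generic = pts_true + 0.1 * rng.standard_normal((n_pts, 3))  # head prior

def project(pts, t):
    p = pts + t
    return f_len * p[:, :2] / p[:, 2:3]

obs = np.stack([project(pts_true, t) for t in trans_true])  # (frames, pts, 2)
obs += 0.5 * rng.standard_normal(obs.shape)                 # pixel noise

lam = 1.0  # regularization weight: larger trusts the generic model more

def residual(x):
    trans = x[:3 * n_frames].reshape(n_frames, 3)
    pts = x[3 * n_frames:].reshape(n_pts, 3)
    reproj = [(project(pts, t) - o).ravel() for t, o in zip(trans, obs)]
    reg = np.sqrt(lam) * (pts - generic).ravel()
    return np.concatenate(reproj + [reg])

x0 = np.concatenate([np.zeros(3 * n_frames), generic.ravel()])
fit = least_squares(residual, x0)
print("final cost:", fit.cost)

The regularizer is what keeps the reconstruction well-posed on low-texture faces, where pure reprojection error alone would leave many point positions poorly constrained.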
Visual Speech Synthesis by Morphing Visemes
1999
"... We present MikeTalk, a text-to-audiovisual speech synthesizer which converts input text into an audiovisual speech stream. MikeTalk is built using visemes, which are a small set of images spanning a large range of mouth shapes. The visemes are acquired from a recorded visual corpus of a human subjec ..."
Abstract
-
Cited by 72 (9 self)
- Add to MetaCart
We present MikeTalk, a text-to-audiovisual speech synthesizer which converts input text into an audiovisual speech stream. MikeTalk is built using visemes, which are a small set of images spanning a large range of mouth shapes. The visemes are acquired from a recorded visual corpus of a human subject which is specifically designed to elicit one instantiation of each viseme. Using optical flow methods, correspondence from every viseme to every other viseme is computed automatically. By morphing along this correspondence, a smooth transition between viseme images may be generated. A complete visual utterance is constructed by concatenating viseme transitions. Finally, phoneme and timing information extracted from a text-to-speech synthesizer is exploited to determine which viseme transitions to use, and the rate at which the morphing process should occur. In this manner, we are able to synchronize the visual speech stream with the audio speech stream, and hence give the impression of a photorealistic talking face.
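Complementing the morphing sketch earlier in this list, here is an assumed sketch of the scheduling step: phonemes and durations from a text-to-speech front end (faked below) are mapped to visemes, and each adjacent pair becomes a timed morph transition. The phoneme-to-viseme table is a tiny illustrative subset, not the paper's full mapping.

# Sketch of the concatenation step: TTS phonemes and durations drive which
# viseme transitions play and how fast the morph runs, keeping the visual
# stream synchronized with the audio. All names below are hypothetical.

PHONEME_TO_VISEME = {"HH": "viseme_ah", "AH": "viseme_ah",
                     "L": "viseme_l", "OW": "viseme_oh", "M": "viseme_m"}

def schedule_transitions(phonemes):
    """phonemes: list of (phoneme, duration_s) pairs from a TTS front end."""
    visemes = [(PHONEME_TO_VISEME.get(p, "viseme_neutral"), d)
               for p, d in phonemes]
    schedule, t = [], 0.0
    for (v0, d), (v1, _) in zip(visemes, visemes[1:]):
        # Morph from v0 to v1 over v0's duration, preserving audio sync.
        schedule.append({"from": v0, "to": v1, "start": t, "duration": d})
        t += d
    return schedule

# "hello" as hypothetical TTS output: (phoneme, seconds)
tts = [("HH", 0.08), ("AH", 0.10), ("L", 0.09), ("OW", 0.15)]
for seg in schedule_transitions(tts):
    print(seg)

Each scheduled segment would then be rendered by sweeping the morph parameter over the segment's duration, so the mouth reaches the next viseme exactly when the corresponding phoneme ends.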