Results 1 - 10 of 19
Style-Content Separation by Anisotropic Part Scales
"... We perform co-analysis of a set of man-made 3D objects to allow the creation of novel instances derived from the set. We analyze the objects at the part level and treat the anisotropic part scales as a shape style. The co-analysis then allows style transfer to synthesize new objects. The key to co-a ..."
Cited by 34 (21 self)
We perform co-analysis of a set of man-made 3D objects to allow the creation of novel instances derived from the set. We analyze the objects at the part level and treat the anisotropic part scales as a shape style. The co-analysis then allows style transfer to synthesize new objects. The key to co-analysis is part correspondence, where a major challenge is the handling of large style variations and diverse geometric content in the shape set. We propose style-content separation as a means to address this challenge. Specifically, we define a correspondence-free style signature for style clustering. We show that confining analysis to within a style cluster facilitates tasks such as co-segmentation, content classification, and deformation-driven part correspondence. With part correspondence between each pair of shapes in the set, style transfer can be easily performed. We demonstrate our analysis and synthesis results on several sets of man-made objects with style and content variations.
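The correspondence-free style signature idea above can be sketched in miniature: reduce each shape to the sorted elongation ratios of its part bounding boxes, then cluster shapes whose signatures are close. This is a hypothetical illustration, not the paper's actual signature; the part boxes, greedy clustering rule, and distance threshold are all assumptions.

```python
def style_signature(part_boxes):
    """part_boxes: list of (width, height, depth) extents, one per part.
    Returns the sorted elongation ratios (max/min extent per part); no
    correspondence between parts of different shapes is required."""
    return tuple(sorted(max(b) / min(b) for b in part_boxes))

def style_distance(sig_a, sig_b):
    """L1 distance between signatures, padding the shorter one with 1.0
    (an isotropic part contributes no anisotropy)."""
    n = max(len(sig_a), len(sig_b))
    a = list(sig_a) + [1.0] * (n - len(sig_a))
    b = list(sig_b) + [1.0] * (n - len(sig_b))
    return sum(abs(x - y) for x, y in zip(a, b))

def cluster_by_style(shapes, threshold=1.0):
    """Greedy clustering: each shape joins the first cluster whose
    representative signature lies within the threshold, else starts one."""
    clusters = []  # list of (representative_signature, [shape_ids])
    for shape_id, boxes in shapes.items():
        sig = style_signature(boxes)
        for rep, members in clusters:
            if style_distance(rep, sig) <= threshold:
                members.append(shape_id)
                break
        else:
            clusters.append((sig, [shape_id]))
    return [members for _, members in clusters]
```

Confining further analysis (co-segmentation, part correspondence) to within one returned cluster is then straightforward.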
Sampling-based Contact-rich Motion Control
"... (a) A forward roll transformed to a dive roll. (b) A cartwheel retargeted to an Asimo-like robot. (c) A walk transformed onto a balance beam. Figure 1: Physically based motion transformation and retargeting. Human motions are the product of internal and external forces, but these forces are very dif ..."
Cited by 23 (7 self)
Figure 1: Physically based motion transformation and retargeting. (a) A forward roll transformed to a dive roll. (b) A cartwheel retargeted to an Asimo-like robot. (c) A walk transformed onto a balance beam. Human motions are the product of internal and external forces, but these forces are very difficult to measure in a general setting. Given a motion capture trajectory, we propose a method to reconstruct its open-loop control and the implicit contact forces. The method employs a strategy based on randomized sampling of the control within user-specified bounds, coupled with forward dynamics simulation. Sampling-based techniques are well suited to this task because of their lack of dependence on derivatives, which are difficult to estimate in contact-rich scenarios. They are also easy to parallelize, which we exploit in our implementation on a compute cluster. We demonstrate reconstruction of a diverse set of captured motions, including walking, running, and contact-rich tasks such as rolls and kip-up jumps. We further show how the method can be applied to physically based motion transformation and retargeting, physically plausible motion variations, and reference-trajectory-free idling motions. Alongside the successes, we point out a number of limitations and directions for future work.
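The randomized-sampling strategy can be illustrated on a toy 1D point mass rather than the paper's humanoid setup: candidate open-loop force sequences are drawn within user-specified bounds, forward-simulated, and scored against a reference trajectory, with no derivatives required. The dynamics, bounds, and sample count below are assumptions for illustration only.

```python
import random

def simulate(forces, dt=0.1, mass=1.0):
    """Forward dynamics for a 1D point mass; returns the position trajectory."""
    x, v, traj = 0.0, 0.0, []
    for f in forces:
        v += (f / mass) * dt
        x += v * dt
        traj.append(x)
    return traj

def reconstruct_control(reference, bounds=(-5.0, 5.0), samples=2000, seed=0):
    """Derivative-free search: sample force sequences uniformly within the
    bounds and keep the one whose simulated trajectory best matches the
    reference (sum of squared position errors)."""
    rng = random.Random(seed)
    n = len(reference)
    best, best_err = None, float("inf")
    for _ in range(samples):
        forces = [rng.uniform(*bounds) for _ in range(n)]
        traj = simulate(forces)
        err = sum((a - b) ** 2 for a, b in zip(traj, reference))
        if err < best_err:
            best, best_err = forces, err
    return best, best_err
```

Each sample is evaluated independently, which is what makes this family of methods easy to parallelize across a compute cluster.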
Continuous Character Control with Low-Dimensional Embeddings
"... Figure 1: Character controllers created using our approach: animals, karate punching and kicking, and directional walking. Interactive, task-guided character controllers must be agile and responsive to user input, while retaining the flexibility to be readily authored and modified by the designer. C ..."
Cited by 14 (1 self)
Figure 1: Character controllers created using our approach: animals, karate punching and kicking, and directional walking. Interactive, task-guided character controllers must be agile and responsive to user input, while retaining the flexibility to be readily authored and modified by the designer. Central to a method’s ease of use is its capacity to synthesize character motion for novel situations without requiring excessive data or programming effort. In this work, we present a technique that animates characters performing user-specified tasks by using a probabilistic motion model, which is trained on a small number of artist-provided animation clips. The method uses a low-dimensional space learned from the example motions to continuously control the character’s pose to accomplish the desired task. By controlling the character through a reduced space, our method can discover new transitions, tractably precompute a control policy, and avoid low quality poses.
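The reduced-space idea can be illustrated with the simplest possible embedding: a one-dimensional linear subspace fitted to example poses by power iteration, with a scalar latent coordinate decoded back to a full pose. The paper's actual model is a probabilistic, nonlinear embedding; the toy poses and the linear decoder below are illustrative assumptions.

```python
def mean_pose(poses):
    n = len(poses)
    return [sum(p[i] for p in poses) / n for i in range(len(poses[0]))]

def principal_direction(poses, iters=100):
    """Power iteration on the covariance of centered poses: returns the mean
    pose and the unit direction of greatest variation (assumes the example
    poses are not all identical)."""
    mu = mean_pose(poses)
    centered = [[x - m for x, m in zip(p, mu)] for p in poses]
    v = [1.0] * len(mu)
    for _ in range(iters):
        # Apply C = sum_c c c^T to v without forming C explicitly.
        w = [0.0] * len(mu)
        for c in centered:
            dot = sum(ci * vi for ci, vi in zip(c, v))
            for i, ci in enumerate(c):
                w[i] += dot * ci
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mu, v

def decode(mu, v, z):
    """Map a scalar latent coordinate z back to a full pose."""
    return [m + z * vi for m, vi in zip(mu, v)]
```

Controlling z instead of the full pose vector is what makes precomputing a control policy tractable in a reduced space.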
Synthesis of responsive motion using a dynamic model
Computer Graphics Forum (Eurographics Proceedings), 2010
"... Synthesizing the movements of a responsive virtual character in the event of unexpected perturbations has proven a difficult challenge. To solve this problem, we devise a fully automatic method that learns a nonlinear probabilistic model of dynamic responses from very few perturbed walking sequences ..."
Cited by 12 (1 self)
Synthesizing the movements of a responsive virtual character in the event of unexpected perturbations has proven a difficult challenge. To solve this problem, we devise a fully automatic method that learns a nonlinear probabilistic model of dynamic responses from very few perturbed walking sequences. This model is able to synthesize responses and recovery motions under new perturbations different from those in the training examples. When perturbations occur, we propose a physics-based method that initiates motion transitions to the most probable response example based on the dynamic states of the character. Our algorithm can be applied to any motion sequences without the need for preprocessing such as segmentation or alignment. The results show that three perturbed motion clips can sufficiently generate a variety of realistic responses, and 14 clips can create a responsive virtual character that reacts realistically to external forces in different directions applied on different body parts at different moments in time.
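The transition-selection step (jump to the most probable response example given the character's current dynamic state) can be approximated by a nearest-neighbor lookup in state space. The example records and state vectors below are illustrative assumptions, and the paper's learned probabilistic model is replaced here by a plain Euclidean distance.

```python
def select_response(state, examples):
    """Pick the response example whose recorded initial dynamic state is
    nearest (Euclidean) to the character's current state. `state` and each
    `initial_state` are flat feature vectors (e.g. velocities), which is an
    assumed representation for this sketch."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(examples, key=lambda ex: dist(state, ex["initial_state"]))
```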
Motion Graphs++: a Compact Generative Model for Semantic Motion Analysis and Synthesis
"... Figure 1: Semantic motion analysis (left) and synthesis (right) with our generative statistical model. This paper introduces a new generative statistical model that allows for human motion analysis and synthesis at both semantic and kinematic levels. Our key idea is to decouple complex variations of ..."
Cited by 4 (0 self)
Figure 1: Semantic motion analysis (left) and synthesis (right) with our generative statistical model. This paper introduces a new generative statistical model that allows for human motion analysis and synthesis at both semantic and kinematic levels. Our key idea is to decouple complex variations of human movements into finite structural variations and continuous style variations and encode them with a concatenation of morphable functional models. This allows us to model not only a rich repertoire of behaviors but also an infinite number of style variations within the same action. Our models are appealing for motion analysis and synthesis because they are highly structured, contact-aware, and semantically embedded. We have constructed a compact generative motion model from a huge and heterogeneous motion database (about two hours of mocap data and more than 15 different actions). We have demonstrated the power and effectiveness of our models in a wide variety of applications, ranging from automatic motion segmentation, recognition, and annotation, through online/offline motion synthesis at both the kinematic and behavior levels, to semantic motion editing. We show the superiority of our model by comparing it with alternative methods.
A style controller for generating virtual human behaviors
"... Creating a virtual character that exhibits realistic physical behaviors requires a rich set of animations. To mimic the variety as well as the subtlety of human behavior, we may need to animate not only a wide range of behaviors but also variations of the same type of behavior influenced by the envi ..."
Cited by 4 (2 self)
Creating a virtual character that exhibits realistic physical behaviors requires a rich set of animations. To mimic the variety as well as the subtlety of human behavior, we may need to animate not only a wide range of behaviors but also variations of the same type of behavior influenced by the environment and the state of the character, including its emotional and physiological state. A general approach to this challenge is to gather a set of animations produced by artists or motion capture. However, this approach can be extremely costly in time and effort. In this work, we propose a model that can learn styled motion generation and an algorithm that produces new styles of motion via style interpolation. The model takes a set of styled motions as training samples and creates new motions that generalize among the given styles. Our style interpolation algorithm can blend together motions with distinct styles and improves on the performance of previous work. We verify our algorithm using walking motions of different styles, and the experimental results show that our method is significantly better than previous work.
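Style interpolation in its most basic form is a weighted, frame-by-frame blend of two time-aligned styled motions. The paper's learned model goes beyond this naive baseline, so the sketch below (with joint-angle frames as an assumed representation, and time alignment assumed already done) only fixes the core idea.

```python
def interpolate_motions(motion_a, motion_b, w):
    """Frame-by-frame blend of two time-aligned styled motions; each frame is
    a list of joint angles. w=0 reproduces style A, w=1 reproduces style B,
    and intermediate weights give in-between styles."""
    return [
        [(1.0 - w) * a + w * b for a, b in zip(frame_a, frame_b)]
        for frame_a, frame_b in zip(motion_a, motion_b)
    ]
```

A learned model's advantage over this linear blend is that it can keep intermediate motions plausible even when the styles differ sharply.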
Context-Aware Motion Diversification for Crowd Simulation
, 2010
"... Traditional crowd simulation models typically focus on the navigational path-finding and local collision avoidance aspects. Relatively few existing efforts explore how to optimally control individual agents ’ detailed motions throughout a crowd. In this paper we propose a novel scheme for dynamicall ..."
Cited by 4 (1 self)
Traditional crowd simulation models typically focus on the navigational path-finding and local collision avoidance aspects. Relatively few existing efforts explore how to optimally control individual agents' detailed motions throughout a crowd. In this paper we propose a novel scheme for dynamically controlling the motion styles of agents to increase the motion variety of a crowd. The central idea of our scheme is to maximize the style variety among local neighbors and the global style utilization, while keeping each agent's own style as consistent and natural as possible. Our scheme can serve as a complementary layer for most high-level crowd models to increase the realism of motion variety. We show the flexibility and superiority of our scheme over traditional random motion style distribution through several experiment scenarios and user evaluations. To assist the runtime diversity control, an off-line preprocessing algorithm is also proposed to extract and stylize primitive motions from a motion capture database.
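A minimal sketch of the local-variety idea, assuming a single greedy assignment pass: each agent takes the style least used among its already-assigned neighbors, with global usage as a tie-breaker. The agent ordering, neighbor map, and tie-breaking rule are illustrative assumptions, not the paper's optimization scheme.

```python
def assign_styles(agents, neighbors, styles):
    """Greedy context-aware style assignment.
    agents: iterable of agent ids, processed in order.
    neighbors: dict mapping agent id -> list of nearby agent ids.
    styles: list of available style labels.
    Each agent receives the style least used among its already-assigned
    neighbors; ties go to the globally least-used style."""
    assignment = {}
    global_count = {s: 0 for s in styles}
    for agent in agents:
        local = [assignment[n] for n in neighbors.get(agent, []) if n in assignment]
        best = min(styles, key=lambda s: (local.count(s), global_count[s]))
        assignment[agent] = best
        global_count[best] += 1
    return assignment
```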
Diverse Motion Variations for Physics-based Character Animation
"... We present an optimization framework for generating diverse variations of physics-based character motions. This allows for the automatic synthesis of rich variations in style for simulated jumps, flips, and walks. While well-posed motion optimization problems result in a single optimal motion, we ex ..."
Cited by 1 (1 self)
We present an optimization framework for generating diverse variations of physics-based character motions. This allows for the automatic synthesis of rich variations in style for simulated jumps, flips, and walks. While well-posed motion optimization problems result in a single optimal motion, we explore using underconstrained motion descriptions and then optimizing for diversity. As input, the method takes a parameterized controller for a successful motion instance, a set of constraints that should be preserved, and a pairwise distance metric between motions. An offline optimization then produces a highly diverse set of motion styles for the same task. We demonstrate results for a variety of 2D and 3D physics-based motions and show that this approach can generate compelling new motions.
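The "optimize for diversity" step can be approximated by greedy farthest-point selection under the user-supplied pairwise distance metric: repeatedly keep the candidate motion whose minimum distance to the motions already chosen is largest. This is a hypothetical stand-in for the paper's offline optimization, not its actual algorithm.

```python
def diverse_subset(candidates, distance, k, seed_index=0):
    """Greedy farthest-point selection of k motions (k <= len(candidates)).
    candidates: list of motions (any objects the metric accepts).
    distance: user-supplied pairwise distance metric between two motions.
    Returns the chosen motions, starting from candidates[seed_index]."""
    chosen = [seed_index]
    while len(chosen) < k:
        best = max(
            (i for i in range(len(candidates)) if i not in chosen),
            key=lambda i: min(distance(candidates[i], candidates[j]) for j in chosen),
        )
        chosen.append(best)
    return [candidates[i] for i in chosen]
```

With motions represented as controller parameter vectors and the metric measuring pose-trajectory differences, the same skeleton applies; constraint preservation would be enforced when generating the candidate set.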
Linguistics as structure in computer animation: Toward a more effective synthesis of brow motion in American Sign Language
Sign Language & Linguistics, 2011
"... Abstract Computer-generated three-dimensional animation holds great promise for synthesizing utterances in American Sign Language (ASL) that are not only grammatical, but welltolerated by members of the Deaf community. Unfortunately, animation poses several challenges stemming from the necessity of ..."
Cited by 1 (0 self)
Computer-generated three-dimensional animation holds great promise for synthesizing utterances in American Sign Language (ASL) that are not only grammatical but also well-tolerated by members of the Deaf community. Unfortunately, animation poses several challenges stemming from the necessity of grappling with massive amounts of data. However, the linguistics of ASL can aid in surmounting this challenge by providing structure and rules for organizing animation data. An exploration of the linguistic and extralinguistic behavior of the brows from an animator's viewpoint yields a new approach for synthesizing nonmanuals, one that differs from the conventional animation of anatomy and instead animates the effects of interacting levels of linguistic function. Results of formal testing with Deaf users indicate that this is a promising approach.
Sketching free-form poses and movements for expressive character animation
PhD thesis, École doctorale EDMSTII. Thesis defended publicly on 2 July 2015, before a jury including Michiel van de Panne; thesis director: Professor at Grenoble INP.