Perceptual Science Series
Computer Science Department, Courant Institute,
New York University
Capturing Motion Models for Animation
We will survey our current research efforts on vision-based capture and animation techniques applied to animals, humans, and cartoon characters. We will present new capture techniques that track and infer kinematic-chain and 3D non-rigid blend-shape models from video data. Furthermore, we demonstrate how to use such motion-capture data to estimate statistical models for synthesis and how to retarget motion to new characters. We show several examples: capturing kangaroos, giraffes, human body deformations, and facial expressions; animating hops and dances with natural fluctuations; and retargeting expressive cartoon motion.
This talk reports on joint work with Kathy Pullen, Lorie Loeb, Lorenzo Torresani, Danny Yang, Gene Alexander, Erika Chuang, Hrishi Deshpande, Rahul Gupta, Aaron Hertzmann, and Henning Biermann.