
dc.contributor: Jovan Popovic (en_US)
dc.contributor: Computer Graphics (en_US)
dc.contributor.author: Hsu, Eugene (en_US)
dc.contributor.author: Pulli, Kari (en_US)
dc.contributor.author: Popovic, Jovan (en_US)
dc.date.accessioned: 2008-08-28T18:45:44Z
dc.date.available: 2008-08-28T18:45:44Z
dc.date.issued: 2005-08-01
dc.identifier.uri: http://hdl.handle.net/1721.1/42004
dc.description.abstract: Style translation is the process of transforming an input motion into a new style while preserving its original content. This problem is motivated by the needs of interactive applications, which require rapid processing of captured performances. Our solution learns to translate by analyzing differences between performances of the same content in input and output styles. It relies on a novel correspondence algorithm to align motions, and a linear time-invariant model to represent stylistic differences. Once the model is estimated with system identification, our system is capable of translating streaming input with simple linear operations at each frame. (en_US)
dc.format.extent: N/A (en_US)
dc.relation.ispartofseries: Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory (en_US)
dc.title: Style Translation for Human Motion (Supplemental Material) (en_US)
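The abstract describes translating streaming input "with simple linear operations at each frame" using a linear time-invariant model. The following is only a minimal illustrative sketch of that idea as a discrete state-space recurrence; the dimensions and matrices below are hypothetical placeholders, not the parameters the paper estimates via system identification.

```python
import numpy as np

# Hypothetical sketch: a discrete LTI state-space model applied one frame
# at a time. All matrices here are made-up placeholders for illustration.
rng = np.random.default_rng(0)
n_state, n_pose = 4, 3              # assumed state and pose dimensions
A = 0.5 * np.eye(n_state)           # state transition
B = 0.1 * rng.standard_normal((n_state, n_pose))   # input-to-state map
C = 0.1 * rng.standard_normal((n_pose, n_state))   # state-to-output map
D = np.eye(n_pose)                  # direct feed-through of the input pose

def translate_stream(frames):
    """Translate a stream of input-pose frames with per-frame linear ops."""
    x = np.zeros(n_state)           # internal state, carried across frames
    out = []
    for u in frames:
        y = C @ x + D @ u           # output frame (stylized pose)
        x = A @ x + B @ u           # state update for the next frame
        out.append(y)
    return np.array(out)

frames = rng.standard_normal((10, n_pose))
styled = translate_stream(frames)
print(styled.shape)                 # prints (10, 3)
```

Because each frame costs only a few matrix-vector products, this kind of update is cheap enough for the interactive, streaming use the abstract targets.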

