2012
Creating facial animation of characters via MoCap data
JOURNAL OF APPLIED STATISTICS
- Volume: 39
- Issue: 12
- First page: 2583
- Last page: 2597
- Language: English
- Publication type: Research paper (scholarly journal)
- DOI: 10.1080/02664763.2012.724391
- Publisher: TAYLOR & FRANCIS LTD
We consider the problem of generating 3D facial animation of characters. An efficient procedure is realized by using motion capture data (MoCap data), obtained by tracking facial markers on an actor/actress. In some cases of artistic animation, the MoCap actor/actress and the 3D character facial animation show different expressions. For example, from the original facial MoCap data of speaking, a user may want to create character facial animation of speaking with a smirk. In this paper, we propose a new easy-to-use system for creating character facial animation from MoCap data. Our system is based on interpolation: once the character facial expressions of the starting and ending frames are given, the intermediate frames are automatically generated from information in the MoCap data. The interpolation procedure consists of three stages. First, the time axis of the animation is divided into several intervals by the fused lasso signal approximator. In the second stage, we use kernel k-means clustering to obtain control points. Finally, the interpolation is carried out using the control points. The user can easily create a wide variety of 3D character facial expressions by changing the control points.
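The abstract describes a three-stage pipeline (fused lasso segmentation of the time axis, kernel k-means clustering, interpolation via control points). The sketch below is an illustrative reconstruction, not the authors' implementation: it assumes a one-dimensional marker trajectory `y` over the animation frames, solves the fused lasso step with cvxpy, implements kernel k-means with an RBF kernel directly in NumPy, and reduces the final stage to simple linear blending between the user-given start and end expressions. All function names and parameters (`lam`, `gamma`, `n_clusters`) are hypothetical choices for the example.

```python
import numpy as np
import cvxpy as cp

def fused_lasso_segments(y, lam=1.0):
    """Stage 1 (sketch): fused lasso signal approximator.
    Fits a piecewise-constant signal to y; its jump points split the
    time axis into intervals."""
    beta = cp.Variable(len(y))
    objective = 0.5 * cp.sum_squares(y - beta) + lam * cp.norm1(beta[1:] - beta[:-1])
    cp.Problem(cp.Minimize(objective)).solve()
    fit = np.asarray(beta.value).ravel()
    change_points = np.where(np.abs(np.diff(fit)) > 1e-6)[0] + 1
    return np.split(np.arange(len(y)), change_points)

def kernel_kmeans(X, n_clusters=3, gamma=1.0, n_iter=50, seed=0):
    """Stage 2 (sketch): kernel k-means with an RBF kernel.
    Returns a cluster label per frame; cluster representatives can then
    be used as control points."""
    rng = np.random.default_rng(seed)
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    labels = rng.integers(n_clusters, size=len(X))
    for _ in range(n_iter):
        # Distance in feature space: K_ii - 2*mean_j K_ij + mean_jl K_jl over cluster c.
        dist = np.full((len(X), n_clusters), np.inf)
        for c in range(n_clusters):
            mask = labels == c
            if not mask.any():
                continue
            Kc = K[:, mask]
            dist[:, c] = np.diag(K) - 2.0 * Kc.mean(axis=1) + K[np.ix_(mask, mask)].mean()
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

def interpolate(start_expr, end_expr, weights):
    """Stage 3 (sketch): blend the user-specified start/end character
    expressions with per-frame weights derived from the MoCap data."""
    w = np.asarray(weights)[:, None]
    return (1.0 - w) * start_expr[None, :] + w * end_expr[None, :]

# Toy usage: a noisy piecewise-constant marker trajectory over 120 frames.
y = np.concatenate([np.full(40, 0.0), np.full(50, 1.0), np.full(30, 0.3)])
y += 0.05 * np.random.default_rng(1).normal(size=len(y))
intervals = fused_lasso_segments(y, lam=2.0)
```

In the paper's system the control points are placed per interval and can be edited by the user to retarget the expression; the linear blend above merely stands in for that final interpolation step.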
- ID information
- DOI: 10.1080/02664763.2012.724391
- ISSN: 0266-4763
- eISSN: 1360-0532
- Web of Science ID: WOS:000310130900004