Training Avatars Animated with Human Motion Data

  • Kang Hoon Lee (School of Computer Science and Engineering, Seoul National University);
  • Jehee Lee (School of Computer Science and Engineering, Seoul National University)
  • Published: 2006.04.01

Abstract

Creating controllable, responsive avatars is an important problem in computer games and virtual environments. Recently, large collections of motion capture data have been exploited for increased realism in avatar animation and control. Large motion sets have the advantage of accommodating a broad variety of natural human motion. However, when a motion set is large, the time required to identify an appropriate sequence of motions becomes the bottleneck for achieving interactive avatar control. In this paper, we present a novel method for training avatar behaviors from unlabelled motion data in order to animate and control avatars at minimal runtime cost. Based on a machine learning technique called Q-learning, our training method allows the avatar to learn how to act in any given situation through trial-and-error interactions with a dynamic environment. We demonstrate the effectiveness of our approach through examples that include avatars interacting with each other and with the user.
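The trial-and-error training described above rests on the standard Q-learning update of Watkins and Dayan (reference 40). The sketch below is a minimal toy illustration of that update, not the paper's method: the chain environment, reward function, and hyperparameter values are invented placeholders standing in for the avatar's states, motion-clip actions, and task rewards.

```python
import random

# Tabular Q-learning sketch (Watkins & Dayan, ref. 40).
# Toy task: walk right along a 5-state chain; reward at the right end.
# All states, actions, and rewards here are illustrative placeholders.

ALPHA = 0.5    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.1  # exploration rate

N_STATES = 5
ACTIONS = [-1, +1]  # step left / step right

# Q-table initialized to zero for every state-action pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy environment: move along the chain; reward only at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

def choose(state):
    """Epsilon-greedy selection: the trial-and-error exploration."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        a = choose(s)
        s2, r = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy steps right from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Because the learned Q-table is just a lookup keyed by state, acting with the trained policy is a constant-time operation per decision, which is the property the paper exploits for minimal runtime cost.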

References

  1. Arikan, O., and Forsyth, D. A., 'Interactive motion generation from examples,' Proceedings of SIGGRAPH 2002, pp. 483-490, 2002 https://doi.org/10.1145/566570.566606
  2. Brand, M., and Hertzmann, A., 'Style machines,' Proceedings of SIGGRAPH 2000, pp. 183-192, 2000 https://doi.org/10.1145/344779.344865
  3. Galata, A., Johnson, N., and Hogg, D., 'Learning variable-length Markov models of behaviour,' Computer Vision and Image Understanding (CVIU) Journal, Vol.81, No.3 (March), pp. 398-413, 2001 https://doi.org/10.1006/cviu.2000.0894
  4. Kovar, L., Gleicher, M., and Pighin, F., 'Motion graphs,' Proceedings of SIGGRAPH 2002, pp. 473-482, 2002
  5. Kim, T., Park, S. I., and Shin, S. Y., 'Rhythmic-motion synthesis based on motion-beat analysis,' ACM Transactions on Graphics (SIGGRAPH 2003), Vol.22, No.3, pp. 392-401, 2003 https://doi.org/10.1145/882262.882283
  6. Lee, J., Chai, J., Reitsma, P. S. A., Hodgins, J. K., and Pollard, N. S., 'Interactive control of avatars animated with human motion data,' Proceedings of SIGGRAPH 2002, pp. 491-500, 2002 https://doi.org/10.1145/566570.566607
  7. Li, Y., Wang, T., and Shum, H.-Y., 'Motion texture: a two-level statistical model for character motion synthesis,' Proceedings of SIGGRAPH 2002, pp. 465-472, 2002 https://doi.org/10.1145/566570.566604
  8. Pullen, K., and Bregler, C., 'Motion capture assisted animation: Texturing and synthesis,' Proceedings of SIGGRAPH 2002, pp. 501-508, 2002 https://doi.org/10.1145/566654.566608
  9. Sidenbladh, H., Black, M. J., and Sigal, L., 'Implicit probabilistic models of human motion for synthesis and tracking,' European Conference on Computer Vision (ECCV), pp. 784-800, 2002
  10. Molina Tanco, L., and Hilton, A., 'Realistic synthesis of novel human movements from a database of motion capture examples,' Proceedings of the Workshop on Human Motion, pp. 137-142, 2000 https://doi.org/10.1109/HUMO.2000.897383
  11. Blumberg, B. M., and Galyean, T. A., 'Multi-level direction of autonomous creatures for real-time virtual environments,' Proceedings of SIGGRAPH 95, pp. 47-54, 1995 https://doi.org/10.1145/218380.218405
  12. Blumberg, B., 'Swamped! Using plush toys to direct autonomous animated characters,' SIGGRAPH 98 Conference Abstracts and Applications, p.109, 1998 https://doi.org/10.1145/280953.281021
  13. Bruderlin, A., and Calvert, T. W., 'Goal-directed, dynamic animation of human walking,' Computer Graphics (Proceedings of SIGGRAPH 89), Vol.23, pp. 233-242, 1989 https://doi.org/10.1145/74333.74357
  14. Noma, T., Zhao, L., and Badler, N. I., 'Design of a virtual human presenter,' IEEE Computer Graphics & Applications, Vol.20, No.4 (July/August), 2000 https://doi.org/10.1109/38.851755
  15. Perlin, K., and Goldberg, A., 'Improv: A system for scripting interactive actors in virtual worlds,' Proceedings of SIGGRAPH 96, pp. 205-216, 1996 https://doi.org/10.1145/237170.237258
  16. Badler, N. I., Hollick, M., and Granieri, J., 'Real-time control of a virtual human using minimal sensors,' Presence 2, pp. 82-86, 1993 https://doi.org/10.1162/pres.1993.2.1.82
  17. Dontcheva, M., Yngve, G., and Popovic, Z., 'Layered acting for character animation,' ACM Transactions on Graphics (SIGGRAPH 2003), Vol.22, No.3, pp. 409-416, 2003 https://doi.org/10.1145/882262.882285
  18. Molet, T., Boulic, R., and Thalmann, D., 'A real-time anatomical converter for human motion capture,' EGCAS '96: Seventh International Workshop on Computer Animation and Simulation, Eurographics, 1996
  19. Semwal, S., Hightower, R., and Stansfield, S., 'Mapping algorithms for real-time control of an avatar using eight sensors,' Presence 7, No.1, pp. 1-21, 1998 https://doi.org/10.1162/105474698565497
  20. Shin, H. J., Lee, J., Shin, S. Y., and Gleicher, M., 'Computer puppetry: An importance-based approach,' ACM Transactions on Graphics, Vol.20, No.2, pp. 67-94, 2001 https://doi.org/10.1145/502122.502123
  21. Bradley, E., and Stuart, J., 'Using chaos to generate choreographic variations,' Proceedings of the Experimental Chaos Conference, 1997
  22. Pullen, K., and Bregler, C., 'Animating by multilevel sampling,' Computer Animation 2000, IEEE CS Press, pp. 36-42, 2000
  23. Bowden, R., 'Learning statistical models of human motion,' IEEE Workshop on Human Modelling, Analysis and Synthesis, CVPR2000, 2000
  24. Choi, M. G., Lee, J., and Shin, S. Y., 'Planning biped locomotion using motion capture data and probabilistic roadmaps,' ACM Transactions on Graphics, Vol.22, No.2, pp. 182-203, 2003 https://doi.org/10.1145/636886.636889
  25. Arikan, O., and Forsyth, D. A., 'Interactive motion generation from examples,' Proceedings of SIGGRAPH 2002, pp. 483-490, 2002 https://doi.org/10.1145/566570.566606
  26. Ngo, J. T., and Marks, J., 'Spacetime constraints revisited,' Proceedings of SIGGRAPH 93, pp. 343-350, 1993 https://doi.org/10.1145/166117.166160
  27. Sims, K., 'Evolving virtual creatures,' Proceedings of SIGGRAPH 94, pp. 15-22, 1994 https://doi.org/10.1145/192161.192167
  28. Grzeszczuk, R., and Terzopoulos, D., 'Automated learning of muscle-actuated locomotion through control abstraction,' Proceedings of SIGGRAPH 95, pp. 63-70, 1995 https://doi.org/10.1145/218380.218411
  29. Grzeszczuk, R., Terzopoulos, D., and Hinton, G., 'Neuroanimator: fast neural network emulation and control of physics-based models,' Proceedings of SIGGRAPH 98, pp. 9-20, 1998 https://doi.org/10.1145/280814.280816
  30. Faloutsos, P., Van De Panne, M., and Terzopoulos, D., 'Composable controllers for physics-based character animation,' Proceedings of SIGGRAPH 2001, pp. 251-260, 2001
  31. Kaelbling, L. P., Littman, M. L., and Moore, A. W., 'Reinforcement learning: A survey,' Journal of Artificial Intelligence Research 4, pp. 237-285, 1996
  32. Sutton, R. S., and Barto, A. G., 'Reinforcement Learning: An Introduction,' MIT Press, 1998
  33. Atkeson, C., Moore, A., and Schaal, S., 'Locally weighted learning for control,' AI Review 11, pp. 75-113, 1997
  34. Mataric, M. J., 'Reward functions for accelerated learning,' Proceedings of the Eleventh International Conference on Machine Learning, 1994
  35. Blumberg, B., Downie, M., Ivanov, Y., Berlin, M., Johnson, M. P., and Tomlinson, B., 'Integrated learning for interactive synthetic characters,' Proceedings of SIGGRAPH 2002, pp. 417-426, 2002 https://doi.org/10.1145/566570.566597
  36. Ng, R., Ramamoorthi, R., and Hanrahan, P., 'All-frequency shadows using non-linear wavelet lighting approximation,' ACM Transactions on Graphics (SIGGRAPH 2003), Vol.22, No.3, pp. 376-381, 2003 https://doi.org/10.1145/882262.882280
  37. Sloan, P.-P., Hall, J., Hart, J., and Snyder, J., 'Clustered principal components for precomputed radiance transfer,' ACM Transactions on Graphics (SIGGRAPH 2003), Vol.22, No.3, pp. 382-391, 2003 https://doi.org/10.1145/882262.882281
  38. Sloan, P.-P., Liu, X., Shum, H.-Y., and Snyder, J., 'Bi-scale radiance transfer,' ACM Transactions on Graphics (SIGGRAPH 2003), Vol.22, No.3, pp. 370-375, 2003 https://doi.org/10.1145/882262.882279
  39. James, D. L., and Fatahalian, K., 'Precomputing interactive dynamic deformable scenes,' ACM Transactions on Graphics (SIGGRAPH 2003), Vol.22, No.3, pp. 879-887, 2003 https://doi.org/10.1145/1201775.882359
  40. Watkins, C. J. C. H., and Dayan, P., 'Q-learning,' Machine Learning, Vol.8, No.3, pp. 279-292, 1992 https://doi.org/10.1023/A:1022676722315
  41. Moore, A., and Atkeson, C., 'The parti-game algorithm for variable resolution reinforcement learning in multidimensional state-spaces,' Machine Learning, Vol.21, 1995
  42. Zordan, V. B., and Hodgins, J. K., 'Motion capture-driven simulations that hit and react,' Proceedings of ACM SIGGRAPH Symposium on Computer Animation, pp. 89-96, 2002 https://doi.org/10.1145/545261.545276