Comparative Evaluation of AI-Driven Markerless Motion Capture Tools for Efficiency

  • Balgum Song (Department of International College, Dongseo University)
  • Received : 2024.09.15
  • Accepted : 2024.09.26
  • Published : 2024.11.30

Abstract

We explore the effectiveness of AI-driven markerless motion capture (MoCap) tools compared to the traditional marker-based OptiTrack system, known for its high accuracy in capturing precise movements. Through a detailed comparative analysis, we assessed various free markerless MoCap tools, including Move One, Radical, Deep Motion, Plask, Rokoko, and Movmi, focusing on critical aspects such as pose accuracy, movement smoothness, and ground detection. Our findings indicate that Move One is the most versatile tool, offering excellent pose accuracy, smooth MoCap, and reliable ground detection, making it a strong contender for a wide range of animation tasks. We found that Radical excels in minimizing jitter, making it suitable for projects requiring smooth motion, while Deep Motion performs best in ground detection, which is crucial for accurate foot placement. Although markerless systems still do not fully match the precision of marker-based systems, we suggest that they present viable alternatives depending on the specific needs of a project. As AI technology continues to advance, we expect the gap between markerless and marker-based systems to narrow, expanding the potential applications of markerless MoCap in the industry.
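The abstract reports the three evaluation criteria only qualitatively. As a rough illustration (a sketch of conventional metrics, not the authors' evaluation code), the Python snippet below shows one common way each criterion can be quantified from joint trajectories: pose accuracy as mean per-joint position error against a marker-based reference, smoothness as the mean magnitude of joint acceleration (jitter), and ground detection as mean penetration of foot joints below the floor plane. The function names, the (frames, joints, 3) array layout, and the synthetic data are assumptions for illustration.

    import numpy as np

    # Hypothetical metric sketch; joint trajectories assumed to be
    # arrays of shape (frames, joints, 3).

    def pose_error(pred, ref):
        # Mean per-joint Euclidean distance between the markerless
        # estimate and the marker-based reference (a pose-accuracy proxy).
        return np.linalg.norm(pred - ref, axis=-1).mean()

    def jitter(pred, fps=60.0):
        # Mean magnitude of the second temporal difference, scaled to an
        # acceleration; larger values mean less smooth, more jittery motion.
        accel = np.diff(pred, n=2, axis=0) * fps * fps
        return np.linalg.norm(accel, axis=-1).mean()

    def ground_penetration(foot_heights, floor=0.0):
        # Mean depth that foot joints sink below the assumed floor plane;
        # 0 indicates the feet never penetrate the ground.
        return np.clip(floor - foot_heights, 0.0, None).mean()

    # Example with synthetic data: 100 frames, 17 joints.
    rng = np.random.default_rng(0)
    ref = rng.normal(size=(100, 17, 3))
    pred = ref + rng.normal(scale=0.01, size=ref.shape)
    print(pose_error(pred, ref), jitter(pred),
          ground_penetration(pred[:, 0, 2]))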

Acknowledgement

This work was supported by the Dongseo University "Dongseo Frontier Project" Research Fund of 2023.
