http://dx.doi.org/10.5909/JBE.2022.27.5.638

3D Clothes Modeling of Virtual Human for Metaverse  

Kim, Hyun Woo (Inha University, Department of Information and Communication Engineering)
Kim, Dong Eon (Inha University, Department of Information and Communication Engineering)
Kim, Yujin (Inha University, Department of Information and Communication Engineering)
Park, In Kyu (Inha University, Department of Information and Communication Engineering)
Publication Information
Journal of Broadcast Engineering / v.27, no.5, 2022, pp. 638-653
Abstract
In this paper, we propose a new method for creating a 3D virtual human that reflects the pattern of the clothes worn by a person, given a high-resolution full-body front image and the person's body shape data. To obtain the clothes pattern, we perform instance segmentation and clothes parsing using Cascade Mask R-CNN. We then use Pix2Pix to blur the boundaries and estimate the background color, and obtain the UV map of the 3D clothes mesh through UV-map-based warping. We also estimate the body shape using SMPL-X and deform the original clothes and body meshes accordingly. With the clothes UV map and the deformed clothes and body meshes, the user can finally watch an animation of a 3D virtual human that reflects his or her appearance, rendered with a state-of-the-art game engine, i.e., Unreal Engine.
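The pipeline described above maps onto off-the-shelf tooling. The sketch below is a minimal, hypothetical illustration of its first two stages only: clothes parsing with Cascade Mask R-CNN served through MMDetection [13], and body mesh generation with SMPL-X [11]. The config path, checkpoint file, score threshold, and shape coefficients are placeholders rather than the authors' actual assets, and an MMDetection 2.x-style inference API is assumed.

```python
# Hypothetical sketch (not the authors' code) of the first two pipeline stages:
# clothes parsing with Cascade Mask R-CNN via MMDetection [13] and body mesh
# generation with SMPL-X [11]. Paths, checkpoint, threshold, and shape
# coefficients are placeholders; an MMDetection 2.x API is assumed.
import torch
import smplx
from mmdet.apis import init_detector, inference_detector

# (1) Clothes instance segmentation / parsing.
# Assumed: a Cascade Mask R-CNN config and weights fine-tuned on
# DeepFashion2-style clothing categories [1].
detector = init_detector(
    'configs/cascade_mask_rcnn_deepfashion2.py',  # placeholder config
    'checkpoints/cascade_mask_rcnn_clothes.pth',  # placeholder weights
    device='cuda:0')
bbox_result, segm_result = inference_detector(detector, 'front_view.jpg')

# Keep only confident clothing instances; their masks are what the later
# Pix2Pix boundary blurring and UV-map-based warping stages would consume.
clothes_masks = []
for cls_bboxes, cls_masks in zip(bbox_result, segm_result):
    for bbox, mask in zip(cls_bboxes, cls_masks):
        if bbox[4] > 0.5:               # detection score threshold (assumed)
            clothes_masks.append(mask)  # binary mask or RLE, version-dependent

# (2) Body shape with SMPL-X.
# Assumed: SMPL-X model files available under ./models. In the paper the shape
# parameters (betas) come from fitting to the input image; here they are random.
body_model = smplx.create('models', model_type='smplx', gender='neutral')
betas = 0.5 * torch.randn(1, 10)
output = body_model(betas=betas, return_verts=True)
body_vertices = output.vertices.detach().cpu().numpy().squeeze()

print(f'{len(clothes_masks)} clothing masks, '
      f'body mesh with {body_vertices.shape[0]} vertices')
```

In the full method, the clothing masks would feed the Pix2Pix inpainting and UV-map warping steps, and the deformed clothes and body meshes would then be rendered and animated in Unreal Engine [10].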
Keywords
Metaverse; virtual human; 3D cloth modeling;
References
1 Y. Ge, R. Zhang, X. Wang, X. Tang, and P. Luo, "DeepFashion2: A versatile benchmark for detection, pose estimation, segmentation and re-identification of clothing images," Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 2019. doi: https://doi.org/10.1109/CVPR.2019.00548
2 Z. Cai and N. Vasconcelos, "Cascade R-CNN: High quality object detection and instance segmentation," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 43, no. 5, pp. 1483-1498, 2019. doi: https://doi.org/10.1109/TPAMI.2019.2956516
3 M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, and M. J. Black, "SMPL: A skinned multi-person linear model," ACM Trans. on Graphics, vol. 34, no. 6, pp. 1-16, 2015. doi: https://doi.org/10.1145/2816795.2818013
4 D. P. Kingma and J. L. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014. doi: https://doi.org/10.48550/arXiv.1412.6980
5 Blender, https://www.blender.org (accessed June 13, 2022)
6 J. Deng, S. Cheng, N. Xue, Y. Zhou, and S. Zafeiriou, "UV-GAN: Adversarial facial UV map completion for pose-invariant face recognition," arXiv preprint arXiv:1712.04695, 2017. doi: https://doi.org/10.48550/arXiv.1712.04695
7 X. Han, Z. Wu, Z. Wu, R. Yu, and L. S. Davis, "VITON: An image-based virtual try-on network," arXiv preprint arXiv:1711.08447, 2017. doi: https://doi.org/10.48550/arXiv.1711.08447
8 Metahuman Creator, https://www.unrealengine.com/ko/metahuman (accessed June 13, 2022)
9 P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," Proc. IEEE Conference on Computer Vision and Pattern Recognition, June 2017. doi: https://doi.org/10.1109/CVPR.2017.632
10 Unreal Engine, https://www.unrealengine.com/ (accessed June 13, 2022)
11 G. Pavlakos, V. Choutas, N. Ghorbani, T. Bolkart, A. A. A. Osman, D. Tzionas, and M. J. Black, "Expressive body capture: 3D hands, face, and body from a single image," Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 2019. doi: https://doi.org/10.1109/CVPR.2019.01123
12 Fashion Product Images Dataset, https://www.kaggle.com/datasets/paramaggarwal/fashion-product-images-dataset (accessed June 13, 2022)
13 K. Chen, J. Wang, J. Pang, Y. Cao, Y. Xiong, X. Li, S. Sun, W. Feng, Z. Liu, J. Xu, Z. Zhang, D. Cheng, C. Zhu, T. Cheng, Q. Zhao, B. Li, X. Lu, R. Zhu, Y. Wu, J. Dai, J. Wang, J. Shi, W. Ouyang, C. C. Loy, and D. Lin, "MMDetection: Open MMLab detection toolbox and benchmark," arXiv preprint arXiv:1906.07155, 2019. doi: https://doi.org/10.48550/arXiv.1906.07155