• Title/Summary/Keyword: Pose to Pose

Pose Creation of Character in Two-Dimensional Cartoon through Human Pose Estimation (인간자세 추정방법에 의한 2차원 웹툰 캐릭터 포즈 생성)

  • Jeong, Hieyong;Shin, Choonsung
    • Journal of Broadcasting Engineering / v.27 no.5 / pp.718-727 / 2022
  • The Korean domestic cartoon (webtoon) industry grew explosively, by 65% over the previous year, and its market size is expected to exceed KRW 1 trillion. However, excessive workloads lead to deteriorating health among creators, and this working environment keeps the supply of new talent insufficient, repeating a vicious cycle. Although some tasks in cartoon production require creative work, many are still simple and repetitive. Therefore, this study aimed to develop a method for creating character poses through human pose estimation (HPE). HPE detects key points for each joint of a user, and the primary role of the proposed method is to match each joint of the character to the corresponding joint of the human. The proposed method enabled us to create the pose of a two-dimensional cartoon character from the estimation results. Furthermore, a static image could be saved for a single character pose, and a video for a continuous sequence of character poses.
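
The entry above describes matching each character joint to the corresponding human joint detected by HPE. As a rough illustration of that retargeting idea (not the authors' implementation), the sketch below copies the bone angles of an estimated human skeleton onto a 2D character rig; the joint names, bone list, and helper functions are assumptions.

```python
# Minimal sketch (not the authors' implementation): retarget estimated human
# joint keypoints onto a 2D cartoon character rig by copying bone angles.
# Keypoint names and bone lengths are illustrative assumptions.
import numpy as np

# Hypothetical bones: (parent joint, child joint) pairs shared by human and character.
BONES = [("shoulder_r", "elbow_r"), ("elbow_r", "wrist_r"),
         ("hip_r", "knee_r"), ("knee_r", "ankle_r")]

def bone_angles(keypoints: dict[str, np.ndarray]) -> dict[tuple, float]:
    """Absolute 2D angle of each bone, measured from parent joint to child joint."""
    angles = {}
    for parent, child in BONES:
        d = keypoints[child] - keypoints[parent]
        angles[(parent, child)] = float(np.arctan2(d[1], d[0]))
    return angles

def retarget(human_kp: dict, char_kp: dict, char_bone_len: dict) -> dict:
    """Pose the character by reusing the human's bone angles with the
    character's own bone lengths, starting from the character's root joints."""
    angles = bone_angles(human_kp)
    posed = dict(char_kp)  # root joints (shoulders, hips) keep their character positions
    for (parent, child), a in angles.items():
        length = char_bone_len[(parent, child)]
        posed[child] = posed[parent] + length * np.array([np.cos(a), np.sin(a)])
    return posed
```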

A Segmentation Guided Coarse to Fine Virtual Try-on Network for a new Clothing and Pose

  • Sandagdorj, Dashdorj;Tuan, Thai Thanh;Ahn, Heejune
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.11a / pp.33-36 / 2020
  • Virtual try-on has attracted growing interest from researchers because of its application in online shopping. Single-pose virtual try-on is not enough, however, since customers may want to see themselves in different poses. Multi-pose virtual try-on takes as input a customer image, an in-shop clothing item, and a target pose, and attempts to generate a realistic image of the customer wearing the in-shop clothing in the target pose. We first generate the target segmentation layout using a conditional generative adversarial network (cGAN), and then warp the in-shop clothing to fit the customer's body in the target pose. Finally, all intermediate results are combined using a ResNet-like network. Experiments show that our method outperforms the state of the art.
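
The abstract outlines a three-stage pipeline: predict a target segmentation layout with a cGAN, warp the in-shop cloth to the target pose, and fuse everything with a ResNet-like network. The PyTorch sketch below shows only a toy version of the final fusion stage under assumed input shapes (an RGB warped cloth, a 1-channel segmentation map, and an 18-channel pose representation); it is not the authors' network.

```python
# Toy fusion stage of a multi-pose try-on pipeline (segmentation -> warping -> fusion).
# Channel counts and the architecture are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionNet(nn.Module):
    """ResNet-like fusion of warped cloth, target segmentation, and pose map."""
    def __init__(self, in_ch=3 + 1 + 18, out_ch=3):
        super().__init__()
        self.head = nn.Conv2d(in_ch, 64, 3, padding=1)
        self.block = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                                   nn.Conv2d(64, 64, 3, padding=1))
        self.tail = nn.Conv2d(64, out_ch, 3, padding=1)

    def forward(self, warped_cloth, target_seg, pose_map):
        x = F.relu(self.head(torch.cat([warped_cloth, target_seg, pose_map], dim=1)))
        x = F.relu(x + self.block(x))          # residual connection
        return torch.tanh(self.tail(x))        # final try-on image

# Usage with dummy tensors (batch of 1, 256x192 resolution, 18-channel pose heatmap):
fusion = FusionNet()
out = fusion(torch.randn(1, 3, 256, 192),      # warped in-shop cloth
             torch.randn(1, 1, 256, 192),      # predicted segmentation layout
             torch.randn(1, 18, 256, 192))     # target pose representation
```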

Empirical Comparison of Deep Learning Networks on Backbone Method of Human Pose Estimation

  • Rim, Beanbonyka;Kim, Junseob;Choi, Yoo-Joo;Hong, Min
    • Journal of Internet Computing and Services / v.21 no.5 / pp.21-29 / 2020
  • Accurate human pose estimation relies on a backbone network whose role is to extract feature maps. To date, backbone feature extraction has been performed either with plain convolutional neural networks (CNNs) or with residual networks (ResNets), both of which come in various architectures and performance levels. CNN-family networks such as VGG, well known for stacking many hidden layers, are basic and simple, whereas ResNet, with its bottleneck-layer architecture, yields fewer parameters and better performance. Both have achieved inspiring results as backbone networks for human pose estimation, but in previous work they were followed by different pose-parsing modules. In this paper, we therefore compare a plain CNN-family network (VGG) and a bottleneck network (ResNet) as backbones within the same pose-parsing module, investigating the number of parameters, loss, precision, and recall. We evaluate them in a bottom-up human pose estimation system that adapts the pose-parsing module of OpenPose. Our experimental results show that the VGG backbone outperforms the ResNet backbone, with fewer parameters, a lower loss, and higher precision and recall.
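
As a minimal illustration of the comparison described above (swapping the backbone while keeping the parsing head fixed), the sketch below builds VGG19 and ResNet-50 feature extractors from torchvision and counts their parameters; the 1x1-convolution "parsing head" is a stand-in, not the OpenPose module used in the paper.

```python
# Swap backbones in front of the same (placeholder) parsing head and compare sizes.
import torch
import torch.nn as nn
from torchvision import models

def count_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

# Truncate each network to its convolutional feature extractor.
vgg_backbone = models.vgg19().features                           # plain stacked convs
resnet = models.resnet50()
resnet_backbone = nn.Sequential(*list(resnet.children())[:-2])   # drop avgpool/fc

# Same placeholder parsing head for both: predicts 19 part-confidence maps.
def make_head(in_ch):
    return nn.Conv2d(in_ch, 19, kernel_size=1)

x = torch.randn(1, 3, 368, 368)
for name, backbone, ch in [("VGG19", vgg_backbone, 512),
                           ("ResNet50", resnet_backbone, 2048)]:
    feats = backbone(x)
    heatmaps = make_head(ch)(feats)
    print(f"{name}: backbone params={count_params(backbone):,}, "
          f"features={tuple(feats.shape)}, heatmaps={tuple(heatmaps.shape)}")
```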

Developing Interactive Game Contents using 3D Human Pose Recognition (3차원 인체 포즈 인식을 이용한 상호작용 게임 콘텐츠 개발)

  • Choi, Yoon-Ji;Park, Jae-Wan;Song, Dae-Hyeon;Lee, Chil-Woo
    • The Journal of the Korea Contents Association / v.11 no.12 / pp.619-628 / 2011
  • Vision-based 3D human pose recognition is commonly used to convey human gestures in HCI (Human-Computer Interaction). Recognition methods based on a 2D pose model can recognize only simple 2D poses in particular environments. In contrast, a 3D pose model, which describes the 3D skeletal structure of the human body, can recognize more complex poses than a 2D model because it can use joint angles and the shape information of body parts. In this paper, we describe the development of interactive game content using a pose recognition interface based on 3D human body joint information. Our system is designed so that users can control the game content with body motion, without any additional equipment. Poses are recognized by comparing the current input pose with predefined pose templates, each consisting of the 3D information of 14 human body joints. We implemented the game content with our pose recognition system and confirmed the efficiency of the proposed system. In future work, we will improve the system so that poses can be recognized robustly in a wider range of environments.
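
A minimal sketch of the template-matching step described above, assuming each pose is a 14x3 array of joint coordinates; the normalization and distance threshold are illustrative choices, not the paper's.

```python
# Nearest-template pose recognition over 14 joints x 3D coordinates.
import numpy as np

def normalize(pose: np.ndarray) -> np.ndarray:
    """Center on the joints' centroid and scale to unit size, so matching is
    insensitive to where the user stands and how large they appear."""
    centered = pose - pose.mean(axis=0)
    return centered / (np.linalg.norm(centered) + 1e-8)

def recognize(pose: np.ndarray, templates: dict[str, np.ndarray],
              max_dist: float = 0.5) -> str | None:
    """Return the name of the closest template, or None if nothing is close enough."""
    p = normalize(pose)
    best_name, best_dist = None, np.inf
    for name, tmpl in templates.items():
        d = np.linalg.norm(p - normalize(tmpl))
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= max_dist else None

# Usage with random stand-ins for 14-joint poses:
templates = {"raise_both_arms": np.random.rand(14, 3), "t_pose": np.random.rand(14, 3)}
print(recognize(np.random.rand(14, 3), templates))
```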

Pictorial Model of Upper Body based Pose Recognition and Particle Filter Tracking (그림모델과 파티클필터를 이용한 인간 정면 상반신 포즈 인식)

  • Oh, Chi-Min;Islam, Md. Zahidul;Kim, Min-Wook;Lee, Chil-Woo
    • Proceedings of the Korean HCI Society Conference (한국HCI학회 학술대회논문집) / 2009.02a / pp.186-192 / 2009
  • In this paper, we present a recognition method for frontal upper-body poses. In HCI (Human-Computer Interaction) and HRI (Human-Robot Interaction), the user usually faces the computer or robot and uses hand gestures during an interaction, so we focus on frontal upper-body poses. There are two main difficulties. First, the human pose consists of many parts, which leads to a high number of degrees of freedom (DOF) and makes modeling the pose difficult. Second, matching image features to the model information is difficult. We therefore use a pictorial model to represent the main poses, which cover most of the space of frontal upper-body poses, and recognize them against a database of main poses. The parameters of the recognized main pose then initialize a particle filter, which predicts the posterior distribution over pose parameters and determines a more specific pose by updating the model parameters from the particle with the maximum likelihood. By recognizing the main poses and then tracking the specific pose, we recognize frontal upper-body poses.
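
For the tracking stage, the abstract describes a particle filter over pose parameters. The sketch below shows one generic predict-update-resample cycle with a toy likelihood; a real system would score each hypothesis against image features of the pictorial model.

```python
# Schematic particle filter for refining pose parameters; likelihood is a stand-in.
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, likelihood, motion_std=0.05):
    """One predict-update-resample cycle over pose-parameter particles."""
    # Predict: diffuse particles with a simple random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: reweight each particle by how well it explains the observation.
    weights = weights * np.array([likelihood(p) for p in particles])
    weights /= weights.sum() + 1e-12
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy usage: 200 particles over 6 pose parameters, likelihood peaked at zero.
particles = rng.normal(0, 1, (200, 6))
weights = np.full(200, 1.0 / 200)
toy_likelihood = lambda p: np.exp(-np.sum(p ** 2))
for _ in range(10):
    particles, weights = particle_filter_step(particles, weights, toy_likelihood)
print("pose estimate:", particles.mean(axis=0))
```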

Robust 2D human upper-body pose estimation with fully convolutional network

  • Lee, Seunghee;Koo, Jungmo;Kim, Jinki;Myung, Hyun
    • Advances in Robotics Research / v.2 no.2 / pp.129-140 / 2018
  • With the increasing demand for human pose estimation in areas such as human-computer interaction and human activity recognition, numerous approaches have been proposed to detect the 2D poses of people in images more efficiently. Despite many years of research, estimating human poses from images still struggles to produce satisfactory results. In this study, we propose a robust 2D upper-body pose estimation method using an RGB camera sensor. Our method is efficient and cost-effective, since an RGB camera is economical compared with the high-priced sensors more commonly used. For the estimation of upper-body joint positions, semantic segmentation with a fully convolutional network is exploited. From the acquired RGB images, joint heatmaps are used to accurately estimate the coordinates of each joint. The network architecture is designed to learn and detect joint locations via a sequential prediction process. The proposed method was tested and validated for efficient estimation of the human upper-body pose, and the obtained results reveal the potential of a simple RGB camera sensor for human pose estimation applications.
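
The method predicts per-joint heatmaps and reads joint coordinates off them. A small sketch of that decoding step, with assumed heatmap resolution and joint count:

```python
# Recover joint coordinates and confidences from per-joint heatmaps via argmax.
import numpy as np

def heatmaps_to_joints(heatmaps: np.ndarray, input_size: tuple[int, int]):
    """heatmaps: (num_joints, H, W) confidence maps -> (num_joints, 2) (x, y)
    coordinates in input-image pixels, plus a per-joint confidence score."""
    num_joints, h, w = heatmaps.shape
    flat = heatmaps.reshape(num_joints, -1)
    ys, xs = np.unravel_index(flat.argmax(axis=1), (h, w))
    conf = flat.max(axis=1)
    scale_x, scale_y = input_size[0] / w, input_size[1] / h
    joints = np.stack([xs * scale_x, ys * scale_y], axis=1)
    return joints, conf

# Usage: 8 upper-body joints predicted on 46x46 maps for a 368x368 input image.
maps = np.random.rand(8, 46, 46)
joints, conf = heatmaps_to_joints(maps, (368, 368))
print(joints.shape, conf.shape)  # (8, 2) (8,)
```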

Novel Backprojection Method for Monocular Head Pose Estimation

  • Ju, Kun;Shin, Bok-Suk;Klette, Reinhard
    • International Journal of Fuzzy Logic and Intelligent Systems / v.13 no.1 / pp.50-58 / 2013
  • Estimating a driver's head pose is an important task in driver-assistance systems because it indicates where the driver is looking, thereby giving useful cues about the driver's status (e.g., paying proper attention, fatigued). This study proposes a system for estimating the head pose from monocular images, which includes a novel use of backprojection. The system can use a single image to estimate a driver's head pose at a particular time stamp, or an image sequence to support the analysis of a driver's status. Using the proposed system, we compared two previous pose estimation approaches, and we introduced an approach for providing ground-truth reference data using a mannequin model. Our experimental results demonstrate that the proposed system provides relatively accurate estimates of the yaw, tilt, and roll angles. The results also show that one of the pose estimation approaches (perspective-n-point, PnP) gave consistently better estimates than the other (pose from orthography and scaling with iterations, POSIT) under our proposed system.
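
One of the compared approaches is PnP. A hedged sketch of monocular head pose estimation with OpenCV's solvePnP is given below; the generic 3D landmark model and approximate camera intrinsics are illustrative values, not the paper's setup.

```python
# PnP-style head pose from 2D facial landmarks and a generic 3D face model.
import cv2
import numpy as np

# Generic 3D reference points (mm): nose tip, chin, eye corners, mouth corners.
MODEL_POINTS = np.array([
    [0.0,      0.0,     0.0],    # nose tip
    [0.0,   -330.0,   -65.0],    # chin
    [-225.0,  170.0,  -135.0],   # left eye outer corner
    [225.0,   170.0,  -135.0],   # right eye outer corner
    [-150.0, -150.0,  -125.0],   # left mouth corner
    [150.0,  -150.0,  -125.0],   # right mouth corner
], dtype=np.float64)

def head_pose(image_points: np.ndarray, image_size: tuple[int, int]):
    """image_points: (6, 2) detected 2D landmarks matching MODEL_POINTS."""
    w, h = image_size
    focal = w  # common approximation when camera intrinsics are unknown
    camera_matrix = np.array([[focal, 0, w / 2],
                              [0, focal, h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points.astype(np.float64),
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    rot, _ = cv2.Rodrigues(rvec)
    # Decompose the rotation into Euler angles (degrees): pitch/tilt, yaw, roll.
    angles, *_ = cv2.RQDecomp3x3(rot)
    pitch, yaw, roll = angles
    return ok, (yaw, pitch, roll)
```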

Valve Modeling and Model Extraction on 3D Point Cloud data (잡음이 있는 3차원 점군 데이터에서 밸브 모델링 및 모델 추출)

  • Oh, Ki Won;Choi, Kang Sun
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.12 / pp.77-86 / 2015
  • It is difficult to automatically extract a small valve from a noisy 3D point cloud obtained with LIDAR, because small objects are heavily affected by noise. In this paper, to extract the pose of a valve, we model it as a composite of a torus, cylinders, and a plane, representing the handle, ribs, and center plane respectively. To extract the pose, we take one additional input: the center of the valve. We generate a histogram of the distances between this center and the points of the point cloud, and obtain the valve pose by extracting the parameters of the handle, ribs, and center plane. Finally, the valve is reconstructed.
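
The key step above is the histogram of distances between the supplied valve center and the cloud points. The sketch below estimates a handle (torus) radius from the dominant histogram peak; the bin width and peak heuristic are assumptions.

```python
# Estimate a valve handle radius from a distance histogram around a given center.
import numpy as np

def estimate_handle_radius(points: np.ndarray, center: np.ndarray,
                           bin_width: float = 0.005) -> float:
    """points: (N, 3) point cloud, center: (3,) valve center; radius in metres."""
    dists = np.linalg.norm(points - center, axis=1)
    bins = np.arange(0.0, dists.max() + bin_width, bin_width)
    hist, edges = np.histogram(dists, bins=bins)
    peak = hist.argmax()  # points on the circular handle pile up at its radius
    return 0.5 * (edges[peak] + edges[peak + 1])

# Toy usage: synthetic ring of points around a known center, plus measurement noise.
theta = np.random.rand(2000) * 2 * np.pi
ring = np.stack([0.1 * np.cos(theta), 0.1 * np.sin(theta), np.zeros_like(theta)], axis=1)
cloud = ring + np.random.normal(0, 0.002, ring.shape)
print(estimate_handle_radius(cloud, np.zeros(3)))  # approximately 0.1
```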

A Spatial-Temporal Three-Dimensional Human Pose Reconstruction Framework

  • Nguyen, Xuan Thanh;Ngo, Thi Duyen;Le, Thanh Ha
    • Journal of Information Processing Systems / v.15 no.2 / pp.399-409 / 2019
  • Three-dimensional (3D) human pose reconstruction from a single-view image is a difficult and challenging topic. Existing approaches mostly process each frame independently, even though consecutive frames in a sequence are highly correlated. In contrast, we introduce a novel spatial-temporal 3D human pose reconstruction framework that leverages both intra-frame and inter-frame relationships in consecutive 2D pose sequences. The orthogonal matching pursuit (OMP) algorithm, pre-trained pose-angle limits, and temporal models are employed. Several quantitative comparisons between our proposed framework and recent works are reported on the CMU motion capture dataset and on Vietnamese traditional dance sequences. Our framework outperforms the others, with 10% lower Euclidean reconstruction error and greater robustness to Gaussian noise. In addition, our reconstructed 3D pose sequences are more natural and smoother.
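
The framework uses orthogonal matching pursuit, which suggests representing a pose as a sparse combination of basis poses. The sketch below demonstrates that sparse-coding step with scikit-learn's OrthogonalMatchingPursuit on a random basis; it only illustrates the algorithm, not the paper's trained dictionary.

```python
# Approximate an observed 2D pose as a sparse combination of basis poses via OMP.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n_joints, n_basis = 15, 64
basis = rng.normal(size=(2 * n_joints, n_basis))            # columns = basis poses
true_coef = np.zeros(n_basis)
true_coef[rng.choice(n_basis, size=5, replace=False)] = rng.normal(size=5)
observed_2d_pose = basis @ true_coef + 0.01 * rng.normal(size=2 * n_joints)

# Recover a sparse set of basis activations that explains the observed pose.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5)
omp.fit(basis, observed_2d_pose)
reconstruction = basis @ omp.coef_
print("active atoms:", np.flatnonzero(omp.coef_))
print("reconstruction error:", np.linalg.norm(reconstruction - observed_2d_pose))
```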

Face Recognition Robust to Pose Variations (포즈 변화에 강인한 얼굴 인식)

  • 노진우;문인혁;고한석
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.5 / pp.63-69 / 2004
  • This paper proposes a novel method for achieving pose-invariant face recognition using a cylindrical model. On the assumption that the head is approximately cylindrical, we estimate the pose of the face and then extract a frontal face image via a pose transform using the estimated pose angle. By employing the proposed pose transform, we can increase face recognition performance using the frontal face images. In representative experiments, the pose transform increased the recognition rate from 61.43% to 94.76%. Additionally, the recognition rate of the proposed method is as good as that of a more complicated 3D face model.
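
As a back-of-the-envelope illustration of the cylindrical assumption (not the paper's implementation), the sketch below treats the face crop as a texture on a cylinder, shifts it by the estimated yaw, and re-projects it to approximate a frontal view; the radius handling and sampling are simplified.

```python
# Approximate frontalization of a face crop under a cylindrical head model.
import numpy as np

def frontalize(face: np.ndarray, yaw_rad: float) -> np.ndarray:
    """face: (H, W) grayscale face crop; yaw_rad: estimated yaw of the head.
    Returns an approximately frontal view by shifting the cylindrical angle."""
    h, w = face.shape
    radius = w / 2.0
    out = np.zeros_like(face)
    xs = np.arange(w)
    # Angle of each output column on the cylinder, measured from the center.
    theta_out = np.arcsin(np.clip((xs - radius) / radius, -1.0, 1.0))
    # The same surface point appeared in the input rotated by the yaw angle.
    theta_in = theta_out + yaw_rad
    visible = np.abs(theta_in) < np.pi / 2          # only the observed half-cylinder
    x_in = np.round(radius + radius * np.sin(theta_in)).astype(int)
    x_in = np.clip(x_in, 0, w - 1)
    out[:, visible] = face[:, x_in[visible]]
    return out

# Usage: synthesize a frontal view from a face observed at roughly 20 degrees yaw.
rotated_face = np.random.rand(64, 64)
frontal = frontalize(rotated_face, np.deg2rad(20))
```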