• Title/Summary/Keyword: 2D pose estimation

102 search results

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul; Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.2 no.2 / pp.120-133 / 2008
  • This paper presents a novel approach to facial motion tracking and facial expression cloning for creating realistic facial animation of a 3D avatar. Accurate head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation, and this paper addresses both. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation robustly estimates a 3D head pose from input video images. Given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image. By updating the template dynamically, the head pose can be recovered despite lighting variations and self-occlusion. In the facial expression synthesis phase, the variations of the major facial feature points of the face images are tracked using optical flow and retargeted to the 3D face model. At the same time, a Radial Basis Function (RBF) is exploited to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis is performed by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. Experiments show that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
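
The RBF-based local deformation described in this abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the Gaussian kernel, its width, and 2D points are assumptions; the idea is that tracked feature-point displacements are interpolated outward to nearby mesh vertices.

```python
import numpy as np

def rbf_deform(control_pts, displacements, query_pts, sigma=0.5):
    """Propagate tracked feature-point displacements to nearby mesh
    vertices with a Gaussian radial basis function (kernel and sigma
    are illustrative choices)."""
    def phi(r):
        return np.exp(-r**2 / (2 * sigma**2))
    # Interpolation matrix over the control (feature) points
    Phi = phi(np.linalg.norm(
        control_pts[:, None, :] - control_pts[None, :, :], axis=2))
    # One weight vector per coordinate axis of the displacement
    w = np.linalg.solve(Phi, displacements)
    # Evaluate the interpolant at the query (mesh vertex) positions
    D = phi(np.linalg.norm(
        query_pts[:, None, :] - control_pts[None, :, :], axis=2))
    return D @ w
```

At the control points themselves the interpolant reproduces the tracked displacements exactly, which is the property the retargeting step relies on: nearby vertices receive a smoothly blended fraction of the motion.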

Design of Robust Face Recognition System to Pose Variations Based on Pose Estimation : The Comparative Study on the Recognition Performance Using PCA and RBFNNs (포즈 추정 기반 포즈변화에 강인한 얼굴인식 시스템 설계 : PCA와 RBFNNs 패턴분류기를 이용한 인식성능 비교연구)

  • Ko, Jun-Hyun; Kim, Jin-Yul; Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers / v.64 no.9 / pp.1347-1355 / 2015
  • In this study, we compare recognition performance using PCA and RBFNNs in order to introduce a face recognition system that is robust to pose variations based on pose estimation. The proposed face recognition system uses the Honda/UCSD database, consisting of 500 face images of 20 people with 5 poses per person, to compare recognition performance. The extracted images are divided into 5 poses using Multiple-Space PCA, and pose classification for each pose is performed using (2D)2PCA. A linear polynomial function is used as the connection weight of the RBFNNs pattern classifier, and the parameter coefficients are set using Particle Swarm Optimization for model optimization. The recognition performance of the proposed (2D)2PCA-based face pose classification is compared with that of PCA, (2D)2PCA, and RBFNNs.
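
As background for the (2D)2PCA step above: unlike classical PCA, 2DPCA builds a small scatter matrix directly from image matrices, and (2D)2PCA repeats the reduction along the other direction. Below is a minimal one-directional 2DPCA sketch with synthetic data standing in for the Honda/UCSD images; shapes and data are made up for illustration.

```python
import numpy as np

def twodpca_projection(images, k):
    """One-directional 2DPCA: build the image scatter matrix
    G = E[(A - M)^T (A - M)] and project each image onto its
    top-k eigenvectors. (2D)^2PCA applies the same idea along
    rows as well to shrink both dimensions."""
    M = images.mean(axis=0)
    G = sum((A - M).T @ (A - M) for A in images) / len(images)
    # Eigenvectors of the symmetric scatter matrix, largest first
    vals, vecs = np.linalg.eigh(G)
    W = vecs[:, ::-1][:, :k]
    return [A @ W for A in images], W

rng = np.random.default_rng(0)
imgs = rng.normal(size=(10, 8, 6))   # 10 synthetic 8x6 "face images"
feats, W = twodpca_projection(imgs, k=2)
```

Each 8x6 image is reduced to an 8x2 feature matrix, which is the compact representation a downstream classifier such as RBFNNs would consume.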

Deep Learning-Based Outlier Detection and Correction for 3D Pose Estimation (3차원 자세 추정을 위한 딥러닝 기반 이상치 검출 및 보정 기법)

  • Ju, Chan-Yang; Park, Ji-Sung; Lee, Dong-Ho
    • KIPS Transactions on Software and Data Engineering / v.11 no.10 / pp.419-426 / 2022
  • In this paper, we propose a method to improve the accuracy of a 3D human pose estimation model across various movements. Existing human pose estimation models suffer from jitter, inversion, swap, and miss errors that produce incorrect coordinates when estimating human poses, lowering the accuracy with which the exact coordinates can be detected. We propose a method consisting of a detection step and a correction step to handle these problems. A deep learning-based outlier detection method effectively detects outliers among the human pose coordinates during movement, and a rule-based correction method corrects the outliers according to a simple rule. Experiments using 2D golf swing motion data show that the proposed method is effective for various motions and demonstrate the possibility of extension from 2D to 3D coordinates.
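
The detect-then-correct idea can be illustrated on a one-dimensional coordinate track. The jump threshold and the midpoint-interpolation rule below are illustrative assumptions, not the paper's exact detector (which is deep learning-based) or correction rule.

```python
def detect_and_correct(track, thresh):
    """Flag a frame as an outlier when the coordinate jumps away from
    BOTH temporal neighbours by more than `thresh`, then replace it
    with the midpoint of its neighbours (linear interpolation)."""
    outliers = [t for t in range(1, len(track) - 1)
                if abs(track[t] - track[t - 1]) > thresh
                and abs(track[t] - track[t + 1]) > thresh]
    corrected = list(track)
    for t in outliers:
        corrected[t] = (track[t - 1] + track[t + 1]) / 2
    return outliers, corrected

# A golf-swing-like x-coordinate track with one jitter spike at t=2
outliers, fixed = detect_and_correct([0.0, 1.0, 9.0, 3.0, 4.0], thresh=2.0)
```

Requiring a large jump from both neighbours distinguishes an isolated spike (jitter) from genuine fast motion, where consecutive frames move together.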

Recent Trends in Human Pose Estimation Based on a Single Image (단일 이미지에 기반을 둔 사람의 포즈 추정에 대한 연구 동향)

  • Cho, Jungchan
    • The Journal of Korean Institute of Next Generation Computing / v.15 no.5 / pp.31-42 / 2019
  • With the recent development of deep learning technology, remarkable achievements have been made in many research areas of computer vision. Deep learning has also brought dramatic improvements to two-dimensional and three-dimensional human pose estimation based on a single image, and many researchers have been expanding the scope of this problem. Human pose estimation is one of the most important research fields because of its wide range of applications; in particular, it is a key factor in understanding the behavior, state, and intention of people in image or video analysis. Against this background, this paper surveys research trends in estimating human poses from a single image. Because there are various research results on robust and accurate human pose estimation, this paper introduces them in two separate subsections: 2D human pose estimation and 3D human pose estimation. Moreover, this paper summarizes well-known data sets used in this field and introduces various studies that utilize human poses to solve their own problems.

3D Head Pose Estimation Using The Stereo Image (스테레오 영상을 이용한 3차원 포즈 추정)

  • 양욱일; 송환종; 이용욱; 손광훈
    • Proceedings of the IEEK Conference / 2003.07e / pp.1887-1890 / 2003
  • This paper presents a three-dimensional (3D) head pose estimation algorithm using stereo images. Given a pair of stereo images, we automatically extract several important facial feature points using the disparity map, the Gabor filter, and the Canny edge detector. To detect the facial feature region, we propose a region-dividing method based on the disparity map. In an indoor head-and-shoulder stereo image, the face region has a larger disparity than the background, so we separate the face region from the background by the divergence of disparity. To estimate the 3D head pose, we propose a 2D-3D Error Compensated-SVD (EC-SVD) algorithm. We estimate the 3D coordinates of the facial features from the stereo correspondences and then estimate the head pose of an input image using the EC-SVD method. Experimental results show that the proposed method estimates the pose accurately.
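
The region-dividing step rests on the standard stereo relation between disparity and depth, Z = f·B/d: a nearby face has larger disparity than the background, so thresholding the disparity map separates the two. A minimal sketch (the focal length, baseline, and threshold values are made up):

```python
def disparity_to_depth(focal_px, baseline_m, disparity_px):
    """Pinhole stereo: depth Z = f * B / d, inversely proportional
    to disparity."""
    return focal_px * baseline_m / disparity_px

def split_face_background(disparity_map, thresh):
    """Label pixels whose disparity exceeds the threshold as the
    face/foreground region."""
    return [[d > thresh for d in row] for row in disparity_map]

depth = disparity_to_depth(700.0, 0.1, 35.0)   # depth of a face pixel
mask = split_face_background([[40.0, 5.0], [38.0, 4.0]], thresh=20.0)
```

The same relation also supplies the 3D feature coordinates that the subsequent SVD-based pose step consumes.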


Markerless camera pose estimation framework utilizing construction material with standardized specification

  • Harim Kim; Heejae Ahn; Sebeen Yoon; Taehoon Kim; Thomas H.-K. Kang; Young K. Ju; Minju Kim; Hunhee Cho
    • Computers and Concrete / v.33 no.5 / pp.535-544 / 2024
  • In the rapidly advancing landscape of computer vision (CV) technology, there is burgeoning interest in its integration with the construction industry. Camera calibration is the process of deriving the intrinsic and extrinsic parameters that govern how coordinates in the 3D real world are projected onto the 2D image plane, where the intrinsic parameters are internal factors of the camera and the extrinsic parameters are external factors such as the position and rotation of the camera. Camera pose estimation, or extrinsic calibration, which estimates the extrinsic parameters, provides essential information for CV applications in construction, since it can be used for the indoor navigation of construction robots and for field monitoring by restoring depth information. Traditionally, camera pose estimation methods relied on target objects such as markers or patterns. However, these marker- or pattern-based methods are often time-consuming because a target object must be installed before estimation. As a solution to this challenge, this study introduces a novel framework that facilitates camera pose estimation using standardized materials commonly found on construction sites, such as concrete forms. The proposed framework obtains 3D real-world coordinates by referring to construction materials with certain specifications, extracts the corresponding 2D coordinates on the image plane through keypoint detection, and derives the camera's coordinates through the perspective-n-point (PnP) method, which derives the extrinsic parameters by matching 3D and 2D coordinate pairs. This framework represents a substantial advancement, as it streamlines the extrinsic calibration process and thereby potentially enhances the efficiency of CV technology application and data collection on construction sites. The approach holds promise for expediting and optimizing various construction-related tasks by automating and simplifying the calibration procedure.
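
The PnP step matches 3D world coordinates to 2D image coordinates. As a sketch of the underlying linear algebra, the Direct Linear Transform below recovers the full 3x4 projection matrix from six or more correspondences; dedicated PnP solvers refine this with calibrated intrinsics, and this is not the paper's specific solver. All numbers are synthetic.

```python
import numpy as np

def dlt_projection_matrix(world, image):
    """Direct Linear Transform: solve for the 3x4 projection matrix P
    with [u, v, 1]^T ~ P [X, Y, Z, 1]^T from >= 6 non-coplanar
    point correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world, image):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The null vector of A (last right-singular vector) holds P
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, pts):
    """Project 3D points to pixel coordinates with matrix P."""
    h = np.c_[pts, np.ones(len(pts))] @ P.T
    return h[:, :2] / h[:, 2:3]

# Synthetic camera: intrinsics K, identity rotation, 5 m translation
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
P_true = K @ np.c_[np.eye(3), [0., 0., 5.]]
world = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=float)
image = project(P_true, world)
P_est = dlt_projection_matrix(world, image)
```

With noise-free correspondences the recovered matrix reprojects the 3D points onto the original pixels, which is the consistency a PnP-based calibration framework verifies.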

A Method for 3D Human Pose Estimation based on 2D Keypoint Detection using RGB-D information (RGB-D 정보를 이용한 2차원 키포인트 탐지 기반 3차원 인간 자세 추정 방법)

  • Park, Seohee; Ji, Myunggeun; Chun, Junchul
    • Journal of Internet Computing and Services / v.19 no.6 / pp.41-51 / 2018
  • Recently, in the field of video surveillance, deep learning-based methods have been applied to intelligent video surveillance systems, allowing various events such as crime, fire, and abnormal phenomena to be detected robustly. However, since occlusion occurs due to the loss of 3D information when the 3D real world is projected onto a 2D image, the occlusion problem must be considered in order to detect objects accurately and estimate their poses. Therefore, in this paper, we detect moving objects by solving the occlusion problem of the object detection process, adding depth information to the existing RGB information. Then, using a convolutional neural network over the detected region, the positions of the 14 keypoints of the human joint regions are predicted. Finally, to solve the self-occlusion problem occurring in the pose estimation process, we describe a method for 3D human pose estimation that extends the estimation to 3D space using the predicted 2D keypoints and a deep neural network. The 2D and 3D pose estimation results of this research can serve as base data for future human behavior recognition and contribute to the development of industrial technology.
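
Lifting a detected 2D keypoint to 3D with the depth channel is a direct inversion of the pinhole camera model; the intrinsic parameters below are assumed values for illustration, not from the paper.

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Invert the pinhole projection: a 2D keypoint (u, v) plus its
    depth reading gives 3D camera coordinates (X, Y, Z)."""
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return X, Y, depth

# A keypoint detected at pixel (420, 240), with 2.0 m measured depth
point = backproject(420.0, 240.0, 2.0,
                    fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

This per-keypoint lifting gives a 3D skeleton directly where depth is valid; the learned network in the paper handles the joints whose depth is corrupted by self-occlusion.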

Automatic 3D Head Pose-Normalization using 2D and 3D Interaction (자동 3차원 얼굴 포즈 정규화 기법)

  • Yu, Sun-Jin; Kim, Joong-Rock; Lee, Sang-Youn
    • Proceedings of the IEEK Conference / 2007.07a / pp.211-212 / 2007
  • Pose variation presents a significant problem in 2D face recognition. To solve this problem, various approaches use a 3D face acquisition system capable of generating multi-view images. However, this creates another pose estimation problem in terms of normalizing the 3D face data. This paper presents a 3D head pose-normalization method using 2D and 3D interaction. The proposed method uses 2D information from the AAM (Active Appearance Model) and 3D information from a 3D normal vector. To verify the performance of the proposed method, we designed an experiment using 2.5D face recognition. Experimental results show that the proposed method is robust against pose variation.
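
Normalizing the pose with a 3D normal vector amounts to rotating the face so that its normal points along the camera axis. A Rodrigues-rotation sketch of that alignment (the degenerate already-aligned and opposite cases are deliberately not handled, and this is not the authors' exact procedure):

```python
import numpy as np

def rotation_aligning(n, target=np.array([0., 0., 1.])):
    """Rodrigues rotation matrix taking unit vector n onto target."""
    n = n / np.linalg.norm(n)
    axis = np.cross(n, target)
    s = np.linalg.norm(axis)     # sin(angle)
    c = float(n @ target)        # cos(angle)
    k = axis / s                 # unit rotation axis
    # Skew-symmetric cross-product matrix of k
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + s * K + (1 - c) * (K @ K)

normal = np.array([1.0, 1.0, 1.0])   # an assumed off-axis face normal
R = rotation_aligning(normal)
```

Applying R to the 3D face data leaves the face frontal, after which standard 2D/2.5D recognition can proceed.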


Head Pose Estimation Using Error Compensated Singular Value Decomposition for 3D Face Recognition (3차원 얼굴 인식을 위한 오류 보상 특이치 분해 기반 얼굴 포즈 추정)

  • 송환종; 양욱일; 손광훈
    • Journal of the Institute of Electronics Engineers of Korea SP / v.40 no.6 / pp.31-40 / 2003
  • Most face recognition systems are based on 2D images and are used in many applications. However, it is difficult to recognize a face when the pose varies severely, so head pose estimation is an inevitable procedure for improving the recognition rate when a face is not frontal. In this paper, we propose a novel head pose estimation algorithm for 3D face recognition. Given the 3D range image of an unknown face as input, we automatically extract facial feature points based on the face curvature, and we propose an Error Compensated Singular Value Decomposition (EC-SVD) method based on the extracted facial feature points. We obtain the initial rotation angle with the SVD method and then perform a refinement procedure to compensate for the remaining errors. The proposed algorithm operates on the extracted facial features in the normalized 3D face space. In addition, we propose a 3D nearest-neighbor classifier to select face candidates for 3D face recognition. Simulation results demonstrate the efficiency and validity of the proposed algorithm.
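
The initial SVD step (before the error-compensation refinement) is the standard orthogonal-alignment solution for the rotation between two corresponding 3D feature-point sets. A sketch with synthetic feature points; this shows only the generic SVD alignment, not the full EC-SVD method.

```python
import numpy as np

def svd_rotation(src, dst):
    """Least-squares rotation aligning point set `src` to `dst`
    (Kabsch / SVD method; translation is removed by centring)."""
    A = src - src.mean(axis=0)
    B = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# Synthetic facial feature points rotated by a known 30-degree yaw
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.],
                   [np.sin(theta),  np.cos(theta), 0.],
                   [0., 0., 1.]])
pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.],
                [0., 0., 1.], [1., 1., 1.]])
R_est = svd_rotation(pts, pts @ R_true.T)
```

With noise-free correspondences the recovered rotation matches the true one exactly; with noisy feature extraction a residual error remains, which is what the refinement procedure in EC-SVD compensates for.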

Hard Example Generation by Novel View Synthesis for 3-D Pose Estimation (3차원 자세 추정 기법의 성능 향상을 위한 임의 시점 합성 기반의 고난도 예제 생성)

  • Minji Kim; Sungchan Kim
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.1 / pp.9-17 / 2024
  • It is widely recognized that for 3D human pose estimation (HPE), dataset acquisition is expensive and the effectiveness of augmentation techniques of conventional visual recognition tasks is limited. We address these difficulties by presenting a simple but effective method that augments input images in terms of viewpoints when training a 3D human pose estimation (HPE) model. Our intuition is that meaningful variants of the input images for HPE could be obtained by viewing a human instance in the images from an arbitrary viewpoint different from that in the original images. The core idea is to synthesize new images that have self-occlusion and thus are difficult to predict at different viewpoints even with the same pose of the original example. We incorporate this idea into the training procedure of the 3D HPE model as an augmentation stage of the input samples. We show that a strategy for augmenting the synthesized example should be carefully designed in terms of the frequency of performing the augmentation and the selection of viewpoints for synthesizing the samples. To this end, we propose a new metric to measure the prediction difficulty of input images for 3D HPE in terms of the distance between corresponding keypoints on both sides of a human body. Extensive exploration of the space of augmentation probability choices and example selection according to the proposed distance metric leads to a performance gain of up to 6.2% on Human3.6M, the well-known pose estimation dataset.
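
The proposed difficulty metric is described as a distance between corresponding keypoints on both sides of the body. A hedged sketch of such a score; the specific keypoint pairing, the indices, and the use of the mean are assumptions about the exact definition, not taken from the paper.

```python
import numpy as np

def symmetry_distance(pose, pairs):
    """Mean Euclidean distance between corresponding left/right
    keypoints; small values suggest the sides overlap in the image
    (self-occlusion), i.e. a harder example for 3D HPE."""
    return float(np.mean([np.linalg.norm(pose[l] - pose[r])
                          for l, r in pairs]))

# Toy 2D pose: left/right shoulders and hips (indices are assumed)
pose = np.array([[0., 0.], [4., 0.],     # shoulders
                 [0., 3.], [4., 3.]])    # hips
pairs = [(0, 1), (2, 3)]
score = symmetry_distance(pose, pairs)
```

Ranking synthesized viewpoints by such a score lets the augmentation stage preferentially select the self-occluded, hard-to-predict examples the method targets.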