• Title/Summary/Keyword: face pose estimation

52 search results

A 3D Face Reconstruction and Tracking Method using the Estimated Depth Information (얼굴 깊이 추정을 이용한 3차원 얼굴 생성 및 추적 방법)

  • Ju, Myung-Ho;Kang, Hang-Bong
    • The KIPS Transactions: Part B / v.18B no.1 / pp.21-28 / 2011
  • A 3D face shape derived from 2D images may be useful in many applications, such as face recognition, face synthesis, and human-computer interaction. To this end, we develop a fast 3D Active Appearance Model (3D-AAM) method using depth estimation. The training images include specific 3D face poses that differ greatly from one another. The depth information of the landmarks is estimated from the training image sequence using the approximated Jacobian matrix, and is added at the test phase to handle the 3D pose variations of the input face. Our experimental results show that the proposed method fits the face shape, including variations in facial expression and 3D pose, more efficiently than the typical AAM, and can estimate an accurate 3D face shape from images.
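
As a rough illustration of why estimated landmark depth matters when fitting under 3D pose change, the following numpy sketch (illustrative only, not the paper's 3D-AAM) projects 3D landmarks orthographically after a yaw rotation: two landmarks with the same 2D position but different depths separate once the head rotates out of plane.

```python
import numpy as np

def project_landmarks(points_3d, yaw_rad):
    """Orthographically project 3D landmarks after a yaw rotation.

    Shows how estimated landmark depth (z) changes the observed 2D
    positions under out-of-plane head rotation.
    """
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    # Rotation about the vertical (y) axis.
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    rotated = points_3d @ R.T
    return rotated[:, :2]  # drop depth after rotation

# Two landmarks at the same frontal 2D position but different depths.
pts = np.array([[1.0, 0.0, 0.0],
                [1.0, 0.0, 0.5]])
proj = project_landmarks(pts, np.deg2rad(30.0))
```

A 2D-only model cannot account for this depth-dependent shift, which is why the estimated depth is injected at test time.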

Pose Estimation of Face Using 3D Model and Optical Flow in Real Time (3D 모델과 Optical flow를 이용한 실시간 얼굴 모션 추정)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.780-785 / 2006
  • Estimating 3D face motion is an important task in many areas such as HCI, vision-based user interfaces, and gesture recognition. Methods for estimating 3D motion from consecutive 2D images fall broadly into appearance-based and model-based approaches. In this study, we propose a method for estimating face motion in real time from video using a 3D cylinder model and optical flow. From the initial frame, the face region is detected using skin color and template matching, and the 3D cylinder model is projected onto the detected region. Face motion is then estimated across consecutive frames using Lucas-Kanade optical flow. For accurate motion estimation, per-pixel weights are set using the IRLS method. In addition, we propose a dynamic template scheme that keeps the face motion estimate accurate over long sequences.

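
The per-pixel IRLS weighting mentioned in the abstract can be sketched generically. The snippet below (a minimal sketch with illustrative data, not the paper's implementation) fits a linear model by iteratively reweighted least squares with Huber-style weights, so that outlier pixels contribute little to the motion estimate.

```python
import numpy as np

def irls_fit(A, b, n_iter=20, eps=1e-6):
    """Iteratively Reweighted Least Squares with Huber-style weights.

    Down-weights rows (pixels) whose residuals are large, as done when
    robustly estimating motion parameters from optical-flow constraints.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        r = A @ x - b
        scale = np.median(np.abs(r)) + eps
        w = 1.0 / np.maximum(np.abs(r) / scale, 1.0)  # Huber-style weight
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x

# Line y = 2x with one gross outlier.
xs = np.arange(10, dtype=float)
ys = 2.0 * xs
ys[5] += 50.0  # outlier
slope = irls_fit(xs[:, None], ys)[0]
# The robust fit stays near the true slope of 2 despite the outlier.
```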

A Hybrid Approach of Efficient Facial Feature Detection and Tracking for Real-time Face Direction Estimation (실시간 얼굴 방향성 추정을 위한 효율적인 얼굴 특성 검출과 추적의 결합방법)

  • Kim, Woonggi;Chun, Junchul
    • Journal of Internet Computing and Services / v.14 no.6 / pp.117-124 / 2013
  • In this paper, we present a new method that efficiently estimates face direction from a sequence of input video images in real time. The proposed method first detects the facial region and major facial features, namely both eyes, the nose, and the mouth, using Haar-like features, which are relatively insensitive to lighting variation. It then tracks the feature points across frames using optical flow in real time and determines the direction of the face from the tracked points. To avoid accepting false feature positions when their coordinates are lost during optical-flow tracking, the method validates the locations of the facial features in real time by template matching against the detected features. Depending on the correlation score from this template matching, the process either re-detects the facial features or continues tracking them while determining the face direction. The template matching step initially stores the locations of four facial features (the left eye, right eye, nose tip, and mouth) in the feature-detection phase, and re-evaluates them by detecting new facial features from the input image when the similarity measure between the stored information and the facial information traced by optical flow exceeds a certain threshold. The proposed approach automatically alternates between the feature-detection and feature-tracking phases and enables stable face pose estimation in real time. The experiments show that the proposed method estimates face direction efficiently.
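
The validation step described above can be sketched as a toy version with synthetic patches; zero-mean normalized cross-correlation is used here as the similarity measure, which is an assumption since the abstract does not specify the exact correlation formula.

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between equal-size patches."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def is_track_valid(frame, point, template, threshold=0.7):
    """Validate a tracked feature point by comparing the patch around it
    with the template stored at detection time; below the threshold,
    the detector should be re-run."""
    h, w = template.shape
    y, x = point
    patch = frame[y:y + h, x:x + w]
    if patch.shape != template.shape:
        return False  # point drifted outside the frame
    return ncc(patch, template) >= threshold

rng = np.random.default_rng(0)
frame = rng.random((40, 40))
tmpl = frame[10:18, 10:18].copy()       # template saved at detection time
ok = is_track_valid(frame, (10, 10), tmpl)     # same location: valid
drift = is_track_valid(frame, (25, 25), tmpl)  # drifted: likely invalid
```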

Robust AAM-based Face Tracking with Occlusion Using SIFT Features (SIFT 특징을 이용하여 중첩상황에 강인한 AAM 기반 얼굴 추적)

  • Eom, Sung-Eun;Jang, Jun-Su
    • The KIPS Transactions: Part B / v.17B no.5 / pp.355-362 / 2010
  • Face tracking estimates the motion of a non-rigid face together with a rigid head in 3D, and plays an important role in higher-level tasks such as face, facial expression, and emotion recognition. In this paper, we propose an AAM-based face tracking algorithm. AAM has been widely used to segment and track deformable objects, but difficulties remain: in particular, it often diverges or converges to local minima when the target object is self-occluded, or partially or completely occluded. To address this problem, we utilize the scale-invariant feature transform (SIFT). SIFT handles self- and partial occlusion effectively because it can find correspondences between feature points even under partial loss, and its good global matching performance enables the AAM to keep tracking without re-initialization after complete occlusions. We also register SIFT features extracted from multi-view face images during tracking and use them to track a face effectively across large pose changes. The proposed algorithm is validated by comparison with other algorithms under the above three kinds of occlusion.
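
One reason SIFT-style matching survives partial occlusion is Lowe's ratio test, which accepts only matches whose nearest neighbour is clearly closer than the second nearest. A minimal numpy sketch on toy descriptors (illustrative only, not the paper's pipeline):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Lowe's ratio test: keep a match only when the nearest neighbour
    in desc_b is distinctly closer than the second nearest."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches

rng = np.random.default_rng(1)
desc_b = rng.random((10, 8))           # descriptors in the reference view
# Queries: two near-copies of reference descriptors plus one ambiguous one.
desc_a = np.vstack([desc_b[3] + 0.01,
                    desc_b[7] + 0.01,
                    (desc_b[0] + desc_b[1]) / 2.0])
m = ratio_test_matches(desc_a, desc_b)
```

The two near-copies pass the test while the ambiguous query, equidistant from two reference descriptors, is rejected; surviving matches can then drive global matching after a complete occlusion.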

Estimation of a Gaze Point in 3D Coordinates using Human Head Pose (휴먼 헤드포즈 정보를 이용한 3차원 공간 내 응시점 추정)

  • Shin, Chae-Rim;Yun, Sang-Seok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.177-179 / 2021
  • This paper proposes a method of estimating the location of a target point at which an interactive robot gazes in an indoor space. RGB images are extracted from low-cost webcams, the user's head pose is obtained from the face detection (OpenFace) module, and geometric relations are applied to estimate the user's gaze direction in 3D space. The coordinates of the target point at which the user stares are finally measured through the intersection of the estimated gaze direction with the table plane.

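
The intersection of a gaze ray with the table plane has a closed form. The sketch below assumes a particular yaw/pitch-to-direction convention and illustrative coordinates; it is not the paper's implementation.

```python
import numpy as np

def gaze_point_on_plane(eye_pos, yaw, pitch, plane_z=0.0):
    """Intersect a gaze ray, given by head yaw/pitch (radians), with the
    horizontal plane z = plane_z. Returns the 3D gaze point, or None if
    the ray never reaches the plane."""
    # Direction vector from yaw (left/right) and pitch (up/down);
    # this axis convention is an assumption for the sketch.
    d = np.array([np.sin(yaw) * np.cos(pitch),
                  np.cos(yaw) * np.cos(pitch),
                  np.sin(pitch)])
    if abs(d[2]) < 1e-9:
        return None  # gaze parallel to the plane
    t = (plane_z - eye_pos[2]) / d[2]
    if t <= 0:
        return None  # plane is behind the viewer
    return eye_pos + t * d

# Eyes 0.4 m above the table, looking straight ahead and 45 degrees down:
p = gaze_point_on_plane(np.array([0.0, 0.0, 0.4]),
                        yaw=0.0, pitch=np.deg2rad(-45.0))
# The gaze point lands 0.4 m in front of the head, on the table plane.
```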

Facial Feature Extraction with Its Applications

  • Lee, Minkyu;Lee, Sangyoun
    • Journal of International Society for Simulation Surgery / v.2 no.1 / pp.7-9 / 2015
  • Purpose: In many face-related applications, such as head pose estimation, 3D face modeling, and facial appearance manipulation, robust and fast facial feature extraction is necessary. We present a facial feature extraction method based on shape regression and feature selection for real-time use. Materials and Methods: The facial features are initialized by a statistical shape model, and their shape is then deformed iteratively according to the texture pattern selected from the feature pool. Results: We obtain fast and robust facial feature extraction with an error of less than 4% and a processing time of less than 12 ms. The alignment error is measured as the average ratio of pixel difference to inter-ocular distance. Conclusion: The accuracy and processing time of the method are sufficient for facial-feature-based applications, and it can be used in face beautification or 3D face modeling.
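
The reported alignment metric, mean landmark error normalized by inter-ocular distance, can be computed as follows (toy coordinates, illustrative only):

```python
import numpy as np

def alignment_error(pred, gt, left_eye_idx, right_eye_idx):
    """Mean landmark error normalized by inter-ocular distance,
    the standard way to report facial alignment accuracy."""
    iod = np.linalg.norm(gt[left_eye_idx] - gt[right_eye_idx])
    per_point = np.linalg.norm(pred - gt, axis=1)
    return float(per_point.mean() / iod)

# Ground-truth landmarks: two eyes 60 px apart and a nose tip.
gt = np.array([[0.0, 0.0], [60.0, 0.0], [30.0, 40.0]])
pred = gt + np.array([[1.2, 0.0], [0.0, 1.2], [0.0, 0.0]])
err = alignment_error(pred, gt, 0, 1)
```

Normalizing by inter-ocular distance makes the metric comparable across face sizes, which is why a figure like "less than 4%" is meaningful without knowing the image resolution.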

A New Head Pose Estimation Method based on Boosted 3-D PCA (새로운 Boosted 3-D PCA 기반 Head Pose Estimation 방법)

  • Lee, Kyung-Min;Lin, Chi-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.6 / pp.105-109 / 2021
  • In this paper, we evaluate Boosted 3-D PCA on a benchmark dataset and analyze its network characteristics and performance. Training was performed on the 300W-LP dataset using the same procedure as Boosted 3-D PCA, and evaluation was carried out on the AFLW2000 dataset. The results show performance similar to that reported in the Boosted 3-D PCA paper. Because this approach can be trained on face-image datasets more freely than the existing landmark-to-pose method, poses can be predicted accurately in real-world situations. Since the optimization of the keypoint set is not independent, we also confirmed a procedure that can reduce the computation time. This analysis is expected to be a valuable resource for improving the performance of the Boosted 3-D PCA network or applying it to various application domains.
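
For context, a common landmark-to-pose baseline recovers the head rotation from corresponding 3D landmarks by orthogonal Procrustes (Kabsch) alignment. The numpy sketch below is a generic baseline of that kind, not the Boosted 3-D PCA method itself, and the landmark data are synthetic.

```python
import numpy as np

def kabsch_rotation(src, dst):
    """Best-fit rotation aligning centred 3D landmark sets (Kabsch),
    a simple landmark-to-pose building block."""
    a = src - src.mean(axis=0)
    b = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(a.T @ b)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])  # guard against reflections
    return (U @ D @ Vt).T

rng = np.random.default_rng(2)
model = rng.random((6, 3))  # neutral 3D face landmarks (synthetic)
yaw = np.deg2rad(20.0)
c, s = np.cos(yaw), np.sin(yaw)
R_true = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
observed = model @ R_true.T          # landmarks after a 20-degree yaw
R_est = kabsch_rotation(model, observed)
```

Such baselines depend entirely on accurate landmark coordinates, which is the limitation that direct pose regressors like Boosted 3-D PCA aim to avoid.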

A Study on Correction and Prevention System of Real-time Forward Head Posture (실시간 거북목 증후군 자세 교정 및 예방 시스템 연구)

  • Woo-Seok Choi;Ji-Mi Choi;Hyun-Min Cho;Jeong-Min Park;Kwang-in Kwak
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.3 / pp.147-156 / 2024
  • This paper introduces the design of a forward head posture (turtle neck syndrome) correction and prevention system for long-term users of digital devices. The number of forward head posture patients in Korea increased by 13% from 2018 to 2021 and, according to the latest statistics, has not yet improved. Because of the nature of the condition, prevention is more important than treatment. We therefore designed the system around the built-in camera found in most laptops to increase its accessibility, and we use the Pose Estimation, Face Landmarks Detection, Iris Tracking, and Depth Estimation features of Google MediaPipe, which removes the need to train custom AI models and lets users easily prevent forward head posture.
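
A minimal posture check of the kind such a system needs can be derived from two tracked landmarks: the angle of the shoulder-to-ear line from the vertical. The sketch below uses a hypothetical threshold and image coordinates (y grows downward) and is not the paper's MediaPipe pipeline.

```python
import numpy as np

def neck_angle_deg(ear, shoulder):
    """Angle between the shoulder-to-ear line and the vertical, in
    degrees. Larger angles suggest forward head posture."""
    dx = ear[0] - shoulder[0]
    dy = shoulder[1] - ear[1]  # vertical rise from shoulder to ear
    return float(np.degrees(np.arctan2(abs(dx), dy)))

def is_forward_head(ear, shoulder, threshold_deg=15.0):
    """Flag forward head posture when the neck angle exceeds a
    (hypothetical) threshold."""
    return neck_angle_deg(ear, shoulder) > threshold_deg

# Landmarks in pixel coordinates: (x, y), y increasing downward.
upright = is_forward_head((100.0, 50.0), (95.0, 150.0))   # ~3 degrees
slouched = is_forward_head((140.0, 80.0), (95.0, 150.0))  # ~33 degrees
```

In a real system the ear and shoulder points would come from the pose and face landmark detectors, and the threshold would be calibrated per user.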

Display of Irradiation Location of Ultrasonic Beauty Device Using AR Scheme (증강현실 기법을 이용한 초음파 미용기의 조사 위치 표시)

  • Kang, Moon-Ho
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.9 / pp.25-31 / 2020
  • In this study, for the safe use of a portable ultrasonic skin-beauty device, an Android app was developed that shows the irradiation locations of focused ultrasound to the user through augmented reality (AR), enabling stable self-treatment, and its utility was assessed through testing. While the user treats their face with the beauty device, the user's face and the ultrasonic irradiation location on it are detected in real time with a smartphone camera. The irradiation location is then indicated on the face image and shown to the user, so that excessive ultrasound is not applied to the same area during treatment. To this end, ML Kit is used to detect the user's face landmarks in real time, and they are compared with a reference face model to estimate the pose of the face, such as its rotation and translation. An LED mounted on the ultrasonic irradiation part of the device is lit during irradiation; its light is detected to find the irradiation position on the smartphone screen, and that position is registered and displayed on the face image based on the estimated face pose. Each task in the app is implemented with threads and timers, and all tasks execute within 75 ms. The tests showed that the time taken to register and display 120 ultrasound irradiation positions was less than 25 ms, and the display accuracy was within 20 mm when the face did not rotate significantly.

Study of Posture Evaluation Method in Chest PA Examination based on Artificial Intelligence (인공지능 기반 흉부 후전방향 검사에서 자세 평가 방법에 관한 연구)

  • Ho Seong Hwang;Yong Seok Choi;Dae Won Lee;Dong Hyun Kim;Ho Chul Kim
    • Journal of Biomedical Engineering Research / v.44 no.3 / pp.167-175 / 2023
  • Chest PA is the basic examination in radiographic imaging, and demand for it is constantly increasing because of the rise in respiratory diseases. However, this demand is not being met, owing to problems such as a shortage of radiological technologists, patients' feelings of shame caused by physical contact, and the spread of infectious diseases. Artificial intelligence has been applied in many cases to solve such problems. The purpose of this research is therefore to build an artificial intelligence dataset for Chest PA and to find a posture evaluation method. To construct the posture dataset, posture images were acquired during actual and simulated examinations and classified into correct and incorrect patient postures. To evaluate the AI-based posture assessment, a pose estimation algorithm was used to preprocess the dataset, and an AI classification algorithm was then applied. As a result, the Chest PA posture dataset was validated with over 95% accuracy across all AI classifiers, and accuracy was further improved by combining the top-down pose estimation algorithm AlphaPose with the InceptionV3 classification algorithm. Based on this, it should be possible to build a non-face-to-face automatic Chest PA examination system using artificial intelligence.