• Title/Abstract/Keyword: face pose

Search results: 188 items (processing time: 0.028 seconds)

Human Face Tracking and Modeling using Active Appearance Model with Motion Estimation

  • Tran, Hong Tai;Na, In Seop;Kim, Young Chul;Kim, Soo Hyung
    • 스마트미디어저널 / Vol. 6, No. 3 / pp.49-56 / 2017
  • Images and videos that include the human face contain a great deal of information, so accurately extracting the human face is an important problem in computer vision. In real life, however, human faces exhibit a wide variety of shapes and textures. A model-based approach is one of the best ways to adapt to these variations, since unknown data can be represented by the model once it is built. However, the model-based approach is weak when the motion between two frames is large, whether from a sudden change of pose or from fast movement. In this paper, we propose an enhanced human face-tracking model. The approach combines human face detection and motion estimation using Cascaded Convolutional Neural Networks with continuous face tracking and model-correction steps using the Active Appearance Model. The proposed system detects the human face in the first input frame and initializes the models. On later frames, Cascaded CNN face detection estimates the target motion, such as location or pose, before the previous model is applied and fitted to the new target.
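
A minimal sketch of the detect-then-fit loop described in this abstract, assuming hypothetical `CascadedCNNDetector` and `ActiveAppearanceModel` objects as stand-ins for the paper's detector and AAM fitter (placeholder interfaces, not a published API):

```python
import cv2  # OpenCV, used only for video I/O here

def track_faces(video_path, detector, aam):
    """Detect the face once, then re-estimate motion before each AAM fit."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return

    # First frame: detect the face and initialize the appearance model.
    box = detector.detect(frame)          # hypothetical cascaded-CNN detector
    params = aam.initialize(frame, box)   # hypothetical AAM initialization

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Motion estimation: the detector proposes the new location/pose so the
        # AAM does not have to recover large inter-frame motion on its own.
        box = detector.detect(frame)
        params = aam.fit(frame, params, init_box=box)  # refine shape/appearance
        yield frame, params

    cap.release()
```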

A Novel Multi-view Face Detection Method Based on Improved Real Adaboost Algorithm

  • Xu, Wenkai;Lee, Eung-Joo
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 7, No. 11 / pp.2720-2736 / 2013
  • Multi-view face detection has become an active research area in the last few years. In this paper, a novel multi-view human face detection algorithm based on an improved Real AdaBoost is presented. The Real AdaBoost algorithm is improved by a weighted combination of weak classifiers, and approximately optimal combination coefficients are obtained. We then show that the role of the sample-weight adjustment and the weak-classifier training procedure is to guarantee the independence of the weak classifiers. A coarse-to-fine hierarchical face detector is proposed that combines the efficiency of Haar features with a pose estimation phase based on our Real AdaBoost algorithm. The algorithm greatly reduces training time compared with classical Real AdaBoost; it also speeds up the convergence of the strong classifier and reduces the number of weak classifiers. For frontal face detection, experiments on the MIT+CMU frontal face test set yield a 96.4% correct rate with 528 false alarms; on a real-time multi-view face test set, the method yields a 94.7% correct rate. The experimental results verify the effectiveness of the proposed approach.
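
The core of Real AdaBoost, which this abstract builds on, is the sample-reweighting round. Below is a minimal NumPy sketch of that standard step (textbook form, not the paper's improved weighted combination of weak classifiers):

```python
import numpy as np

def real_adaboost_round(X, y, weights, train_weak):
    """One boosting round: fit a real-valued weak classifier and reweight samples.

    y is in {-1, +1}; train_weak(X, y, w) returns a function h(X) -> real scores.
    """
    h = train_weak(X, y, weights)        # weak learner trained on weighted data
    scores = h(X)                        # real-valued confidence outputs
    # Samples the weak classifier gets wrong (or is unsure about) gain weight.
    weights = weights * np.exp(-y * scores)
    weights /= weights.sum()             # renormalize to a distribution
    return h, weights
```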

원근투영법 기반의 PTZ 카메라를 이용한 머리자세 추정 (Head Pose Estimation Based on Perspective Projection Using PTZ Camera)

  • 김진서;이경주;김계영
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol. 7, No. 7 / pp.267-274 / 2018
  • This paper describes a head pose estimation method using a PTZ camera. When the extrinsic parameters of the camera change due to rotation or translation, the estimated face pose changes as well. We propose a new method that estimates the head pose independently of the rotation and position changes of the PTZ camera. The proposed method consists of face detection, feature extraction, and pose estimation. Faces are detected using MCT features, facial features are extracted with a regression-tree method, and the head pose is estimated with the POSIT algorithm. The original POSIT algorithm does not account for camera rotation, so to estimate the head pose robustly under changes in the camera's extrinsic parameters, this paper improves POSIT based on perspective projection. Experiments confirmed that the proposed method improves RMSE by about $0.6^{\circ}$ over the existing method.
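
For orientation, the landmark-based head pose step can be illustrated with OpenCV's `cv2.solvePnP`, which plays a role similar to POSIT under a perspective projection model. This is a generic sketch, not the paper's improved POSIT; the 3D model coordinates and the focal-length guess are illustrative assumptions:

```python
import cv2
import numpy as np

# Generic 3D reference points of a face model (millimetres, nose tip at origin).
# Illustrative values only, not the model used in the paper.
MODEL_POINTS = np.array([
    (0.0,    0.0,    0.0),    # nose tip
    (0.0,  -63.6,  -12.5),    # chin
    (-43.3,  32.7,  -26.0),   # left eye outer corner
    (43.3,   32.7,  -26.0),   # right eye outer corner
    (-28.9, -28.9,  -24.1),   # left mouth corner
    (28.9,  -28.9,  -24.1),   # right mouth corner
])

def head_pose(image_points, frame_size):
    """Estimate head rotation/translation from six detected 2D landmarks.

    image_points: (6, 2) float array ordered like MODEL_POINTS.
    """
    h, w = frame_size
    focal = w  # crude focal-length guess; a calibrated camera matrix is better
    camera_matrix = np.array([[focal, 0, w / 2],
                              [0, focal, h / 2],
                              [0,     0,     1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points,
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    return rvec, tvec  # rotation vector (Rodrigues form) and translation
```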

홈 트레이닝을 위한 운동 동작 분류 및 교정 시스템 (Pose Classification and Correction System for At-home Workouts)

  • 강재민;박성수;김윤수;감진규
    • 한국정보통신학회논문지 / Vol. 25, No. 9 / pp.1183-1189 / 2021
  • People who work out at home, lacking professional in-person coaching, can strain their bodies by performing exercises with incorrect posture. This study proposes a "video-data-based motion classification and posture correction system" that corrects the user's posture using a pose estimation model and a multilayer perceptron. After the pose estimation model predicts skeleton information, a deep neural network classifies which exercise is being performed, and correction is carried out by reporting the correct joint angles. To improve the performance of the motion classification model, a voting algorithm that considers the results of consecutive frames is applied. The multilayer-perceptron-based classifier achieves an accuracy of 0.9, and the voting algorithm raises the classification accuracy to 0.93.
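
The frame-voting idea mentioned above can be sketched in a few lines; the window size and the way labels are produced are assumptions for illustration, not the paper's settings:

```python
from collections import Counter, deque

class MajorityVoter:
    """Smooth per-frame exercise predictions by majority vote over a sliding window."""

    def __init__(self, window_size=15):  # window size is an assumed value
        self.window = deque(maxlen=window_size)

    def update(self, frame_label):
        self.window.append(frame_label)
        # Return the label that occurs most often among recent frames.
        return Counter(self.window).most_common(1)[0][0]

# Usage: voter = MajorityVoter(); stable_label = voter.update(mlp_prediction)
```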

Automatic Camera Pose Determination from a Single Face Image

  • Wei, Li;Lee, Eung-Joo;Ok, Soo-Yol;Bae, Sung-Ho;Lee, Suk-Hwan;Choo, Young-Yeol;Kwon, Ki-Ryong
    • 한국멀티미디어학회논문지 / Vol. 10, No. 12 / pp.1566-1576 / 2007
  • Camera pose information derived from a 2D face image is very important for synchronizing a virtual 3D face model with the real face, and also for other uses such as human-computer interfaces, 3D object estimation, and automatic camera control. In this paper, we present an algorithm that determines the camera position from a single 2D face image using the relationship between the mouth position and the face region boundary. The algorithm first corrects color bias with a lighting compensation step, then nonlinearly transforms the image into the $YC_bC_r$ color space and uses the distinctive chrominance features of skin in this space to detect the face region. For each face candidate, the nearly inverse relationship between the $C_b$ and $C_r$ clusters of facial features is used to detect the mouth position. The geometric relationship between the mouth position and the face region boundary then determines the camera's rotation angles about the x-axis and y-axis, and the relationship between the face region size and the camera-face distance determines that distance. Experimental results demonstrate the validity of the algorithm, with a correct determination rate sufficient for practical application.
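
The chrominance-based face-region step can be approximated in a few lines of OpenCV; the threshold ranges below are common rule-of-thumb skin bounds, not the values used in the paper (note that OpenCV orders the channels as Y, Cr, Cb):

```python
import cv2
import numpy as np

def skin_mask(bgr_image):
    """Rough face/skin candidate mask in the YCrCb color space."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Typical chrominance bounds for skin; tune per camera and lighting.
    lower = np.array([0, 133, 77], dtype=np.uint8)    # Y, Cr, Cb
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Remove small speckles before looking for the face boundary.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return mask
```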


3차원 얼굴인식 모델에 관한 연구: 모델 구조 비교연구 및 해석 (A Study On Three-dimensional Optimized Face Recognition Model : Comparative Studies and Analysis of Model Architectures)

  • 박찬준;오성권;김진율
    • 전기학회논문지 / Vol. 64, No. 6 / pp.900-911 / 2015
  • In this paper, a 3D face recognition model is designed using a polynomial-based RBFNN (Radial Basis Function Neural Network) and a PNN (Polynomial Neural Network), and its recognition rate is evaluated. Existing 2D face recognition models can suffer degraded recognition rates under external conditions such as changes in image brightness that affect facial features, so 3D face recognition using a 3D scanner is performed to overcome this disadvantage of 2D recognition. In the preprocessing stage, 3D face images acquired under varying poses are converted to frontal images by pose compensation. The depth data of the face shape is extracted using multiple point signatures, and the depth information for the whole face is obtained with the tip of the nose as a reference point. Parameter optimization is carried out with both ABC (Artificial Bee Colony) and PSO (Particle Swarm Optimization) for effective training and recognition. The experimental dataset is built from face images of students and researchers in the IC&CI Lab of Suwon University. Using the 3D face images collected in the IC&CI Lab, the performance of 3D face recognition is evaluated and compared for the two types of models as well as for the point signature method based on two kinds of depth data.
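
As background for the RBFNN component mentioned above, a minimal Gaussian-RBF forward pass might look like the sketch below; the centers, widths, and linear readout are simplified assumptions and do not reproduce the paper's polynomial-based architecture or its ABC/PSO optimization:

```python
import numpy as np

def rbf_forward(X, centers, sigmas, weights):
    """Gaussian RBF network: hidden activations followed by a linear readout.

    X:       (n_samples, n_features) input vectors
    centers: (n_hidden, n_features)  prototype vectors
    sigmas:  (n_hidden,)             kernel widths
    weights: (n_hidden, n_classes)   output-layer weights
    """
    # Squared Euclidean distance from every sample to every center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    hidden = np.exp(-d2 / (2.0 * sigmas ** 2))   # Gaussian activations
    return hidden @ weights                       # class scores
```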

ASMs을 이용한 특징점 추출에 기반한 3D 얼굴데이터의 정렬 및 정규화 : 정렬 과정에 대한 정량적 분석 (3D Face Alignment and Normalization Based on Feature Detection Using Active Shape Models : Quantitative Analysis on Aligning Process)

  • 신동원;박상준;고재필
    • 한국CDE학회논문집 / Vol. 13, No. 6 / pp.403-411 / 2008
  • The alignment of facial images is crucial for 2D face recognition, and the same is true of facial meshes for 3D face recognition. Most 3D face recognition methods mention 3D alignment but do not describe their approaches in detail. In this paper, we focus on describing an automatic 3D alignment from the viewpoint of quantitative analysis. The paper presents a framework for 3D face alignment and normalization based on feature points obtained by Active Shape Models (ASMs). The positions of the eyes and mouth make it possible to align the 3D face exactly in three-dimensional space. A rotational transform about each axis is defined with respect to the reference position; in the aligning process, this transform converts an input 3D face with large pose variation to the reference frontal view. A facial region is then cropped from the aligned face using a spherical region centered at the nose tip. The cropped face is shifted and fit into a frame of specified size for normalization. Interpolation is subsequently applied to resample the face at equal intervals and to fill holes, and color interpolation is carried out at the same intervals. The outputs are normalized 2D and 3D faces that can be used for face recognition. Finally, we conduct two sets of experiments to measure alignment errors and evaluate the performance of the suggested process.
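
One compact way to build a frontalizing rotation from eye and mouth landmarks, in the spirit of the alignment step described above, is sketched below; it assumes the three landmarks are already available as 3D points and is not the paper's exact procedure:

```python
import numpy as np

def frontalizing_rotation(left_eye, right_eye, mouth):
    """Build a rotation that maps the face's own axes onto the reference frame."""
    x_axis = right_eye - left_eye                      # across the eyes
    x_axis /= np.linalg.norm(x_axis)
    eye_center = (left_eye + right_eye) / 2.0
    y_axis = eye_center - mouth                        # roughly "up" the face
    y_axis -= x_axis * np.dot(y_axis, x_axis)          # orthogonalize against x
    y_axis /= np.linalg.norm(y_axis)
    z_axis = np.cross(x_axis, y_axis)                  # out of the face
    R = np.stack([x_axis, y_axis, z_axis])             # rows = face axes
    return R  # apply as vertices @ R.T to express the mesh in the frontal frame
```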

주행 로봇을 위한 단일 카메라 영상에서 손든 자세 검출 알고리즘 (Hand Raising Pose Detection in the Images of a Single Camera for Mobile Robot)

  • 권기일
    • 로봇학회논문지 / Vol. 10, No. 4 / pp.223-229 / 2015
  • This paper proposes a novel method for detecting hand-raising poses in images acquired from a single camera attached to a mobile robot navigating unknown dynamic environments. Due to unconstrained illumination, high variance in human appearance, and unpredictable backgrounds, detecting hand-raising gestures in such images is very challenging. The proposed method first detects faces to determine the region of interest (ROI), and within this ROI we detect hands using a HOG-based hand detector. Using the color distribution of the face region, we evaluate each candidate in the detected hand region. To handle cases where face detection fails, we also use a HOG-based hand-raising pose detector. Unlike other hand-raising pose detection systems, we evaluate our algorithm both on images acquired from the robot's camera and on images obtained from the Internet, which contain unknown backgrounds, unconstrained illumination, and a very high level of variance in hand-raising poses. Our experimental results show that the proposed method robustly detects hand-raising poses against complex backgrounds and unknown lighting conditions.
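
HOG features of the kind the detectors above rely on can be computed directly with scikit-image; the cell and block sizes below are generic defaults, not the paper's settings:

```python
from skimage import color, io
from skimage.feature import hog

def hog_descriptor(image_path):
    """Compute a HOG descriptor for a candidate hand / raised-arm window."""
    image = color.rgb2gray(io.imread(image_path))
    features = hog(
        image,
        orientations=9,            # gradient orientation bins
        pixels_per_cell=(8, 8),
        cells_per_block=(2, 2),
        block_norm="L2-Hys",
    )
    return features  # feed to an SVM or other classifier
```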

비강압적 홍채 인식을 위한 전 방향 카메라에서의 다각도 얼굴 검출 (Multi-views face detection in Omni-directional camera for non-intrusive iris recognition)

  • 이현수;배광혁;김재희;박강령
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2003년도 컴퓨터소사이어티 추계학술대회논문집 / pp.115-118 / 2003
  • This paper describes a system for detecting multi-view faces and estimating their poses in an omni-directional camera environment for non-intrusive iris recognition. The approach has two parts. First, the moving region is identified using difference-image information, and this region is analyzed with face-color information to find the face candidate region. Second, PCA (Principal Component Analysis) is applied to detect multi-view faces and to estimate the face pose.
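
The PCA step can be illustrated with a standard eigenface-style projection; this is a generic sketch, not the conference paper's implementation, and the component count is an assumed value:

```python
import numpy as np

def fit_pca(face_vectors, n_components=20):
    """face_vectors: (n_samples, n_pixels) flattened, equally sized face patches."""
    mean = face_vectors.mean(axis=0)
    centered = face_vectors - mean
    # Principal directions via SVD of the centered data matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    return mean, components

def project(patch_vector, mean, components):
    """Project a candidate patch into the face subspace; a small reconstruction
    error suggests the patch resembles a face pose seen in training."""
    coeffs = components @ (patch_vector - mean)
    reconstruction = mean + components.T @ coeffs
    error = np.linalg.norm(patch_vector - reconstruction)
    return coeffs, error
```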


증강현실 기법을 이용한 초음파 미용기의 조사 위치 표시 (Display of Irradiation Location of Ultrasonic Beauty Device Using AR Scheme)

  • 강문호
    • 한국산학기술학회논문지 / Vol. 21, No. 9 / pp.25-31 / 2020
  • In this study, an Android app is developed that uses an augmented reality (AR) technique to show the user where focused ultrasound is being applied while a portable ultrasonic skin-care device is in use, so that self-treatment can be performed safely; its usefulness is demonstrated through tests. While the user treats facial areas with the ultrasonic device, the user's face and the ultrasound irradiation position on the face are detected in real time through the smartphone camera, and the irradiation positions are overlaid on the face image shown to the user, so that ultrasound is not applied excessively and repeatedly to the same spot. To do this, ML-Kit is used to detect the landmarks of the user's face in real time, and the pose of the face, such as its rotation and translation, is estimated by comparison with a reference face-shape model. An LED mounted on the ultrasound-emitting part of the device is switched on during irradiation; the position of the LED light is then located to determine the ultrasound irradiation position on the smartphone screen, and, based on the estimated pose information, the irradiation position is registered onto the face image and displayed. Each task in the app is implemented with threads and a timer so that the whole pipeline runs within 75 ms. Test results show that registering and displaying 120 ultrasound irradiation positions takes no more than 25 ms, and that the display accuracy is within 20 mm when the face is not rotated significantly.
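
The LED-spot localization described above can be approximated by finding the brightest blob in the camera frame; the blur size, the brightness threshold, and the use of OpenCV in place of the paper's Android/ML-Kit pipeline are assumptions made for illustration:

```python
import cv2

def find_led_spot(bgr_frame):
    """Return the (x, y) pixel of the brightest spot, a stand-in for the LED."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    # Blur first so a single hot pixel cannot win over the actual LED blob.
    blurred = cv2.GaussianBlur(gray, (11, 11), 0)
    _, max_val, _, max_loc = cv2.minMaxLoc(blurred)
    return max_loc if max_val > 200 else None  # brightness threshold is assumed
```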