• Title/Summary/Keyword: Face Direction Detection

Search results: 51

Detection of Face Direction by Using Inter-Frame Difference

  • Jang, Bongseog; Bae, Sang-Hyun
    • Journal of Integrative Natural Science / v.9 no.2 / pp.155-160 / 2016
  • Applying image processing techniques to education, the learner's face is photographed, facial expression and movement are detected from video, and a system that estimates the learner's degree of concentration is developed. For a single learner, such a system can estimate concentration from the learner's gaze direction and eye state. With multiple learners, the concentration level of every learner in the classroom must be measured, but assigning one camera per learner is inefficient. In this paper, the position of the face region is estimated from classroom video using inter-frame differences along the direction of motion, and a system is proposed that detects face direction through face-part detection by template matching. In the first image of the video, frontal face detection by the Viola-Jones method is performed on the inter-frame difference result. The direction of the motion arising in the face region is then estimated from the displacement, and the face region is tracked. Face parts are detected within the tracked region, and finally the face direction is estimated from the results of face tracking and face-part detection.
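
The first stages of this pipeline (inter-frame differencing to localize motion, followed by Viola-Jones frontal face detection) can be sketched with OpenCV as below; the video path, the motion threshold, and the way the motion mask is combined with the detections are illustrative assumptions, not details taken from the paper.

```python
import cv2

# Minimal sketch: inter-frame difference to localize motion, then
# Viola-Jones (Haar cascade) frontal face detection on the current frame.
cap = cv2.VideoCapture("classroom.mp4")  # placeholder path
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

ok, prev = cap.read()
if not ok:
    raise SystemExit("cannot read video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Inter-frame difference highlights regions where the learner moved.
    diff = cv2.absdiff(gray, prev_gray)
    _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)  # arbitrary threshold

    # Viola-Jones frontal face detection; the motion mask is used here only
    # to flag whether a detected face region contains recent motion.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        moving = cv2.countNonZero(motion_mask[y:y + h, x:x + w]) > 0
        cv2.rectangle(frame, (x, y), (x + w, y + h),
                      (0, 255, 0) if moving else (0, 0, 255), 2)

    prev_gray = gray
```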

Face Detection Using Pixel Direction Code and Look-Up Table Classifier (픽셀 방향코드와 룩업테이블 분류기를 이용한 얼굴 검출)

  • Lim, Kil-Taek; Kang, Hyunwoo; Han, Byung-Gil; Lee, Jong Taek
    • IEMEK Journal of Embedded Systems and Applications / v.9 no.5 / pp.261-268 / 2014
  • Face detection is essential to the full automation of face image processing application systems such as face recognition, facial expression recognition, age estimation, and gender identification. Local image features such as Haar-like, LBP, and MCT, combined with the AdaBoost algorithm for classifier combination, have proven very effective for real-time face detection. In this paper, we present a face detection method using a local pixel direction code (PDC) feature and lookup table classifiers. The proposed PDC feature is much more effective for detecting faces than existing local binary structural features such as MCT and LBP. We found that our method's classification rate, as well as its detection rate at an equal false positive rate, is higher than that of conventional methods.
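
The paper's exact pixel direction code is not reproduced here; as a rough illustration of the idea, the sketch below quantizes the local gradient direction at each pixel into eight codes and scores a detection window by summing per-position lookup-table entries indexed by those codes. The Sobel-based code definition and the table layout are assumptions.

```python
import numpy as np
import cv2

def pixel_direction_codes(gray, bins=8):
    """Quantize the local gradient direction at each pixel into `bins` codes.

    Illustrative stand-in for a pixel direction code (PDC) feature,
    not the exact definition used in the paper.
    """
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    angle = np.arctan2(gy, gx)                                  # range [-pi, pi]
    return ((angle + np.pi) / (2 * np.pi) * bins).astype(np.int32) % bins

def window_score(codes_window, lut):
    """Score a detection window with a lookup-table classifier.

    lut[p, c] holds a learned score (e.g., a log-likelihood ratio) for code c
    at pixel position p; the window score is the sum over all positions.
    """
    flat = codes_window.ravel()
    positions = np.arange(flat.size)
    return lut[positions, flat].sum()
```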

A Directional Perception System based on Human Detection for Public Guide Robots (공공 안내 로봇을 위한 인체 검출 기반의 방향성 감지 시스템)

  • Doh, Tae-Yong; Baek, Jeong-Hyun
    • Journal of Institute of Control, Robotics and Systems / v.16 no.5 / pp.481-488 / 2010
  • Most public guide robots installed in public spots such as exhibition halls and department store lobbies have a poor ability to distinguish the users who require services. To provide suitable services, public guide robots should have a human detection system that makes it possible to infer customers' intentions from their movement direction. In this paper, a DPS (Directional Perception System) is realized based on face detection technology. In particular, to capture human movement efficiently and reduce computation time, a human detection technique using the face rectangle obtained from the human face is developed. The DPS determines which customer needs the services of the public guide robot by examining the size and direction of the face rectangle. With the DPS, guide services can be provided with greater satisfaction and reliability, and power efficiency is also improved because the robot serves only users who explicitly express an intention to be served. Finally, several experiments verify the feasibility of the proposed DPS.
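
The abstract does not spell out the decision rule, so the sketch below only illustrates the general idea of judging approach intent from the detected face rectangle: a rectangle that keeps growing between frames suggests the person is walking toward the robot. The growth factor and the use of OpenCV's Haar cascade are assumptions, not the paper's settings.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def largest_face(gray):
    """Return the largest detected face rectangle (x, y, w, h), or None."""
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda f: f[2] * f[3])

def approaching(prev_face, cur_face, growth=1.05):
    """Heuristic: a face rectangle whose area keeps growing suggests the
    person is walking toward the robot (illustrative rule, not the paper's)."""
    if prev_face is None or cur_face is None:
        return False
    return (cur_face[2] * cur_face[3]) > growth * (prev_face[2] * prev_face[3])
```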

Multi-view Human Recognition based on Face and Gait Features Detection

  • Nguyen, Anh Viet; Yu, He Xiao; Shin, Jae-Ho; Park, Sang-Yun; Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.11 no.12 / pp.1676-1687 / 2008
  • In this paper, we propose a new multi-view human recognition method based on a face and gait feature detection algorithm. To locate the moving object, we use the difference between two consecutive frames. From the extracted object, the first important characteristic, the walking direction, is determined using the contour of the head and shoulder region. If the individual appears in the camera facing frontally, face features are used for recognition: face detection is based on a combination of skin color and Haar-like features, while eigen-images and PCA are used in the recognition stage. In the other case, when the view is not frontal, gait features are used. To evaluate the proposed method and compare it with another method, we also present simulation results obtained in indoor and outdoor environments. The experimental results show that the proposed algorithm has better recognition efficiency than the conventional single-view recognition method.
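
For the frontal-view branch, the abstract mentions eigen-images and PCA for recognition; the sketch below shows a generic eigenface-style recognizer (mean subtraction, SVD, nearest neighbour in the projected space). It is a minimal stand-in rather than the paper's exact pipeline, and it assumes pre-cropped, aligned face images of equal size.

```python
import numpy as np

def train_eigenfaces(train_faces, n_components=20):
    """Learn eigenfaces from a list of equally sized grayscale face crops."""
    X = np.stack([f.ravel().astype(np.float64) for f in train_faces])
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data gives the principal directions (eigenfaces).
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    eigenfaces = vt[:n_components]
    return mean, eigenfaces, Xc @ eigenfaces.T          # training projections

def recognize(face, mean, eigenfaces, train_proj, train_labels):
    """Classify a face crop by nearest neighbour in eigenface space."""
    proj = (face.ravel().astype(np.float64) - mean) @ eigenfaces.T
    dists = np.linalg.norm(train_proj - proj, axis=1)
    return train_labels[int(np.argmin(dists))]
```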


Detection of Facial Direction using Facial Features (얼굴 특징 정보를 이용한 얼굴 방향성 검출)

  • Park Ji-Sook; Dong Ji-Youn
    • Journal of Internet Computing and Services / v.4 no.6 / pp.57-67 / 2003
  • The recent rapid development of multimedia and optical technologies has brought great attention to application systems that process facial image features. Previous research in facial image processing has mainly focused on recognizing human faces and analyzing facial expressions from frontal face images; little research has been carried out on image-based detection of face direction. Moreover, existing approaches to detecting face direction, which normally use sequential images captured by a single camera, have the limitation that a frontal image must be given before any other images. In this paper, we propose a method to detect face direction using facial features, in particular a facial trapezoid defined by the two eyes and the lower lip. Specifically, the proposed method forms a facial direction formula, derived from statistical data on the ratio of the right and left areas of the facial trapezoid, to identify whether the face is turned toward the right or the left. The proposed method can be used effectively in automatic photo arrangement systems, which often need to set different left or right margins for a photo according to the face direction of the person in it.
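
As a geometric illustration of the left/right area comparison, the sketch below splits the eyes-lip region at the midpoint between the eyes and compares the two sub-areas; the paper's statistically derived direction formula and its exact trapezoid construction are not reproduced, and the landmark coordinates are assumed to be given by a separate detector.

```python
def triangle_area(a, b, c):
    """Area of a triangle from three 2-D points (shoelace formula)."""
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def left_right_ratio(left_eye, right_eye, lower_lip):
    """Split the eyes-lip region by the line from the lip to the midpoint
    between the eyes and compare the left and right areas (illustrative
    geometry only; the paper's statistical formula is not reproduced)."""
    mid_top = ((left_eye[0] + right_eye[0]) / 2.0,
               (left_eye[1] + right_eye[1]) / 2.0)
    left_area = triangle_area(left_eye, mid_top, lower_lip)
    right_area = triangle_area(right_eye, mid_top, lower_lip)
    return left_area / max(right_area, 1e-9)

# A ratio noticeably above or below 1 indicates that one half of the region
# appears larger, i.e. the head is rotated toward one side (rough heuristic).
```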


A Hybrid Approach of Efficient Facial Feature Detection and Tracking for Real-time Face Direction Estimation (실시간 얼굴 방향성 추정을 위한 효율적인 얼굴 특성 검출과 추적의 결합방법)

  • Kim, Woonggi; Chun, Junchul
    • Journal of Internet Computing and Services / v.14 no.6 / pp.117-124 / 2013
  • In this paper, we present a new method that efficiently estimates face direction from a sequence of input video images in real time. The proposed method first detects the facial region and the major facial features (both eyes, nose, and mouth) within the detected facial area using Haar-like features, which are relatively insensitive to lighting variation. It then tracks the feature points in every frame using optical flow and determines the face direction from the tracked feature points. Further, to prevent false feature positions from being accepted when feature coordinates are lost during optical-flow tracking, the method checks the validity of the feature locations in real time by template matching against the detected facial features. Depending on the correlation score of this template-matching check, the process either re-detects the facial features or continues tracking them while determining the face direction. The template-matching step initially stores the locations of four facial features (the left and right eyes, the tip of the nose, and the mouth) in the feature detection phase and re-evaluates this information, by detecting new facial features from the input image, whenever the similarity between the stored information and the features traced by optical flow crosses a certain threshold. The proposed approach automatically alternates between the feature detection and feature tracking phases, enabling stable face pose estimation in real time. Experiments show that the proposed method estimates face direction efficiently.
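
The tracking-plus-validation loop can be sketched with OpenCV's pyramidal Lucas-Kanade optical flow and normalized cross-correlation template matching as below; the window sizes, search radius, and correlation threshold are illustrative assumptions rather than the paper's settings.

```python
import cv2
import numpy as np

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def track_features(prev_gray, gray, points):
    """Track facial feature points with pyramidal Lucas-Kanade optical flow."""
    pts = np.float32(points).reshape(-1, 1, 2)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None, **lk_params)
    return new_pts.reshape(-1, 2), status.ravel()

def validate_by_template(gray, point, template, threshold=0.6, search=40):
    """Check a tracked point against its stored feature template with
    normalized cross-correlation; a low score suggests re-detection is
    needed. The threshold is an illustrative value, not the paper's."""
    x, y = int(point[0]), int(point[1])
    h, w = template.shape
    x0, y0 = max(x - search, 0), max(y - search, 0)
    patch = gray[y0:y0 + 2 * search, x0:x0 + 2 * search]
    if patch.shape[0] < h or patch.shape[1] < w:
        return False
    score = cv2.matchTemplate(patch, template, cv2.TM_CCOEFF_NORMED).max()
    return score >= threshold
```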

Robust pupil detection and gaze tracking under occlusion of eyes

  • Lee, Gyung-Ju; Kim, Jin-Suh; Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.21 no.10 / pp.11-19 / 2016
  • Displays are becoming larger and more varied in form, so previous gaze-tracking methods do not apply directly. Mounting the gaze-tracking camera above the display solves the problem of display size and height, but it prevents the use of the infrared corneal-reflection information that previous methods rely on. In this paper, we propose a pupil detection method that is robust to eye occlusion and a simple way to calculate the gaze position from the inner eye corner, the pupil center, and the face pose information. In the proposed method, the camera that captures frames for gaze tracking switches between wide-angle and narrow-angle modes according to the person's position: if a face is detected within the field of view (FOV) in wide-angle mode, the camera switches to narrow-angle mode after computing the face position. Frames captured in narrow-angle mode contain gaze direction information for a person at a long distance. The gaze calculation consists of a face pose estimation step and a gaze direction calculation step. The face pose is estimated by mapping the feature points of the detected face to a 3D model. To calculate the gaze direction, an ellipse is first fitted using edge information split from the iris boundary of the pupil; if the pupil is occluded, its position is estimated with a deformable template. The gaze position on the display is then calculated from the pupil center, the inner eye corner, and the face pose information. Experiments demonstrate that the proposed gaze-tracking algorithm removes the constraint imposed by the display form and effectively calculates the gaze direction of a person at a long distance using a single camera, with performance evaluated at different distances.
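
The pupil-ellipse step can be illustrated with OpenCV as below: edge points are collected from the eye region and an ellipse is fitted to them. The Canny thresholds and the simple edge-point selection are assumptions; the paper additionally splits the iris edge and falls back to a deformable template when the pupil is occluded.

```python
import cv2
import numpy as np

def fit_pupil_ellipse(eye_gray, canny_low=30, canny_high=60):
    """Fit an ellipse to edge points around the dark pupil/iris region of a
    grayscale eye crop. Returns ((cx, cy), (major, minor), angle) or None.
    Thresholds and edge-point selection are illustrative only."""
    edges = cv2.Canny(eye_gray, canny_low, canny_high)
    ys, xs = np.nonzero(edges)
    if len(xs) < 5:                       # fitEllipse needs at least 5 points
        return None
    pts = np.column_stack([xs, ys]).astype(np.float32)
    (cx, cy), (major, minor), angle = cv2.fitEllipse(pts)
    return (cx, cy), (major, minor), angle   # (cx, cy) is the pupil center
```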

Detection of Facial Direction for Automatic Image Arrangement (이미지 자동배치를 위한 얼굴 방향성 검출)

  • 동지연; 박지숙; 이환용
    • Journal of Information Technology Applications and Management / v.10 no.4 / pp.135-147 / 2003
  • With the development of multimedia and optical technologies, application systems that use facial features have recently attracted increasing interest from researchers. Previous research in face processing has mainly used frontal images to recognize human faces visually and to extract facial expressions. However, applications such as image database systems that support queries based on face direction, and image arrangement systems that place facial images automatically in digital albums, must deal with the directional characteristics of a face. In this paper, we propose a method to detect face direction using facial features. In the proposed method, the facial trapezoid is defined by detecting points for the eyes and the lower lip. A facial direction formula, which calculates the right and left face direction, is then derived from statistical data on the ratio of the right and left areas of facial trapezoids. The proposed method gives an accurate estimate of the horizontal rotation of a face within an error tolerance of $\pm 1.31^{\circ}$ and takes an average execution time of 3.16 seconds.
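
The paper derives its direction formula from statistical data on the area ratio. A minimal way to build such a ratio-to-angle mapping, assuming a small calibration set of (area ratio, known rotation angle) pairs, is a least-squares polynomial fit as sketched below; the calibration values and polynomial degree are placeholders, not data from the paper.

```python
import numpy as np

# Hypothetical calibration data: measured left/right trapezoid area ratios
# at known horizontal rotation angles (degrees). Values are placeholders.
ratios = np.array([0.55, 0.75, 1.00, 1.30, 1.80])
angles = np.array([-30.0, -15.0, 0.0, 15.0, 30.0])

# Fit a low-degree polynomial mapping ratio -> angle, standing in for the
# paper's statistically derived facial direction formula.
coeffs = np.polyfit(ratios, angles, deg=2)

def estimate_horizontal_angle(area_ratio):
    return float(np.polyval(coeffs, area_ratio))

print(estimate_horizontal_angle(1.1))  # rough angle estimate for ratio 1.1
```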


A Simple Way to Find Face Direction (간단한 얼굴 방향성 검출방법)

  • Park Ji-Sook; Ohm Seong-Yong; Jo Hyun-Hee; Chung Min-Gyo
    • Journal of Korea Multimedia Society / v.9 no.2 / pp.234-243 / 2006
  • The recent rapid development of HCI and surveillance technologies has generated great interest in application systems that process faces. Much of the research effort on these systems has focused on areas such as face recognition, facial expression analysis, and facial feature extraction; comparatively few approaches to face direction detection have been reported. This paper proposes a method to detect the direction of a face using a facial feature called the facial triangle, which is formed by the two eyebrows and the lower lip. Specifically, based on a single monocular view of the face, the proposed method introduces very simple formulas to estimate the horizontal or vertical rotation angle of the face. The horizontal rotation angle is calculated from the ratio between the areas of the left and right facial triangles, while the vertical angle is obtained from the ratio between the base and height of the facial triangle. Experimental results show that the method obtains the horizontal angle within an error tolerance of $\pm 1.68^{\circ}$, and that it performs better as the magnitude of the vertical rotation angle increases.
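
The two ratios on which the paper's formulas are built can be computed from the three landmark points as sketched below; the mapping from these ratios to actual rotation angles is the paper's contribution and is not reproduced here, and the landmark coordinates are assumed to be given by a separate detector.

```python
import numpy as np

def _cross2(u, v):
    """z-component of the 2-D cross product."""
    return u[0] * v[1] - u[1] * v[0]

def facial_triangle_ratios(left_brow, right_brow, lower_lip):
    """Compute the horizontal cue (ratio of left/right sub-triangle areas)
    and the vertical cue (base-to-height ratio of the facial triangle)."""
    lb, rb, lip = (np.asarray(p, dtype=float) for p in (left_brow, right_brow, lower_lip))
    mid = (lb + rb) / 2.0

    def area(a, b, c):
        return 0.5 * abs(_cross2(b - a, c - a))

    horizontal_ratio = area(lb, mid, lip) / max(area(rb, mid, lip), 1e-9)

    base = np.linalg.norm(rb - lb)                               # eyebrow-to-eyebrow base
    height = abs(_cross2(rb - lb, lip - lb)) / max(base, 1e-9)   # lip distance to base line
    vertical_ratio = base / max(height, 1e-9)
    return horizontal_ratio, vertical_ratio
```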


Inclined Face Detection using JointBoost algorithm (JointBoost 알고리즘을 이용한 기울어진 얼굴 검출)

  • Jung, Youn-Ho; Song, Young-Mo; Ko, Yun-Ho
    • Journal of Korea Multimedia Society / v.15 no.5 / pp.606-614 / 2012
  • Face detection using the AdaBoost algorithm is one of the fastest and most robust face detection approaches, so many improvements and extensions of the method have been proposed. However, almost all previous approaches deal only with frontal faces and have limited discriminative capability for inclined faces, because they apply the same features to both frontal and inclined faces. Conventional approaches to detecting inclined faces, which either apply a frontal face detector to rotated input images or build a separate detector for each angle, require heavy computation and show low detection rates. To overcome this problem, this paper proposes a method for detecting inclined faces using JointBoost. The computational and sample complexity is reduced by finding common features that can be shared across the classes. Simulation results show that the detection rate of the proposed method is at least 2% higher than that of the conventional AdaBoost method under the same number of learning iterations. The proposed method also not only detects the presence of a face but also reports the inclination direction of the detected face.
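
The distinctive step in JointBoost is choosing, at each boosting round, a single weak learner together with the subset of classes that shares it. The sketch below shows a heavily simplified, brute-force version of that selection using decision stumps and weighted 0/1 error; the original JointBoost algorithm uses regression stumps, per-class constants, and a greedy subset search, so this is an illustration of the idea rather than the paper's method.

```python
import numpy as np
from itertools import combinations

def joint_boost_round(X, Y, W):
    """One simplified JointBoost selection round.

    X: (n, d) features; Y: (n, c) labels in {-1, +1}; W: (n, c) boosting weights.
    Tries every feature/threshold stump and every non-empty class subset that
    shares it, returning the combination with the lowest total weighted error.
    Classes outside the subset are scored as if they received only a constant
    majority vote this round. Brute force and illustrative only.
    """
    n, d = X.shape
    c = Y.shape[1]
    # Opt-out cost per class: error of the best weighted constant prediction.
    pos_w = (W * (Y > 0)).sum(axis=0)
    neg_w = (W * (Y < 0)).sum(axis=0)
    opt_out_err = np.minimum(pos_w, neg_w)

    best = None
    for j in range(d):
        for t in np.unique(X[:, j]):
            h = np.where(X[:, j] > t, 1.0, -1.0)            # shared stump output
            err_per_class = (W * (h[:, None] != Y)).sum(axis=0)
            for r in range(1, c + 1):
                for subset in combinations(range(c), r):
                    s = list(subset)
                    total = err_per_class[s].sum() + np.delete(opt_out_err, s).sum()
                    if best is None or total < best[0]:
                        best = (total, j, t, s)
    return best  # (error, feature index, threshold, classes sharing the stump)
```

Sharing one stump across several inclination classes is what lets a single detector both find a face and report which inclined-direction class it belongs to, with fewer total features than one detector per angle.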