• Title/Summary/Keyword: Face Direction Estimation (얼굴 방향성 추정)


A Hybrid Approach of Efficient Facial Feature Detection and Tracking for Real-time Face Direction Estimation (실시간 얼굴 방향성 추정을 위한 효율적인 얼굴 특성 검출과 추적의 결합방법)

  • Kim, Woonggi;Chun, Junchul
    • Journal of Internet Computing and Services / v.14 no.6 / pp.117-124 / 2013
  • In this paper, we present a new method that efficiently estimates face direction from a sequence of input video images in real time. The proposed method first detects the facial region and the major facial features, namely both eyes, the nose, and the mouth, using Haar-like features, which are relatively insensitive to lighting variation. It then tracks the feature points across frames using optical flow in real time and determines the direction of the face from the tracked points. Furthermore, to avoid accepting false feature positions when coordinates are lost during optical-flow tracking, the method validates the feature locations in real time by template matching against the detected facial features. Depending on the correlation score produced by the template matching, the process either re-detects the facial features or continues tracking them while determining the face direction. In the feature detection phase, the template matching step stores the locations of four facial features, namely the left eye, right eye, nose tip, and mouth, and re-evaluates this information by detecting new facial features from the input image whenever the dissimilarity between the stored information and the features traced by optical flow exceeds a certain threshold. The proposed approach automatically alternates between the feature detection and feature tracking phases and estimates face pose stably in real time. The experiments show that the proposed method estimates face direction efficiently.
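
The geometric step above, inferring direction from the four tracked feature points, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the point layout and the offset-based yaw/pitch cues are assumptions.

```python
import math

def estimate_direction(left_eye, right_eye, nose, mouth):
    """Rough yaw/pitch cues from four tracked facial feature points.

    Returns (yaw, pitch):
    - yaw near 0: nose horizontally centered between the eyes (frontal)
    - pitch near 0.5: nose vertically midway between the eye line and the mouth
    """
    mid_x = (left_eye[0] + right_eye[0]) / 2.0      # midpoint of the eye line
    mid_y = (left_eye[1] + right_eye[1]) / 2.0
    inter_eye = math.dist(left_eye, right_eye)      # scale normalizer
    yaw = (nose[0] - mid_x) / inter_eye             # horizontal nose offset
    pitch = (nose[1] - mid_y) / (mouth[1] - mid_y)  # vertical nose position
    return yaw, pitch
```

For a frontal layout such as eyes at (-1, 0) and (1, 0), nose at (0, 1), mouth at (0, 2), this yields yaw 0 and pitch 0.5; the yaw cue moves away from 0 as the nose drifts off the eye midline.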

Facial Behavior Recognition for Driver's Fatigue Detection (운전자 피로 감지를 위한 얼굴 동작 인식)

  • Park, Ho-Sik;Bae, Cheol-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.9C / pp.756-760 / 2010
  • This paper proposes a novel facial behavior recognition system for detecting driver fatigue. Facial behavior manifests in various facial features such as facial expression, head pose, gaze, and wrinkles, but it is very difficult to discriminate a particular behavior from these features alone, because human behavior is complex and the face alone rarely provides enough information. The proposed system first performs eye tracking, facial feature tracking, furrow detection, head orientation estimation, and head motion detection, and encodes the obtained features as Action Units (AUs) of the Facial Action Coding System (FACS). Based on the obtained AUs, it infers the probability of each state through a Bayesian network.
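
The AU-to-state inference can be illustrated with a naive-Bayes simplification of the Bayesian network. The priors, likelihood tables, and AU set below are invented for illustration; they are not taken from the paper.

```python
def fatigue_posterior(observed_aus, prior=0.3):
    """P(fatigue | AUs) under a naive-Bayes independence assumption.

    observed_aus: dict mapping AU name -> True/False observation.
    The likelihood tables below are illustrative numbers only.
    """
    p_au_given_fatigue = {"AU43_eye_closure": 0.8, "AU26_jaw_drop": 0.7,
                          "AU54_head_down": 0.6}
    p_au_given_alert = {"AU43_eye_closure": 0.1, "AU26_jaw_drop": 0.2,
                        "AU54_head_down": 0.3}
    joint_f, joint_a = prior, 1.0 - prior
    for au, present in observed_aus.items():
        pf, pa = p_au_given_fatigue[au], p_au_given_alert[au]
        joint_f *= pf if present else (1.0 - pf)
        joint_a *= pa if present else (1.0 - pa)
    return joint_f / (joint_f + joint_a)    # normalize over the two states
```

A full Bayesian network would additionally model dependencies between AUs; the naive form above only shows how per-AU evidence accumulates into a state probability.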

Recognition of Hmm Facial Expressions using Optical Flow of Feature Regions (얼굴 특징영역상의 광류를 이용한 표정 인식)

  • Lee Mi-Ae;Park Ki-Soo
    • Journal of KIISE:Software and Applications / v.32 no.6 / pp.570-579 / 2005
  • Facial expression recognition technology, which has potential applications in various fields, is being applied to man-machine interface development, human identification, restoration of facial expressions by virtual models, and so on. Using sequential facial images, this study proposes a simple method for detecting facial expressions such as happiness, anger, surprise, and sadness, even when the motion in the image sequence is not rigid. We identify the face and the elements of a facial expression, and then estimate the feature regions of those elements using information about color, size, and position. Next, the direction patterns of the feature regions of each element are determined using optical flows estimated by gradient methods. Each direction pattern is matched against the direction model proposed in this study, and a facial expression is identified as the combination of direction model and pattern matching with the minimum score. The experiments verify the validity of the proposed method.
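
The direction-pattern matching step can be sketched as below. The 4-bin quantization, the region names, and the tiny two-expression models are illustrative assumptions, not the study's actual direction model.

```python
import math

def direction_bin(dx, dy):
    """Quantize a flow vector into 4 direction bins: 0=right, 1=up, 2=left, 3=down."""
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int(((angle + math.pi / 4) % (2 * math.pi)) // (math.pi / 2))

def classify(region_flows, models):
    """Pick the expression whose direction model mismatches the observed
    per-region dominant flow directions the least (minimum score wins)."""
    observed = {r: direction_bin(dx, dy) for r, (dx, dy) in region_flows.items()}
    scores = {expr: sum(observed[r] != d for r, d in model.items())
              for expr, model in models.items()}
    return min(scores, key=scores.get)

# Toy direction models (illustrative): per-region expected direction bin
MODELS = {"surprise": {"brow": 1, "mouth": 3},
          "happiness": {"brow": 3, "mouth": 1}}
```

For example, upward brow flow combined with downward mouth flow matches the toy "surprise" model with zero mismatches.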

Face Detection Using A Selectively Attentional Hough Transform and Neural Network (선택적 주의집중 Hough 변환과 신경망을 이용한 얼굴 검출)

  • Choi, Il;Seo, Jung-Ik;Chien, Sung-Il
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.4 / pp.93-101 / 2004
  • A face boundary can be approximated by an ellipse with five parameters, which allows an ellipse detection algorithm to be adapted to face detection. However, constructing a huge five-dimensional parameter space for a Hough transform is quite impractical. Accordingly, we propose a selectively attentional Hough transform method for detecting faces from symmetric contours in an image. The idea is based on using a constant aspect ratio for the face, gradient information, and scan-line-based orientation decomposition, which decomposes the five-dimensional problem into a two-dimensional one that computes a center with a specific orientation and a one-dimensional one that estimates the short axis. In addition, a two-point selection constraint using geometric and gradient information is employed to increase speed and cope with cluttered backgrounds. After detecting candidate face regions with the proposed Hough transform, a multi-layer perceptron verifier is adopted to reject false positives. The proposed method was found to be relatively fast and promising.
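
The 2-D center-voting stage can be sketched with a toy accumulator: pairs of edge points with roughly opposite gradients, a symmetric-contour cue, vote for their midpoint as a candidate ellipse center. The pairing rule here is simplified; the paper's scan-line orientation decomposition and two-point selection constraint are omitted.

```python
from collections import Counter

def vote_centers(edge_points):
    """Each qualifying pair of edge points votes for its midpoint as a center.

    edge_points: list of ((x, y), (gx, gy)) with unit gradient vectors.
    Only pairs with nearly opposite gradients vote.
    """
    acc = Counter()
    pts = list(edge_points)
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            (p, g), (q, h) = pts[i], pts[j]
            if g[0] * h[0] + g[1] * h[1] < -0.9:        # opposite gradients
                mid = ((p[0] + q[0]) // 2, (p[1] + q[1]) // 2)
                acc[mid] += 1
    return acc.most_common(1)[0][0]                     # accumulator peak
```

On edge points sampled from an ellipse, opposite-gradient pairs are diametrically opposed, so their midpoints pile up at the true center.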

A Study on Real-time Tracking Method of Horizontal Face Position for Optimal 3D T-DMB Content Service (지상파 DMB 단말에서의 3D 컨텐츠 최적 서비스를 위한 경계 정보 기반 실시간 얼굴 수평 위치 추적 방법에 관한 연구)

  • Kang, Seong-Goo;Lee, Sang-Seop;Yi, June-Ho;Kim, Jung-Kyu
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.6 / pp.88-95 / 2011
  • An embedded mobile device usually has less computing power than a general-purpose computer because of its relatively low system specifications. Consequently, conventional face tracking and detection methods, which require complex algorithms for higher recognition rates, are unsuitable for real-time detection in a mobile environment. On the other hand, a real-time tracking and detection algorithm enables a two-way interactive multimedia service between a user and a mobile device, providing far better quality of service than a one-way service. It is therefore necessary to develop real-time face and eye tracking techniques optimized for mobile environments. For this reason, this paper proposes a method of tracking the horizontal face position of a user on a T-DMB device to enhance the quality of 3D DMB content. The proposed method uses edge orientation to estimate the left and right boundaries of the face, and then uses color edge information to determine the horizontal position and size of the face. The Sobel gradient vector is projected vertically and candidate face boundaries are selected; we also propose a smoothing method and a peak-detection method for the precise final decision. Because general face detection algorithms use multi-scale feature vectors, their detection time is too long in a mobile environment; the proposed single-scale detection method detects the face faster than conventional face detection methods.
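
The projection-and-peak stage can be sketched as: sum edge magnitudes column-wise, smooth the 1-D profile with a moving average, and take the two strongest local peaks as the left/right face boundaries. The window size and the peak rule here are assumptions standing in for the paper's smoothing and peak-detection methods.

```python
def face_boundaries(edge_mag, win=3):
    """edge_mag: 2-D list of edge magnitudes (rows x cols).
    Returns the column indices of the two strongest smoothed peaks."""
    cols = len(edge_mag[0])
    proj = [sum(row[c] for row in edge_mag) for c in range(cols)]  # vertical projection
    half = win // 2
    smooth = [sum(proj[max(0, c - half):c + half + 1]) /
              len(proj[max(0, c - half):c + half + 1]) for c in range(cols)]
    # local maxima of the smoothed profile
    peaks = [c for c in range(1, cols - 1)
             if smooth[c] >= smooth[c - 1] and smooth[c] > smooth[c + 1]]
    peaks.sort(key=lambda c: smooth[c], reverse=True)
    left, right = sorted(peaks[:2])
    return left, right
```

A single-scale 1-D scan like this avoids the multi-scale sliding-window cost that the abstract identifies as the bottleneck on mobile hardware.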

Optimal Facial Emotion Feature Analysis Method based on ASM-LK Optical Flow (ASM-LK Optical Flow 기반 최적 얼굴정서 특징분석 기법)

  • Ko, Kwang-Eun;Park, Seung-Min;Park, Jun-Heong;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.4 / pp.512-517 / 2011
  • In this paper, we propose an Active Shape Model (ASM) and Lucas-Kanade (LK) optical-flow-based feature extraction and analysis method for analyzing emotional features in facial images. Since the facial emotion feature regions are described by the Facial Action Coding System, we construct feature-related shape models from combinations of landmarks and extract LK optical flow vectors at each landmark, based on the center pixels of the motion-vector window. The facial emotion features are modeled as combinations of the optical flow vectors, and the emotional state of a facial image can be estimated by a probabilistic technique such as a Bayesian classifier. We also extract the optimal emotional features, those with high correlation between feature points and emotional states, using common spatial pattern (CSP) analysis in order to improve the efficiency and accuracy of the emotional feature extraction process.
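
The per-landmark LK step solves a 2×2 linear system (the normal equations) built from image gradients inside the window. A minimal solver sketch follows; the gradients in the example are constructed synthetically rather than taken from ASM landmarks.

```python
def lk_flow(Ix, Iy, It):
    """Solve the Lucas-Kanade normal equations for one window.

    Ix, Iy, It: flat lists of x-gradient, y-gradient, and temporal
    difference at each pixel of the window. Returns the flow (u, v)
    solving  [[Sxx, Sxy], [Sxy, Syy]] [u, v]^T = [-Sxt, -Syt]^T.
    """
    sxx = sum(ix * ix for ix in Ix)
    syy = sum(iy * iy for iy in Iy)
    sxy = sum(ix * iy for ix, iy in zip(Ix, Iy))
    sxt = sum(ix * it for ix, it in zip(Ix, It))
    syt = sum(iy * it for iy, it in zip(Iy, It))
    det = sxx * syy - sxy * sxy          # assumes a well-conditioned window
    u = (-syy * sxt + sxy * syt) / det
    v = (sxy * sxt - sxx * syt) / det
    return u, v
```

If the temporal difference satisfies the brightness-constancy linearization It = -(Ix·u + Iy·v), the solver recovers (u, v) exactly, which is a convenient sanity check.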

Efficient Object Selection Algorithm by Detection of Human Activity (행동 탐지 기반의 효율적인 객체 선택 알고리듬)

  • Park, Wang-Bae;Seo, Yung-Ho;Doo, Kyoung-Soo;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.3 / pp.61-69 / 2010
  • This paper presents an efficient object selection algorithm based on detecting and analyzing human activity. Generally, when people point at something, they face the target direction, so the direction of the face and the pointing finger can be connected by a straight line. First, to detect moving objects in the input frames, we extract the objects of interest in real time using background subtraction. Whether the user is moving is then judged by Principal Component Analysis over a designated time period. When the user is motionless, we estimate what the user is indicating from the vector from the head to the hand. Experiments using multiple views confirm that the proposed algorithm can estimate the user's movement and indication efficiently.
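
The indication step, choosing the object that lies closest to the ray cast from the head through the hand, can be sketched as follows. The 3-D coordinates and the nearest-to-ray selection rule are illustrative assumptions.

```python
import math

def select_object(head, hand, objects):
    """Return the candidate object closest to the head->hand pointing ray."""
    d = [hand[i] - head[i] for i in range(3)]
    norm = math.sqrt(sum(c * c for c in d))
    u = [c / norm for c in d]                              # unit pointing direction

    def dist_to_ray(p):
        v = [p[i] - head[i] for i in range(3)]
        t = max(0.0, sum(v[i] * u[i] for i in range(3)))   # clamp: ignore behind head
        closest = [head[i] + t * u[i] for i in range(3)]
        return math.dist(p, closest)

    return min(objects, key=dist_to_ray)
```

Clamping the ray parameter at zero ensures objects behind the user are never preferred, even if they sit near the extended line.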

New Scheme for Smoker Detection (흡연자 검출을 위한 새로운 방법)

  • Lee, Jong-seok;Lee, Hyun-jae;Lee, Dong-kyu;Oh, Seoung-jun
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.9 / pp.1120-1131 / 2016
  • In this paper, we propose a smoker recognition algorithm that detects smokers in a video sequence in order to prevent fire accidents. We use a description-based, hierarchical approach to recognize a smoker's activity; the algorithm consists of background subtraction, object detection, event search, and event judgment. Background subtraction generates slow-motion and fast-motion foreground images from the input image using a Gaussian mixture model with two different learning rates. It then extracts object locations in the slow-motion image using chain-rule-based contour detection. For each object, the face is detected using Haar-like features, smoke is detected from the frequency and direction of smoke in the fast-motion foreground, and hand movements are detected by motion estimation. The algorithm examines these features over a certain interval and infers whether the object is a smoker. It can robustly detect a smoker among different objects while achieving real-time performance.
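
The dual-learning-rate idea can be sketched with exponential running averages instead of a full Gaussian mixture model (the rates below are illustrative): the fast model absorbs new static content quickly, so comparing the two foregrounds separates slow scene changes, such as a standing person, from fast ones, such as drifting smoke or moving hands.

```python
def update_background(bg, frame, rate):
    """Exponential running-average background update (per pixel)."""
    return [(1.0 - rate) * b + rate * f for b, f in zip(bg, frame)]

def foreground(bg, frame, thresh=2.0):
    """Binary foreground mask: pixels deviating from the background model."""
    return [abs(f - b) > thresh for b, f in zip(bg, frame)]

# Toy 1-D "frames": a bright object appears at pixel 2 and then stays put.
bg_slow = [0.0] * 5
bg_fast = [0.0] * 5
for _ in range(10):
    frame = [0.0, 0.0, 10.0, 0.0, 0.0]
    bg_slow = update_background(bg_slow, frame, rate=0.01)
    bg_fast = update_background(bg_fast, frame, rate=0.5)
```

After ten frames the fast background has nearly absorbed the object while the slow one still flags it as foreground, which is the contrast the algorithm exploits.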

A study on the facial palsy patients' use of Western-Korean collaborative treatment: Using Health Insurance Review & Assessment Service-National Patients Sample (얼굴마비 환자의 의·한의 협진 의료이용 연구: 건강보험심사평가원 환자표본 데이터를 이용)

  • Park, Hyo Sung;Uhm, Tae Woong;Kim, Nam Kwen
    • Journal of the Korean Data and Information Science Society / v.28 no.1 / pp.75-86 / 2017
  • Facial palsy is one of the most common illnesses treated by Western-Korean collaborative treatment (hereinafter, "collaborative treatment"). The purpose of this study is to analyze facial palsy patients' patterns of collaborative treatment use. By analyzing the 2014 National Health Insurance Review and Assessment Patient Sample Data (NPS 2014) with the episode of care as the unit, we found the following. First, collaborative treatment is preferred by patients over 50 years old and by women. Second, Western medicine mainly focuses on diagnosis and medical examination, while Korean medicine and collaborative treatment focus more on treatment activity. Third, Western medicine showed the shortest treatment period, followed by Korean medicine and collaborative treatment; however, the cost of medical treatment per day was highest in Western medicine. The analysis of collaborative treatment use patterns in this study is expected to provide direction for the development of clinical practice guidelines and the establishment of medical policies in the future.

A Driver's Condition Warning System using Eye Aspect Ratio (눈 영상비를 이용한 운전자 상태 경고 시스템)

  • Shin, Moon-Chang;Lee, Won-Young
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.2 / pp.349-356 / 2020
  • This paper introduces the implementation of a driver's condition warning system that uses the eye aspect ratio to prevent car accidents. The proposed system consists of a camera to detect the eyes, a Raspberry Pi to process the eye information from the camera, and a buzzer and vibrator to warn the driver. To detect and recognize the driver's eyes, the histogram of oriented gradients and deep-learning-based face landmark estimation are used. The system first calculates the driver's eye aspect ratio from six coordinates around the eye, and obtains the eye aspect ratio values for the opened and closed eye. These two values are used to calculate the threshold needed to determine the eye state. Because the threshold is determined adaptively from the driver's own eye aspect ratios, the system can use the optimal threshold to determine the driver's condition. In addition, the system synthesizes an input image from the gray-scale and LAB-model images to operate in low lighting conditions.
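
The eye-aspect-ratio computation follows the standard six-landmark formulation. The adaptive threshold shown, a weighted point between the measured open and closed values, is an illustrative choice: the abstract says the threshold is derived from those two values but does not give the exact formula.

```python
import math

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).

    p1 and p4 are the horizontal eye corners; (p2, p6) and (p3, p5) are the
    upper/lower eyelid landmark pairs. EAR drops toward 0 as the eye closes.
    """
    return (math.dist(p2, p6) + math.dist(p3, p5)) / (2.0 * math.dist(p1, p4))

def adaptive_threshold(ear_open, ear_closed, weight=0.5):
    """Per-driver threshold between the measured open and closed EAR values
    (the midpoint by default -- the weighting is an assumption)."""
    return ear_closed + weight * (ear_open - ear_closed)
```

Calibrating the threshold from the driver's own open/closed measurements, rather than using a fixed constant, is what makes the eye-state decision robust across different eye shapes.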