• Title/Summary/Keyword: Lip Detection


Lip Detection from Real-time Image (실시간 영상으로부터 입술 검출에 관한 연구)

  • Kim, Jong-Su;Hahn, Sang-Il;Seo, Bo-Kug;Cha, Hyung-Tai
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.11a / pp.125-128 / 2009
  • In this paper, we propose a method for detecting the lip region from real-time images. The proposed method first removes unnecessary noise by detecting the skin-color range in the image, then detects the face using Haar-like features. Next, a lip candidate region is separated from the detected face region using the geometric information of the face, and the lip-color range is extracted using the proposed Cb and Cr criteria. Finally, Haar-like features are applied once more to the detected lip-color region to extract a more accurate lip region. Experiments show that the proposed algorithm achieves a higher detection rate and a wider range of applicability than existing algorithms.

  • PDF
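The Cb/Cr lip-color filtering step described in the abstract above can be sketched as follows. The BT.601 conversion constants are standard, but the specific Cb/Cr ranges below are illustrative placeholders, not the paper's actual thresholds, which the abstract does not give.

```python
import numpy as np

def rgb_to_cbcr(rgb):
    """Convert an HxWx3 uint8 RGB image to Cb/Cr planes (BT.601 constants)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def lip_color_mask(rgb, cb_range=(75, 130), cr_range=(150, 200)):
    """Binary mask of pixels whose Cb/Cr values fall in a lip-like range.
    The ranges are illustrative assumptions, not the paper's values."""
    cb, cr = rgb_to_cbcr(rgb)
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

In the paper's pipeline this mask would be computed only inside the lip candidate region cut from the detected face, and the surviving pixels would then be re-verified with Haar-like features.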

A Study on a New Pre-emphasis Method Using the Short-Term Energy Difference of Speech Signal (음성 신호의 다구간 에너지 차를 이용한 새로운 프리엠퍼시스 방법에 관한 연구)

  • Kim, Dong-Jun;Kim, Ju-Lee
    • The Transactions of the Korean Institute of Electrical Engineers D / v.50 no.12 / pp.590-596 / 2001
  • Pre-emphasis is an essential process in speech signal processing. The two widely used methods are the typical method, which uses a fixed value near unity, and the optimal method, which uses the autocorrelation ratio of the signal. This study proposes a new pre-emphasis method using the short-term energy difference of the speech signal, which can effectively compensate for the glottal source and lip radiation characteristics. Using the proposed pre-emphasis, speech analysis such as spectrum estimation and formant detection is performed, and the results are compared with those of the two conventional pre-emphasis methods. Speech analysis with 5 single vowels showed that the proposed method enhanced the spectral shapes, gave nearly constant formant frequencies, and avoided the overlapping of two adjacent formants. Comparison with FFT spectra verified these results and showed the accuracy of the proposed method. The computational complexity of the proposed method is about 50% of that of the optimal method.

  • PDF
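The two conventional baselines the abstract compares against can be sketched as below; the paper's own energy-difference method is not detailed in the abstract, so only the fixed-coefficient filter and the autocorrelation-ratio ("optimal") variant are shown, with illustrative names.

```python
import numpy as np

def preemphasis_fixed(x, alpha=0.97):
    """Typical pre-emphasis: y[n] = x[n] - alpha * x[n-1] with a fixed
    coefficient near unity."""
    x = np.asarray(x, dtype=float)
    return np.append(x[0], x[1:] - alpha * x[:-1])

def preemphasis_optimal(x):
    """'Optimal' pre-emphasis: the coefficient is the autocorrelation
    ratio R(1)/R(0) of the signal, recomputed per analysis frame."""
    x = np.asarray(x, dtype=float)
    r0 = np.dot(x, x)
    r1 = np.dot(x[1:], x[:-1])
    alpha = r1 / r0 if r0 > 0 else 0.0
    return preemphasis_fixed(x, alpha)
```

Since the optimal method needs the full autocorrelation terms R(0) and R(1) per frame, replacing them with a short-term energy difference is where the roughly 50% complexity saving claimed above would come from.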

Detection of Facial Direction for Automatic Image Arrangement (이미지 자동배치를 위한 얼굴 방향성 검출)

  • 동지연;박지숙;이환용
    • Journal of Information Technology Applications and Management / v.10 no.4 / pp.135-147 / 2003
  • With the development of multimedia and optical technologies, application systems using facial features have recently attracted increasing interest from researchers. Previous research efforts in face processing mainly use frontal images in order to recognize human faces visually and to extract facial expressions. However, applications such as image database systems, which support queries based on facial direction, and image arrangement systems, which place facial images automatically in digital albums, deal with the directional characteristics of a face. In this paper, we propose a method to detect facial direction using facial features. In the proposed method, the facial trapezoid is defined by detecting points for the eyes and the lower lip. Then the facial direction formula, which determines the right or left facial direction, is defined from statistical data about the ratio of the right and left areas of facial trapezoids. The proposed method can estimate the horizontal rotation of a face within an error tolerance of ${\pm}1.31^{\circ}$ and takes an average execution time of 3.16 sec.

  • PDF

Detection Method of Human Face, Facial Components and Rotation Angle Using Color Value and Partial Template (컬러정보와 부분 템플릿을 이용한 얼굴영역, 요소 및 회전각 검출)

  • Lee, Mi-Ae;Park, Ki-Soo
    • The KIPS Transactions:PartB / v.10B no.4 / pp.465-472 / 2003
  • For effective pre-processing of a face input image, it is necessary to detect each of the face components, calculate the face area, and estimate the rotation angle of the face. The proposed method of this study can produce a robust result under such conditions as different levels of illumination, variable face sizes, face rotation angles, and background colors similar to the skin color of the face. The first step of the proposed method detects the estimated face area using both adapted skin color information in the wide-band HSV color coordinates converted from RGB coordinates, and skin color information from a histogram. Using the results of the former processes, we can detect a lip area within the estimated face area. After estimating the rotation angle of the lip area along the X axis, the method determines the face shape based on face information. After detecting the eyes in the face area by matching a partial template made of both eyes, we can estimate the Y-axis rotation angle by calculating the eyes' locations in three-dimensional space with reference to the face area. Experiments on various face images verified the effectiveness of the proposed algorithm.

Facial Gaze Detection by Estimating Three Dimensional Positional Movements (얼굴의 3차원 위치 및 움직임 추정에 의한 시선 위치 추적)

  • Park, Gang-Ryeong;Kim, Jae-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.3 / pp.23-35 / 2002
  • Gaze detection is the task of locating the position on a monitor screen where a user is looking. In our work, we implement it with a computer vision system that places a single camera above a monitor while the user moves (rotates and/or translates) his face to gaze at different positions on the monitor. To detect the gaze position, we locate the facial region and facial features (both eyes, nostrils, and lip corners) automatically in 2D camera images. From the feature points detected in the initial images, we compute the initial 3D positions of those features by camera calibration and a parameter estimation algorithm. Then, when the user moves (rotates and/or translates) his face to gaze at one position on the monitor, the moved 3D positions of those features are computed from 3D rotation and translation estimation and an affine transform. Finally, the gaze position on the monitor is computed from the normal vector of the plane determined by the moved 3D positions of the features. In experiments with a 19-inch monitor, the RMS error between the computed gaze positions and the real ones is about 2.01 inches.
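The final step described above, deriving the gaze from the normal vector of the plane through the moved 3D feature points, can be illustrated as follows. The function names are hypothetical, and the monitor is simplified to the plane Z = 0; the paper's calibration and 3D motion estimation stages are not reproduced.

```python
import numpy as np

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three 3D feature points
    (e.g. eye and lip-corner positions)."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

def intersect_z_plane(origin, direction, z=0.0):
    """Intersection of the ray origin + t*direction with the plane Z = z,
    a stand-in for the monitor plane in front of the face."""
    direction = np.asarray(direction, dtype=float)
    t = (z - origin[2]) / direction[2]
    return np.asarray(origin, dtype=float) + t * direction
```

Casting the face-plane normal from a point on the face and intersecting it with the monitor plane yields the 2D gaze position on the screen.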

Detection of Facial Direction using Facial Features (얼굴 특징 정보를 이용한 얼굴 방향성 검출)

  • Park Ji-Sook;Dong Ji-Youn
    • Journal of Internet Computing and Services / v.4 no.6 / pp.57-67 / 2003
  • The recent rapid development of multimedia and optical technologies has brought great attention to application systems that process facial image features. Previous research efforts in facial image processing have mainly focused on the recognition of human faces and on facial expression analysis using frontal face images. Not much research has been carried out into image-based detection of face direction. Moreover, the existing approaches to detecting face direction, which normally use sequential images captured by a single camera, have the limitation that the frontal image must be given before any other images. In this paper, we propose a method to detect face direction using facial features, namely the facial trapezoid defined by the two eyes and the lower lip. Specifically, the proposed method forms a facial direction formula, defined with statistical data about the ratio of the right and left areas of the facial trapezoid, to identify whether the face is directed toward the right or the left. The proposed method can be effectively used for automatic photo arrangement systems, which often need to set different left or right margins of a photo according to the face direction of the person in the photo.

  • PDF

Orofacial Thermal Quantitative Sensory Testing (QST): A Study of Healthy Korean Women and Sex Difference

  • Ahn, Sung-Woo;Kim, Ki-Suk
    • Journal of Oral Medicine and Pain / v.40 no.3 / pp.96-101 / 2015
  • Purpose: Thermal sensory testing, an essential part of quantitative sensory testing (QST), has been recognized as a useful tool in the evaluation of trigeminal nerve function. Normative data for the orofacial region have been reported, but data on differences by test site, sex, and ethnicity are still insufficient. Thus, this study aimed to investigate the normal range of orofacial thermal QST data in healthy Korean women and to assess sex differences in thermal perception in the orofacial region. Methods: Thermal QST was conducted on 20 healthy women (mean age, 26.4 years; range, 21 to 34 years). The thermal thresholds (cold detection threshold, CDT; warm detection threshold, WDT; cold pain threshold, CPT; and heat pain threshold, HPT) were measured bilaterally at 5 trigeminal sites (the forehead, cheek, mentum, lower lip, and tongue tip). The normative thermal thresholds of women in the orofacial region were evaluated using one-way ANOVA and compared with previously reported data from 30 age- and site-matched healthy men (mean age, 26.1 years; range, 23 to 32 years) using two-way ANOVA. One experienced operator performed the tests for both sexes, and all tests were done under the same conditions except for time variability. Results: Women showed significant site differences for the CDT (p<0.001), WDT (p<0.001), and HPT (p=0.047) in the orofacial region. The CDT (p<0.001) and the CPT (p=0.007) showed significant sex differences, unlike the WDT and the HPT. Conclusions: Thermal sensory evaluation in the orofacial region should be considered in the context of site and sex, and the normative data in this study could be useful for assessing sensory abnormalities in the clinical setting.

A Simple Way to Find Face Direction (간단한 얼굴 방향성 검출방법)

  • Park Ji-Sook;Ohm Seong-Yong;Jo Hyun-Hee;Chung Min-Gyo
    • Journal of Korea Multimedia Society / v.9 no.2 / pp.234-243 / 2006
  • The recent rapid development of HCI and surveillance technologies has brought great interest in application systems that process faces. Much of the research effort in these systems has been focused primarily on areas such as face recognition, facial expression analysis, and facial feature extraction. However, not many approaches have been reported for face direction detection. This paper proposes a method to detect the direction of a face using a facial feature called the facial triangle, which is formed by the two eyebrows and the lower lip. Specifically, based on a single monocular view of the face, the proposed method introduces very simple formulas to estimate the horizontal or vertical rotation angle of the face. The horizontal rotation angle can be calculated from the ratio between the areas of the left and right facial triangles, while the vertical angle can be obtained from the ratio between the base and height of the facial triangle. Experimental results showed that our method obtains the horizontal angle within an error tolerance of ${\pm}1.68^{\circ}$, and that it performs better as the magnitude of the vertical rotation angle increases.

  • PDF
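The left/right area ratio underlying the horizontal direction formulas in the facial-triangle and facial-trapezoid papers above can be sketched as follows. The way the triangle is split at the lower-lip x-coordinate and all names here are illustrative; the statistical mapping from this ratio to a rotation angle in degrees is fit from data in the papers and is not reproduced.

```python
import numpy as np

def triangle_area(a, b, c):
    """Area of a 2D triangle via the shoelace formula."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) -
                     (c[0] - a[0]) * (b[1] - a[1]))

def horizontal_ratio(left_brow, right_brow, lower_lip):
    """Ratio of the left and right halves of the facial triangle formed by
    the two eyebrows and the lower lip, split by the vertical line through
    the lip point. A ratio near 1 suggests a frontal face; values far from
    1 suggest the face is turned left or right."""
    split_top = (lower_lip[0], (left_brow[1] + right_brow[1]) / 2.0)
    left = triangle_area(left_brow, split_top, lower_lip)
    right = triangle_area(split_top, right_brow, lower_lip)
    return left / right
```

For the vertical angle, the same landmarks give the triangle's base (eyebrow-to-eyebrow distance) and height (brow line to lip), whose ratio shrinks as the head tilts up or down.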

Strain Measurement and Failure Detection of Reinforced Concrete Beams Using Fiber Optic Michelson Sensors (광섬유 마이켈슨 센서에 의한 RC보의 변형률 측정 및 파손의 검출)

  • Kwon, Il-Bum;Huh, Yong-Hak;Park, Phi-Lip;Kim, Dong-Jin;Lee, Dong-Chun;Hong, Sung-Hyuk;Moon, Hahn-Gue
    • Journal of the Korea institute for structural maintenance and inspection / v.3 no.3 / pp.223-236 / 1999
  • The need to monitor and undertake remedial works on large structures has greatly increased in recent years due to the appearance of widespread faults in large structures, such as bridges and buildings, that are 20 or more years old. The health condition of such structures must be monitored continuously to maintain them, and for in-situ monitoring, sensors need to be embedded in the structures. Fiber optic sensors can be embedded in structures to obtain health information from inside them. The fiber sensor was constructed with $3{\times}3$ fiber couplers to sense multi-point strains and failure instants. Four RC (reinforced concrete) beams were fabricated, two of type A and two of type B. These beams were reinforced with reinforcing bars and tested under flexural loading. The behavior of the beams was measured simultaneously by the fiber optic sensors, electrical strain gauges, and an LVDT, and the states of the beams were interpreted from all of these signals. The experiments verified that the fiber optic sensors could measure the structural strains and failure instants of the RC beams, and the fiber sensors operated well until the failure of the beams. The flexural tests showed that the strains of the reinforcing steel bars can be used to monitor the health condition of the beams; in other words, two strains measured at the same point on the reinforcing bar can give information about the structural health status. The failure instants of the beams were also clearly detected from the filtered fiber optic signals.

  • PDF

Voice Activity Detection using Motion and Variation of Intensity in The Mouth Region (입술 영역의 움직임과 밝기 변화를 이용한 음성구간 검출 알고리즘 개발)

  • Kim, Gi-Bak;Ryu, Je-Woong;Cho, Nam-Ik
    • Journal of Broadcast Engineering / v.17 no.3 / pp.519-528 / 2012
  • Voice activity detection (VAD) is generally conducted by extracting features from the acoustic signal and applying a decision rule. The performance of such VAD algorithms driven by the input acoustic signal depends highly on the acoustic noise. When video signals are also available, the performance of VAD can be enhanced by using visual information, which is not affected by acoustic noise. Previous visual VAD algorithms usually use a single visual feature to detect lip activity, such as active appearance models, optical flow, or intensity variation. Based on an analysis of the weaknesses of each feature, we propose to combine an intensity change measure and the optical flow in the mouth region, which compensate for each other's weaknesses. To minimize the computational complexity, we develop simple measures that avoid statistical estimation or modeling. Specifically, the optical flow is the averaged motion vector of some grid regions, and the intensity variation is detected by simple thresholding. To extract the mouth region, we propose a simple algorithm that first detects the two eyes and then uses the intensity profile to detect the center of the mouth. Experiments show that the proposed combination of two simple measures achieves higher detection rates at a given false positive rate than methods that use a single feature.
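The idea of combining a motion measure with an intensity-variation measure in the mouth region can be sketched roughly as below, with the mean absolute frame difference standing in for the paper's grid-averaged optical flow. The thresholds and names are illustrative assumptions, not the paper's values.

```python
import numpy as np

def mouth_activity(frames, motion_thresh=2.0, intensity_thresh=3.0):
    """Per-frame speech/non-speech decision for a T x H x W sequence of
    grayscale mouth-region frames. Motion is approximated by the mean
    absolute frame difference (a cheap stand-in for grid-averaged optical
    flow); intensity variation is the change in mean brightness. A frame
    is 'active' if either measure exceeds its threshold, so each measure
    can compensate for the other's misses."""
    frames = np.asarray(frames, dtype=float)
    active = [False]  # the first frame has no predecessor
    for t in range(1, len(frames)):
        motion = np.mean(np.abs(frames[t] - frames[t - 1]))
        intensity_change = abs(frames[t].mean() - frames[t - 1].mean())
        active.append(bool(motion > motion_thresh or
                           intensity_change > intensity_thresh))
    return active
```

In practice the raw per-frame decisions would be smoothed over a short window before being fused with the acoustic VAD decision.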