• Title/Summary/Keyword: Facial Detection


Facial Region Tracking in YCbCr Color Coordinates (YCbCr 컬러 영상 변환을 통한 얼굴 영역 자동 검출)

  • Han, M.H.; Kim, K.S.; Yoon, T.H.; Shin, S.W.; Kim, I.Y.
    • Proceedings of the KIEE Conference / 2005.05a / pp.63-65 / 2005
  • In this study, an automatic face tracking algorithm is proposed that uses the color and edge information of a color image. To reduce the effect of variations in illumination, the acquired CCD color image is first transformed into YCbCr color coordinates; morphological image processing operations and elliptical geometric measures are then applied to extract the refined facial area.
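A minimal illustrative sketch (not the paper's code) of the pipeline the abstract outlines: convert to YCrCb, mask a heuristic skin-chrominance band, clean the mask morphologically, and keep the largest blob whose fitted ellipse is face-shaped. The Cr/Cb thresholds and the aspect-ratio bounds are assumed values.

```python
import cv2
import numpy as np

def detect_face_region(bgr_image):
    # Decouple luminance from chrominance to reduce illumination effects.
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Heuristic skin-tone band on the Cr/Cb channels (assumed thresholds).
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    # Morphological opening/closing removes speckle and fills small holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Keep the largest blob whose fitted ellipse has a face-like elongation.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for cnt in sorted(contours, key=cv2.contourArea, reverse=True):
        if len(cnt) < 5:              # fitEllipse needs at least 5 points
            continue
        _, axes, _ = cv2.fitEllipse(cnt)
        ratio = max(axes) / max(min(axes), 1e-6)
        if 1.1 <= ratio <= 2.0:
            return cv2.boundingRect(cnt)
    return None
```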


Real Time Face Detection and Recognition using Rectangular Feature based Classifier and Class Matching Algorithm (사각형 특징 기반 분류기와 클래스 매칭을 이용한 실시간 얼굴 검출 및 인식)

  • Kim, Jong-Min; Kang, Myung-A
    • The Journal of the Korea Contents Association / v.10 no.1 / pp.19-26 / 2010
  • This paper proposes a rectangular-feature-based classifier for detecting faces in real time. The goal is a robust detection algorithm that satisfies both computational efficiency and detection performance. The proposed algorithm consists of three stages: feature creation, classifier learning, and real-time facial region detection. Feature creation builds a feature set from the proposed five rectangular features and computes the feature values efficiently using SAT (Summed-Area Tables). Classifier learning builds classifiers hierarchically with the AdaBoost algorithm and achieves strong detection performance by passing important face patterns on to the next level. Real-time facial region detection then finds facial regions quickly and efficiently with the trained rectangular-feature classifier. In addition, the recognition rate is improved by feeding the detected face region as the input image to PCA and KNN algorithms and by using a class-to-class rather than the conventional point-to-point matching technique.
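The computational core named in the abstract is the summed-area table, which lets any rectangular feature be evaluated with four lookups. The sketch below shows that idea with one example two-rectangle feature; it is illustrative only and does not reproduce the paper's five-feature set, its AdaBoost cascade, or the PCA/KNN matching stage.

```python
import numpy as np

def integral_image(gray):
    # Summed-area table with a zero row/column so rect_sum needs no bounds checks.
    sat = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.int64)
    sat[1:, 1:] = np.cumsum(np.cumsum(gray, axis=0), axis=1)
    return sat

def rect_sum(sat, x, y, w, h):
    # Sum of gray[y:y+h, x:x+w] from four table lookups: D - B - C + A.
    return int(sat[y + h, x + w] - sat[y, x + w] - sat[y + h, x] + sat[y, x])

def two_rect_feature(sat, x, y, w, h):
    # Example Haar-like feature: left half of the window minus its right half.
    half = w // 2
    return rect_sum(sat, x, y, half, h) - rect_sum(sat, x + half, y, w - half, h)
```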

Design of Middleware for Face Recognition based on WIPI Platform (WIPI 플랫폼 기반 얼굴인식 미들웨어 설계)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems / v.11 no.3 / pp.117-127 / 2005
  • In step with the rapid development of mobile device technology, the number of mobile contents that exploit graphics or image processing on the device is increasing. In this paper, I design a middleware that supports face detection and recognition on WIPI (Wireless Internet Platform for Interoperability), the Korean standard mobile platform. The facial recognition middleware adopts object-oriented concepts so that it can be applied, using the mobile camera, to recognition-based security and other contents. Dividing the processing in this way reduces development time and cost, and the middleware can therefore be applied to content security or to technology transfer to other companies. The middleware consists of a face detection module and a face recognition module, and an application content design method based on the WIPI platform is also proposed.
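A hypothetical sketch of the two-module split the abstract describes (a face detection module and a face recognition module behind one middleware facade). The class and method names, the OpenCV cascade detector, and the template matcher are stand-ins for illustration, not the paper's WIPI API.

```python
import cv2
import numpy as np

class FaceDetectionModule:
    def __init__(self):
        # OpenCV's bundled frontal-face cascade stands in for the detector.
        self.cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect(self, gray):
        return self.cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

class FaceRecognitionModule:
    def __init__(self, gallery):
        # gallery: dict mapping a name to a flattened, L2-normalized 64x64 face template
        self.gallery = gallery

    def identify(self, face_gray):
        probe = cv2.resize(face_gray, (64, 64)).astype(np.float32).ravel()
        probe /= np.linalg.norm(probe) + 1e-9
        # Nearest gallery template by cosine similarity (placeholder matcher).
        return max(self.gallery, key=lambda name: float(self.gallery[name] @ probe))

class FaceMiddleware:
    """Facade that camera-based contents would call into."""
    def __init__(self, gallery):
        self.detector = FaceDetectionModule()
        self.recognizer = FaceRecognitionModule(gallery)

    def process_frame(self, bgr):
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
        return [((x, y, w, h), self.recognizer.identify(gray[y:y + h, x:x + w]))
                for (x, y, w, h) in self.detector.detect(gray)]
```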


Real-Time Automatic Human Face Detection and Recognition System Using Skin Colors of Face, Face Feature Vectors and Facial Angle Informations (얼굴피부색, 얼굴특징벡터 및 안면각 정보를 이용한 실시간 자동얼굴검출 및 인식시스템)

  • Kim, Yeong-Il; Lee, Eung-Ju
    • The KIPS Transactions:PartB / v.9B no.4 / pp.491-500 / 2002
  • In this paper, we propose a real-time face detection and recognition system that uses skin color information, geometrical face feature vectors, and facial angle information from a color face image. The proposed algorithm improves face region extraction by using skin color information in the HSI color coordinate system together with face edge information. It also improves face recognition by using geometrical feature vectors of the face and facial angles computed from the extracted face region image. In experiments, the proposed algorithm shows better recognition performance as well as better face region extraction than conventional methods.
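A minimal sketch of the extraction stage only, under assumptions: HSV is used here as a convenient stand-in for the HSI coordinates the paper uses, the skin thresholds are generic, and the geometric feature vectors and facial-angle computation are not reproduced.

```python
import cv2
import numpy as np

def extract_face_candidates(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Heuristic hue/saturation band for skin tones (assumed thresholds).
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    skin = cv2.morphologyEx(skin, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    # Edge information helps separate the face from similarly colored regions.
    edges = cv2.Canny(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY), 80, 160)
    skin[edges > 0] = 0                      # cut the skin mask along strong edges
    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 1500]
```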

A Study on Automatic Detection of The Face and Facial Features for Face Recognition System in Real Time (실시간 얼굴인식 시스템을 위한 얼굴의 위치 및 각 부위 자동 검출에 관한 연구)

  • 구자일; 홍준표
    • Journal of the Institute of Electronics Engineers of Korea TE / v.39 no.4 / pp.379-388 / 2002
  • In this paper, a real-time algorithm is proposed for automatic detection of the face and facial features. Within the face region, the eyes, nose, mouth, and other features are extracted by two methods: one uses the location information of the features, and the other uses Gaussian second-derivative filters. The system achieves high speed and accuracy because facial feature extraction is performed only on the detected face region rather than on the whole image. Experimental results for the proposed algorithm include a high face detection rate of 95%, a processing time under 1 second, reduced sensitivity to illumination, and compensation for face tilt.
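A rough sketch of the second feature-extraction method the abstract mentions, applied only inside an already-detected face box: a Gaussian-second-derivative (Laplacian-of-Gaussian) response highlights dark blobs such as eyes, nostrils, and mouth corners. The sigma and percentile values are assumed parameters.

```python
import cv2
import numpy as np

def log_response(face_gray, sigma=2.0):
    # Gaussian smoothing followed by the Laplacian approximates a LoG filter.
    ksize = int(6 * sigma) | 1                      # odd kernel size
    smoothed = cv2.GaussianBlur(face_gray, (ksize, ksize), sigma)
    return cv2.Laplacian(smoothed, cv2.CV_32F)

def locate_dark_features(face_gray, percentile=99.5):
    log = log_response(face_gray)
    # Dark blobs on a brighter face give strong positive responses; keep the
    # strongest ones and return the centroid of each connected component.
    mask = (log > np.percentile(log, percentile)).astype(np.uint8)
    _, _, _, centroids = cv2.connectedComponentsWithStats(mask)
    return centroids[1:]                            # skip the background component
```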

A Lip Detection Algorithm Using Color Clustering (색상 군집화를 이용한 입술탐지 알고리즘)

  • Jeong, Jongmyeon
    • Journal of the Korea Society of Computer and Information / v.19 no.3 / pp.37-43 / 2014
  • In this paper, we propose a robust lip detection algorithm using color clustering. First, the AdaBoost algorithm is adopted to extract the facial region, which is then converted into the Lab color space. Because the a and b components of the Lab color space are known to express lip color and its complementary color well, we use them as the features for color clustering. Nearest-neighbour clustering is applied to separate the skin region from the facial region, and K-Means color clustering is applied to extract the lip-candidate region. Geometric characteristics are then used to extract the final lip region. Experimental results show that the proposed algorithm detects the lip region robustly.
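A minimal sketch along the lines of the abstract: detect the face with an AdaBoost cascade, restrict attention to the lower half of the face, and cluster the a/b chrominance values in Lab space with K-Means to separate lip-candidate pixels from skin. The cluster count, the lower-half crop, and the "largest mean a" rule are assumptions for illustration; the geometric post-processing is omitted.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def detect_lip_candidates(bgr, n_clusters=3):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY), 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    roi = bgr[y + h // 2: y + h, x: x + w]          # lips lie in the lower face half
    lab = cv2.cvtColor(roi, cv2.COLOR_BGR2Lab)
    ab = lab[:, :, 1:].reshape(-1, 2).astype(np.float32)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(ab)
    # Lip pixels tend to have the largest a (red-green) component.
    lip_cluster = int(np.argmax([ab[labels == k, 0].mean() for k in range(n_clusters)]))
    mask = (labels == lip_cluster).reshape(roi.shape[:2]).astype(np.uint8) * 255
    return mask, (x, y + h // 2, w, h - h // 2)
```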

Stress Detection System for Emotional Labor Based On Deep Learning Facial Expression Recognition (감정노동자를 위한 딥러닝 기반의 스트레스 감지시스템의 설계)

  • Og, Yu-Seon; Cho, Woo-hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.613-617 / 2021
  • With the growth of the service industry, stress among emotional labor workers has emerged as a social problem, and the so-called Emotional Labor Protection Act was implemented in 2018. However, the lack of substantial protection systems for emotional workers underscores the need for a digital stress management system. In this paper, we therefore propose a stress detection system for customer service representatives based on deep-learning facial expression recognition. The system consists of a real-time face detection module, an emotion classification FER module trained on large datasets that include Korean emotion images, and a monitoring module that visualizes only the stress levels. The system is designed to monitor stress and help prevent mental illness in emotional workers.
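A hypothetical end-to-end sketch of the three modules the abstract names (face detection, FER classification, monitoring). The FER model here is just a callable stub, not the authors' network, and mapping the frequency of negative expressions in a sliding window to a "stress level" is an illustrative assumption.

```python
import cv2
from collections import deque

NEGATIVE = {"angry", "sad", "fear", "disgust"}      # assumed label set

class StressMonitor:
    def __init__(self, fer_model, window=300):
        self.cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        self.fer = fer_model                   # any callable: face crop -> emotion label
        self.history = deque(maxlen=window)    # sliding window of recent emotions

    def update(self, frame_bgr):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in self.cascade.detectMultiScale(gray, 1.1, 5):
            self.history.append(self.fer(frame_bgr[y:y + h, x:x + w]))
        return self.stress_level()

    def stress_level(self):
        if not self.history:
            return 0.0
        # Fraction of recently observed negative expressions, shown on the dashboard.
        return sum(e in NEGATIVE for e in self.history) / len(self.history)
```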


Face Tracking System Using Updated Skin Color (업데이트된 피부색을 이용한 얼굴 추적 시스템)

  • Ahn, Kyung-Hee; Kim, Jong-Ho
    • Journal of Korea Multimedia Society / v.18 no.5 / pp.610-619 / 2015
  • In this paper, we propose a real-time face tracking system that uses an adaptive face detector and a tracking algorithm. An image is divided into background and face-candidate regions by a skin color model that is updated in real time, so that facial features can be detected accurately. Facial characteristics are extracted using five types of simple Haar-like features. The extracted features are re-interpreted by Principal Component Analysis (PCA), and the resulting principal components are classified into facial and non-facial areas by a Support Vector Machine (SVM). The movement of the face is tracked by a Kalman filter and Mean Shift, which use the static information of the detected faces and the differences between the previous and current frames. The proposed system identifies the initial skin color and updates it through the real-time color detection system; updating the skin color allows similarly colored background to be removed. Performance also increases by up to 20% when the background is suppressed, compared with extracting features from the entire region, and the use of the Kalman filter and Mean Shift improves both the detection rate and the speed.
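A compact sketch of the tracking stage described above: a hue histogram of the face window is back-projected each frame, Mean Shift relocates the window, and a constant-velocity Kalman filter smooths the center. The histogram blending rate (the "updated skin color") and the noise covariances are assumed parameters, and the Haar/PCA/SVM detection stage is not reproduced.

```python
import cv2
import numpy as np

def make_kalman(cx, cy):
    kf = cv2.KalmanFilter(4, 2)                # state: x, y, vx, vy; measurement: x, y
    kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                    [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.statePost = np.array([[cx], [cy], [0], [0]], np.float32)
    kf.errorCovPost = np.eye(4, dtype=np.float32)
    return kf

def track(frames, init_window, alpha=0.1):
    x, y, w, h = init_window
    kf = make_kalman(x + w / 2, y + h / 2)
    hist = None
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    for frame in frames:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        roi_hist = cv2.calcHist([hsv[y:y + h, x:x + w]], [0], None, [32], [0, 180])
        cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
        # Blend in the current window's hue histogram: the "updated skin color".
        hist = roi_hist if hist is None else (1 - alpha) * hist + alpha * roi_hist
        backproj = cv2.calcBackProject([hsv], [0], hist.astype(np.float32), [0, 180], 1)
        _, (x, y, w, h) = cv2.meanShift(backproj, (x, y, w, h), term)
        kf.predict()
        cx, cy = kf.correct(np.array([[x + w / 2], [y + h / 2]], np.float32))[:2].ravel()
        yield int(cx - w / 2), int(cy - h / 2), w, h    # Kalman-smoothed window
```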

CREATING JOYFUL DIGESTS BY EXPLOITING SMILE/LAUGHTER FACIAL EXPRESSIONS PRESENT IN VIDEO

  • Kowalik, Uwe; Hidaka, Kota; Irie, Go; Kojima, Akira
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.267-272 / 2009
  • Video digests provide an effective way of checking video content rapidly because of their very compact form. By watching a digest, users can easily decide whether a specific piece of content is worth seeing in full, so the impression created by the digest greatly influences the user's choice when selecting video content. We propose a novel method of automatic digest creation that evokes a joyful impression by exploiting smile/laughter facial expressions in the video as emotional cues of joy. We assume that a digest presenting smiling/laughing faces appeals to the user, since he or she can be assured that the smile/laughter expressions are caused by joyful events inside the video. For detecting smiling/laughing faces we developed a neural-network-based method for classifying facial expressions. Video segmentation is performed by automatic shot detection, and appropriate shots for a joyful digest are selected automatically by ranking shots according to the smile/laughter detection result. We report the results of user trials conducted to assess the visual impression of 'joyful' digests created automatically by our system. The results show that users tend to prefer emotional digests containing laughing faces, which suggests that the attractiveness of automatically created video digests can be improved by extracting emotional cues from the content through automatic facial expression analysis as proposed in this paper.
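A small sketch of only the digest-assembly step: given shot boundaries from shot detection and a per-shot smile/laughter score from the expression classifier (both assumed to exist upstream), rank shots by score and keep the most "joyful" ones until a target digest length is reached.

```python
def build_joyful_digest(shots, target_seconds):
    """shots: list of (start_sec, end_sec, smile_score) tuples."""
    ranked = sorted(shots, key=lambda s: s[2], reverse=True)
    digest, total = [], 0.0
    for start, end, _score in ranked:
        if total >= target_seconds:
            break
        digest.append((start, end))
        total += end - start
    # Present the selected shots in their original temporal order.
    return sorted(digest)
```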


Facial Region Segmentation using Watershed Algorithm based on Depth Information (깊이정보 기반 Watershed 알고리즘을 이용한 얼굴영역 분할)

  • Kim, Jang-Won
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.4 no.4 / pp.225-230 / 2011
  • In this paper, we propose a segmentation method for detecting the facial region that uses a watershed algorithm based on depth information together with a merge algorithm. The method consists of three steps: watershed segmentation, seed region detection, and merging. The input color image is segmented into small uniform regions by the watershed, and the facial region is then detected by merging the uniform regions under chromaticity and edge constraints. The problem of existing methods that rely only on chromaticity or only on edges can be solved by the proposed method. Computer simulations were performed to evaluate the proposed method, and the results show that it is superior for facial region segmentation.
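A minimal marker-based watershed sketch in the spirit of the first two steps (over-segmentation, then merging toward a face region). Since no depth map is assumed to be available here, distance-transform seeding replaces the paper's depth-based seeding, and the YCrCb skin band used for merging is a generic heuristic rather than the paper's chromaticity/edge constraints.

```python
import cv2
import numpy as np

def watershed_labels(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Sure foreground from the distance transform, sure background from dilation.
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
    sure_fg = sure_fg.astype(np.uint8)
    sure_bg = cv2.dilate(binary, np.ones((3, 3), np.uint8), iterations=3)
    unknown = cv2.subtract(sure_bg, sure_fg)
    _, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1                      # reserve label 1 for the background
    markers[unknown == 255] = 0                # unknown pixels get resolved by watershed
    return cv2.watershed(bgr, markers)         # label map; -1 marks region boundaries

def merge_skin_like(bgr, markers):
    # Merge watershed regions whose mean chromaticity falls inside a skin band.
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    face_mask = np.zeros(markers.shape, np.uint8)
    for label in np.unique(markers):
        if label <= 1:                         # skip boundaries and background
            continue
        region = markers == label
        cr, cb = ycrcb[:, :, 1][region].mean(), ycrcb[:, :, 2][region].mean()
        if 133 <= cr <= 173 and 77 <= cb <= 127:
            face_mask[region] = 255
    return face_mask
```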