• Title/Summary/Keyword: Facial Detection

Search Results: 377

A Study on Real-Time Detection System of Facial Region using Color Channel (컬러채널 실시간 복합 얼굴영역 검출 시스템 연구)

  • 송선희;석경휴;정유선;박동석;배철수;나상동
• Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2004.05b
    • /
    • pp.463-467
    • /
    • 2004
  • In this paper, we present a detection method that uses color information to extract face candidate regions while compensating for the influence of external illumination, and then extracts specific features from the extracted candidate regions with a multi-channel skin color model. Considering that skin color is sensitive to external illumination, we use the YCrCb color model, which separates chrominance from luminance, and from the Green and Blue channel information we model the density of the skin color region, which is distributed over a narrow Cb-Cg range, with a Gaussian probability density model. We also show an eye region detection process that applies region restricting and an iterative thresholding algorithm to the face region, and present the results according to the illumination scheme of the real-time combined face detection system.

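The pipeline in this abstract, separating luminance from chrominance and then modeling the skin-color density with a Gaussian, can be sketched in NumPy roughly as follows. The skin mean, covariance, and threshold below are hypothetical illustration values, not figures from the paper:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an HxWx3 RGB image (0-255 floats) to Y, Cb, Cr channels
    using the ITU-R BT.601 conversion."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def skin_likelihood(cb, cr, mean, cov):
    """Gaussian probability density of (Cb, Cr) chrominance pairs under a
    skin-color model; the luminance Y is ignored, so lighting changes
    largely cancel out."""
    x = np.stack([cb.ravel(), cr.ravel()], axis=1) - mean
    inv = np.linalg.inv(cov)
    mahal = np.einsum('ij,jk,ik->i', x, inv, x)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    return (norm * np.exp(-0.5 * mahal)).reshape(cb.shape)

# Illustrative skin-chrominance statistics (hypothetical values):
mean = np.array([110.0, 155.0])                 # mean (Cb, Cr) of skin pixels
cov  = np.array([[80.0, 0.0], [0.0, 60.0]])     # chrominance covariance

img = np.full((4, 4, 3), [200.0, 140.0, 120.0])  # a skin-toned patch
y, cb, cr = rgb_to_ycbcr(img)
mask = skin_likelihood(cb, cr, mean, cov) > 1e-5  # candidate face pixels
```

In a real system the mean and covariance would be estimated from labeled skin pixels rather than fixed by hand.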

Analyzing the client's emotions and judging the effectiveness of counseling using a YOLO-based facial expression recognizer (YOLO 기반 표정 인식기를 활용한 내담자의 감정 분석 및 상담 효율성 판단)

  • Yoon, Kyung Seob;Kim, Minji
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2021.07a
    • /
    • pp.477-480
    • /
    • 2021
  • In this paper, we present a method for using an emotion-based facial expression recognition system, built on the deep learning object detection model YOLO, as an assistive tool during counseling. In addition, we used the dlib library, a machine learning toolkit, to improve the accuracy of expression recognition and emotion analysis by observing the eye shape of mask wearers. We expect this technology to expand into various fields at a time when platforms supporting online classes and video conferencing are flourishing due to the prolonged COVID-19 pandemic.


Vector-based Face Generation using Montage and Shading Method (몽타주 기법과 음영합성 기법을 이용한 벡터기반 얼굴 생성)

  • 박연출;오해석
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.6
    • /
    • pp.817-828
    • /
    • 2004
  • In this paper, we propose a vector-based face generation system that uses montage and shading methods and preserves the designer's (artist's) style. The proposed system automatically generates a character face similar to the human face using facial features extracted from a photograph. In addition, unlike previous face generation systems that use contours, the proposed system is based on color and composes the face from the facial features and shading extracted from a photograph. It therefore has the advantage of producing a more realistic face closer to the human face. Since the system is vector-based, the generated character face has no size limits or constraints, so its shape can be transformed freely and various facial expressions can be applied to the 2D face. Moreover, it is distinctive from other approaches in that the artist's impression is kept in the result just as it is.

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB
    • /
    • v.14B no.4
    • /
    • pp.311-320
    • /
    • 2007
  • This paper presents a vision-based 3D facial expression animation technique and system which provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking. However, head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that includes 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, with the non-parametric HT skin color model and template matching, we can detect the facial region efficiently from video frames. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced based on the optical flow method. For the facial expression cloning we utilize a feature-based method. The major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points are composed of head motion and facial expression information, the animation parameters which describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, the facial expression cloning is done by a two-step fitting process: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are changed by use of Radial Basis Functions (RBF).
From the experiments, we show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from input video images.
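The abstract's final step, propagating control-point displacements to the surrounding non-feature points with radial basis functions, can be sketched as follows. This is a minimal illustration assuming a Gaussian kernel and hand-picked toy coordinates; the paper's actual kernel and parameters are not specified here:

```python
import numpy as np

def rbf_deform(points, controls, displacements, sigma=0.5):
    """Propagate the displacements of a few control points to surrounding
    mesh points using Gaussian radial basis functions."""
    # Pairwise kernel between control points
    d = np.linalg.norm(controls[:, None, :] - controls[None, :, :], axis=2)
    K = np.exp(-(d / sigma) ** 2)
    # Solve for RBF weights that reproduce the control displacements exactly
    w = np.linalg.solve(K, displacements)
    # Evaluate the interpolant at the remaining mesh points
    d2 = np.linalg.norm(points[:, None, :] - controls[None, :, :], axis=2)
    return np.exp(-(d2 / sigma) ** 2) @ w

controls = np.array([[0.0, 0.0], [1.0, 0.0]])    # e.g. two mouth corners
disp = np.array([[0.0, 0.1], [0.0, -0.1]])       # their animation offsets
mesh = np.array([[0.0, 0.0], [0.5, 0.0]])        # points to deform
moved = mesh + rbf_deform(mesh, controls, disp)
```

At a control point the interpolant reproduces its displacement exactly, while a point midway between two opposite displacements stays put, which is the smooth blending behavior the cloning step relies on.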

Face Detection by Eye Detection with Progressive Thresholding

  • Jung, Ji-Moon;Kim, Tae-Chul;Wie, Eun-Young;Nam, Ki-Gon
• Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.1689-1694
    • /
    • 2005
  • Face detection plays an important role in face recognition, video surveillance, and human-computer interfaces. In this paper, we present a face detection system using eye detection with progressive thresholding from a digital camera. Face candidates are detected by using skin color segmentation in the YCbCr color space and are verified by detecting the eyes, which are located by iterative thresholding and correlation coefficients. Preprocessing includes histogram equalization, log transformation, and gray-scale morphology to emphasize the eye image. The distance between the eye candidate points generated by the progressively increasing threshold value is employed to extract the facial region, and the face detection process is repeated with the increasing threshold value. Experimental results show enhanced face detection in real time.

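The progressive-thresholding idea in this abstract, starting with a low threshold and raising it until dark eye candidates emerge, can be sketched as follows. The start value, step, and stopping criterion here are hypothetical, not the paper's:

```python
import numpy as np

def progressive_threshold(gray, start=20, step=10, target_frac=0.02):
    """Raise the threshold step by step until enough dark (eye-candidate)
    pixels survive; eyes are dark blobs, so starting low and increasing
    avoids picking up eyebrows and shadows all at once."""
    t = start
    while t <= 255:
        mask = gray < t
        if mask.mean() >= target_frac:   # enough candidate pixels found
            return t, mask
        t += step
    return 255, gray < 255

# Toy image: mostly bright skin with two dark "eye" spots
img = np.full((10, 10), 200.0)
img[3, 2] = img[3, 7] = 10.0
t, mask = progressive_threshold(img)
# mask now contains exactly the two dark eye candidates
```

A full system would then check the distance between the surviving candidate points, as the abstract describes, before accepting the face region.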

The Reduction Method of Facial Blemishes using Morphological Operation (모폴로지 연산을 이용한 얼굴 잡티 제거 기법)

  • Goo, Eun-jin;Heo, Woo-hyung;Kim, Mi-kyung;Cha, Eui-young
• Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2013.05a
    • /
    • pp.364-367
    • /
    • 2013
  • In this paper, we propose a method for reducing facial blemishes using morphological operations. First, we detect the skin region using the pixel data of each RGB channel image. We create histograms of the skin region's R, G, and B channels and save the three highest-frequency pixel values in each channel. After that, we find facial blemishes using the black-hat operation. The pixel value of each facial blemish is changed to the average of its own value, its 8-neighborhood pixel values, and the high-frequency pixel values, and the blemish pixels are blurred with a median filter. Testing with facial pictures containing blemishes, we show that correcting the facial skin by reducing blemishes is more effective than correcting it by brightening alone.

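The black-hat operation this abstract relies on is a morphological closing minus the original image, which responds strongly at small dark spots. A minimal NumPy sketch with a 3x3 structuring element (the threshold value is a hypothetical illustration):

```python
import numpy as np

def gray_dilate(img, k=3):
    """Grayscale dilation: each pixel becomes the max of its kxk neighborhood."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

def gray_erode(img, k=3):
    """Grayscale erosion: each pixel becomes the min of its kxk neighborhood."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out

def black_hat(img, k=3):
    """Black-hat = morphological closing minus the image; it fires on
    small dark spots such as blemishes."""
    closing = gray_erode(gray_dilate(img, k), k)
    return closing - img

skin = np.full((7, 7), 180.0)
skin[3, 3] = 120.0              # a dark blemish pixel
response = black_hat(skin)
blemish_mask = response > 30    # hypothetical threshold
```

Pixels flagged by the mask would then be replaced by the neighborhood/high-frequency average and median-filtered, as the abstract describes.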

Facial Expression Recognition Using SIFT Descriptor (SIFT 기술자를 이용한 얼굴 표정인식)

  • Kim, Dong-Ju;Lee, Sang-Heon;Sohn, Myoung-Kyu
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.2
    • /
    • pp.89-94
    • /
    • 2016
  • This paper proposed a facial expression recognition approach using SIFT features and an SVM classifier. SIFT is generally employed as a feature descriptor at key-points in object recognition. This paper, however, applied the SIFT descriptor as a feature vector for facial expression recognition. The facial features were extracted by applying the SIFT descriptor to each sub-block image without a key-point detection procedure, and the facial expression recognition was performed using an SVM classifier. The performance evaluation was carried out through comparison with binary pattern feature-based approaches such as LBP and LDP, using the CK facial expression database and the JAFFE facial expression database. The experimental results show that the proposed method using the SIFT descriptor achieved performance improvements of 6.06% and 3.87% over previous approaches for the CK database and the JAFFE database, respectively.
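The core idea of the approach, computing gradient-orientation descriptors densely on sub-blocks instead of at detected key-points, can be sketched as follows. This is a simplified stand-in for a full SIFT descriptor (no spatial 4x4 cells or trilinear weighting), and the grid and bin counts are illustrative:

```python
import numpy as np

def block_orientation_histogram(gray, n_blocks=2, n_bins=8):
    """Dense SIFT-style feature: split the image into a grid of sub-blocks
    and histogram gradient orientations (weighted by magnitude) in each,
    with no key-point detection step. Returns one fixed-length vector
    suitable for an SVM."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    h, w = gray.shape
    bh, bw = h // n_blocks, w // n_blocks
    feats = []
    for bi in range(n_blocks):
        for bj in range(n_blocks):
            m = mag[bi*bh:(bi+1)*bh, bj*bw:(bj+1)*bw].ravel()
            a = ang[bi*bh:(bi+1)*bh, bj*bw:(bj+1)*bw].ravel()
            hist, _ = np.histogram(a, bins=n_bins, range=(0, 2*np.pi), weights=m)
            norm = np.linalg.norm(hist)
            feats.append(hist / norm if norm > 0 else hist)
    return np.concatenate(feats)

rng = np.random.default_rng(0)
face = rng.random((16, 16))          # stand-in for a face crop
vec = block_orientation_histogram(face)   # 2x2 blocks x 8 bins = 32-dim
```

Each face image maps to one such vector, and the vectors are then fed to an SVM classifier for expression labels.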

A New Anchor Shot Detection System for News Video Indexing

  • Lee, Han-Sung;Im, Young-Hee;Park, Joo-Young;Park, Dai-Hee
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.1
    • /
    • pp.133-138
    • /
    • 2008
  • In this paper, we propose a novel anchor shot detection system, named MASD (Multi-phase Anchor Shot Detection), which is a core step of the preprocessing process for news video analysis. The proposed system is composed of four modules that operate sequentially: 1) a skin color detection module for reducing the candidate face regions; 2) a face detection module for finding the key-frames with facial data; 3) a vector representation module for the key-frame images using non-negative matrix factorization; 4) a one-class SVM module for determining the anchor shots using a support vector data description. Besides the qualitative analysis, our experiments validate that the proposed system shows not only accuracy comparable to recently developed methods but also a faster detection rate.
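The third module's non-negative matrix factorization can be sketched with the standard Lee-Seung multiplicative updates; the rank, iteration count, and random data below are illustrative, not the paper's settings:

```python
import numpy as np

def nmf(V, rank=2, iters=200, eps=1e-9):
    """Factor a non-negative matrix V ~ W @ H via multiplicative updates.
    In the anchor-shot setting, each column of V would be a vectorized
    key-frame image and H its low-dimensional representation."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        # Multiplicative updates keep W and H non-negative throughout
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(1).random((6, 8)))  # toy non-negative data
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)  # relative fit error
```

The columns of H would then be passed to the one-class SVM (support vector data description) module for anchor/non-anchor decisions.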

A Study on Enhancing the Performance of Detecting Lip Feature Points for Facial Expression Recognition Based on AAM (AAM 기반 얼굴 표정 인식을 위한 입술 특징점 검출 성능 향상 연구)

  • Han, Eun-Jung;Kang, Byung-Jun;Park, Kang-Ryoung
    • The KIPS Transactions:PartB
    • /
    • v.16B no.4
    • /
    • pp.299-308
    • /
    • 2009
  • AAM (Active Appearance Model) is an algorithm that extracts facial feature points with statistical models of shape and texture information based on PCA (Principal Component Analysis). This method is widely used for face recognition, face modeling, and expression recognition. However, the detection performance of the AAM algorithm is sensitive to initial values, and the method has the problem that the detection error increases when an input image is quite different from the training data. In particular, the algorithm shows high accuracy for closed lips, but the detection error increases for opened or deformed lips that follow the user's facial expression. To solve these problems, we propose an improved AAM algorithm using lip feature points which are extracted by a new lip detection algorithm. In this paper, we select a searching region based on the facial feature points which are detected by the AAM algorithm. Lip corner points are then extracted by using Canny edge detection and a histogram projection method in the selected searching region. The lip region is then accurately detected by combining color and edge information of the lip in the searching region, which is adjusted based on the positions of the detected lip corners. Based on that, the accuracy and processing speed of lip detection are improved. Experimental results showed that the RMS (Root Mean Square) error of the proposed method was reduced by as much as 4.21 pixels compared to using the AAM algorithm alone.
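The histogram projection step for locating lip corners can be sketched as follows: sum an edge map along each axis and take the extreme non-zero columns inside the searching region. The toy edge map is illustrative; a real system would feed in a Canny edge image:

```python
import numpy as np

def lip_corners_from_projection(edge_map):
    """Locate left/right lip corners as the extreme columns whose vertical
    edge projection is non-zero inside the search region; the corner row is
    the strongest edge response within each extreme column."""
    col_proj = edge_map.sum(axis=0)     # histogram projection onto the x-axis
    cols = np.nonzero(col_proj)[0]
    left, right = cols[0], cols[-1]
    return (int(edge_map[:, left].argmax()), int(left)), \
           (int(edge_map[:, right].argmax()), int(right))

# Toy edge map with a lip-like outline: two corners plus upper/lower edges
edges = np.zeros((9, 15))
edges[4, 2] = edges[4, 12] = 1.0       # left and right corners
edges[2, 5:10] = edges[6, 5:10] = 1.0  # upper / lower lip edges
(lr, lc), (rr, rc) = lip_corners_from_projection(edges)
```

The detected corner positions would then be used to re-center the searching region before combining color and edge cues, as the abstract describes.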

Facial Gaze Detection by Estimating Three Dimensional Positional Movements (얼굴의 3차원 위치 및 움직임 추정에 의한 시선 위치 추적)

  • Park, Gang-Ryeong;Kim, Jae-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.3
    • /
    • pp.23-35
    • /
    • 2002
  • Gaze detection is to locate the position on a monitor screen where a user is looking. In our work, we implement it with a computer vision system in which a single camera is set above a monitor and the user moves (rotates and/or translates) his face to gaze at different positions on the monitor. To detect the gaze position, we locate the facial region and facial features (both eyes, nostrils, and lip corners) automatically in 2D camera images. From the movement of the feature points detected in the starting images, we compute the initial 3D positions of those features by camera calibration and a parameter estimation algorithm. Then, when the user moves (rotates and/or translates) his face to gaze at one position on the monitor, the moved 3D positions of those features can be computed from 3D rotation and translation estimation and an affine transform. Finally, the gaze position on the monitor is computed from the normal vector of the plane determined by the moved 3D positions of the features. As experimental results, we can obtain the gaze position on a 19-inch monitor, and the RMS error between the computed gaze positions and the real ones is about 2.01 inches.
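The final step, computing the gaze point from the normal of the plane through the moved 3D feature positions, can be sketched as follows. This assumes the monitor lies in the plane z = 0 of the same coordinate frame and casts the normal ray from the feature centroid; both are simplifying assumptions for illustration:

```python
import numpy as np

def gaze_point(p1, p2, p3, screen_z=0.0):
    """Gaze target on the monitor plane z = screen_z: cast a ray from the
    centroid of three facial feature points along the normal of the plane
    they define, and intersect it with the screen plane."""
    n = np.cross(p2 - p1, p3 - p1)
    n = n / np.linalg.norm(n)
    c = (p1 + p2 + p3) / 3.0
    if n[2] > 0:                  # make the normal point toward the screen
        n = -n
    t = (screen_z - c[2]) / n[2]  # ray parameter at the screen plane
    return c + t * n              # 3D point on the screen

# Face plane parallel to the screen, 50 cm away:
# the gaze lands at the centroid's (x, y) on the screen.
p1 = np.array([-3.0, 0.0, 50.0])   # e.g. left eye
p2 = np.array([ 3.0, 0.0, 50.0])   # e.g. right eye
p3 = np.array([ 0.0, 4.0, 50.0])   # e.g. lip midpoint
g = gaze_point(p1, p2, p3)
```

With head rotation, the three points tilt, the plane normal swings, and the intersection point moves across the screen accordingly.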