• Title/Summary/Keyword: Facial Detection


Robust Face Alignment using Progressive AAM (점진적 AAM을 이용한 강인한 얼굴 윤곽 검출)

  • Kim, Dae-Hwan;Kim, Jae-Min;Cho, Seong-Won;Jang, Yong-Suk;Kim, Boo-Gyoun;Chung, Sun-Tae
    • The Journal of the Korea Contents Association / v.7 no.2 / pp.11-20 / 2007
  • AAM has been successfully applied to face alignment, but its performance is very sensitive to initial values. In this paper, we propose a face alignment method using a progressive AAM. The proposed method consists of two stages: a modelling and relation-derivation stage and a fitting stage. The modelling and relation-derivation stage first builds two AAM models, an inner-face AAM model and a whole-face AAM model, and then derives the relation matrix between the inner-face AAM parameter vector and the whole-face AAM parameter vector. The fitting stage proceeds progressively in two phases. In the first phase, the method finds the feature parameters for the inner facial feature points of a new face; in the second phase, it localizes the whole set of facial feature points of the new face using initial values estimated from the inner feature parameters obtained in the first phase and the relation matrix derived in the first stage. Experiments verify that the proposed progressive AAM-based face alignment is more robust with respect to pose and face background than conventional basic AAM-based face alignment.
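The relation-derivation step described above can be approximated with a simple least-squares fit. The sketch below is illustrative only (the parameter dimensions and training data are placeholders, not the authors' AAM models): it learns a linear relation matrix from inner-face AAM parameters to whole-face AAM parameters and uses it to produce initial values for whole-face fitting.

```python
import numpy as np

def derive_relation_matrix(P_inner, P_whole):
    """P_inner: (N, d_in) inner-face AAM parameters per training image.
    P_whole: (N, d_out) whole-face AAM parameters for the same images.
    Returns R, a (d_in + 1, d_out) matrix mapping inner parameters (plus bias)
    to whole-face parameters in the least-squares sense."""
    X = np.hstack([P_inner, np.ones((P_inner.shape[0], 1))])  # append bias column
    R, *_ = np.linalg.lstsq(X, P_whole, rcond=None)
    return R

def initial_whole_params(p_inner, R):
    """Map one fitted inner-face parameter vector to initial whole-face parameters."""
    return np.append(p_inner, 1.0) @ R

# Stand-in data only; real use would take parameter vectors from AAM fitting.
rng = np.random.default_rng(0)
P_inner = rng.normal(size=(100, 10))
P_whole = P_inner @ rng.normal(size=(10, 20)) + 0.01 * rng.normal(size=(100, 20))
R = derive_relation_matrix(P_inner, P_whole)
p0_whole = initial_whole_params(P_inner[0], R)   # initial values for whole-face fitting
```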

A Design on Face Recognition System Based on pRBFNNs by Obtaining Real Time Image (실시간 이미지 획득을 통한 pRBFNNs 기반 얼굴인식 시스템 설계)

  • Oh, Sung-Kwun;Seok, Jin-Wook;Kim, Ki-Sang;Kim, Hyun-Ki
    • Journal of Institute of Control, Robotics and Systems / v.16 no.12 / pp.1150-1158 / 2010
  • In this study, polynomial-based radial basis function neural networks (pRBFNNs) are proposed as the recognition part of an overall face recognition system that consists of a preprocessing part and a recognition part. The design methodology and procedure of the proposed pRBFNNs are presented as a solution to a high-dimensional pattern recognition problem. First, in the preprocessing part, a CCD camera is used to obtain picture frames in real time. Histogram equalization partially compensates for image distortion caused by natural and artificial illumination. The AdaBoost algorithm proposed by Viola and Jones is exploited to separate the facial image area from non-facial areas, and PCA is used as the feature extraction algorithm to reduce the dimension of the high-dimensional facial image data. Second, pRBFNNs are used to identify each person by recognizing his or her unique pattern. The proposed pRBFNNs architecture consists of three functional modules, the condition part, the conclusion part, and the inference part, organized as fuzzy rules in 'If-then' format. In the condition part of the fuzzy rules, the input space is partitioned with fuzzy C-means clustering. In the conclusion part, the connection weights of the pRBFNNs are represented as one of three kinds of polynomials: constant, linear, or quadratic. The coefficients of the connection weights are identified by back-propagation using gradient descent. The output of the pRBFNNs model is obtained by fuzzy inference in the inference part. The essential design parameters of the networks (including the learning rate, momentum coefficient, and fuzzification coefficient) are optimized by means of particle swarm optimization. The proposed pRBFNNs are applied to a real-time face recognition system and evaluated in terms of output performance and recognition rate.
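The preprocessing chain described in the abstract (histogram equalization, Viola-Jones face detection, PCA dimension reduction) can be sketched with standard OpenCV and scikit-learn calls. The cascade file, crop size, and component count below are assumptions, and the classifier stage (the pRBFNNs in the paper) is omitted.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA

def preprocess_face(frame_gray, face_cascade, size=(64, 64)):
    """Equalize illumination, detect a face with a Haar cascade, and return a flattened crop."""
    equalized = cv2.equalizeHist(frame_gray)                      # compensate for illumination
    faces = face_cascade.detectMultiScale(equalized, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                                         # keep the first detected face
    return cv2.resize(equalized[y:y + h, x:x + w], size).flatten()

# Usage sketch: build a matrix of flattened face crops, then reduce its
# dimensionality with PCA before training or applying a recognizer.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
# vectors = np.stack([v for v in (preprocess_face(g, cascade) for g in gray_images) if v is not None])
# features = PCA(n_components=50).fit_transform(vectors)
```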

Toward an integrated model of emotion recognition methods based on reviews of previous work (정서 재인 방법 고찰을 통한 통합적 모델 모색에 관한 연구)

  • Park, Mi-Sook;Park, Ji-Eun;Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.14 no.1 / pp.101-116 / 2011
  • Current research on emotion detection classifies emotions by using information from facial, vocal, and bodily expressions, or from physiological responses. This study reviewed three representative emotion recognition methods, each grounded in a psychological theory of emotion. First, the literature on emotion recognition methods based on facial expressions, which are supported by Darwin's theory, was reviewed. Second, emotion recognition methods based on physiological changes, which rely on James' theory, were reviewed. Lastly, emotion recognition based on multimodality (i.e., combinations of signals from the face, dialogue, posture, or the peripheral nervous system), supported by both Darwin's and James' theories, was reviewed. In each part, research findings were examined along with the theoretical background on which each method relies. This review identifies the need for an integrated model of emotion recognition methods to advance the way emotion is recognized. The integrated model suggests that emotion recognition methods need to include other physiological signals, such as brain responses or face temperature, be based on a multidimensional model, and take cognitive appraisal factors during emotional experience into consideration.

Development of Recognition Application of Facial Expression for Laughter Theraphy on Smartphone (스마트폰에서 웃음 치료를 위한 표정인식 애플리케이션 개발)

  • Kang, Sun-Kyung;Li, Yu-Jie;Song, Won-Chang;Kim, Young-Un;Jung, Sung-Tae
    • Journal of Korea Multimedia Society / v.14 no.4 / pp.494-503 / 2011
  • In this paper, we propose a facial expression recognition application for laughter therapy on a smartphone. It detects the face region in the front-camera image of a smartphone using the AdaBoost face detection algorithm, and then detects the lip region within the detected face. From the next frame onward, it does not re-detect the face; instead, it tracks the lip region detected in the previous frame using a three-step block matching algorithm. Because the size of the detected lip image varies with the distance between the camera and the user, the lip image is scaled to a fixed size. The effect of illumination variation is then minimized by applying bilateral-symmetry and histogram-matching illumination normalization. Finally, lip eigenvectors are computed using PCA (principal component analysis), and the laughter expression is recognized with a multilayer perceptron neural network. Experimental results show that the proposed method processes 16.7 frames per second and that the proposed illumination normalization reduces illumination variation better than existing methods, yielding better recognition performance.
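A minimal three-step block-matching tracker, similar in spirit to the lip tracking described above, can be written in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation; the step size and the sum-of-absolute-differences cost are assumptions.

```python
import numpy as np

def three_step_search(prev_frame, cur_frame, top_left, size, step=4):
    """Track the block at top_left (y, x) with shape size (h, w) from prev_frame
    into cur_frame using a coarse-to-fine 9-point search; returns the new top-left."""
    y0, x0 = top_left
    h, w = size
    template = prev_frame[y0:y0 + h, x0:x0 + w].astype(np.float32)
    by, bx = y0, x0
    while step >= 1:
        best = None
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = by + dy, bx + dx
                if y < 0 or x < 0 or y + h > cur_frame.shape[0] or x + w > cur_frame.shape[1]:
                    continue
                candidate = cur_frame[y:y + h, x:x + w].astype(np.float32)
                sad = np.abs(candidate - template).sum()   # sum of absolute differences
                if best is None or sad < best[0]:
                    best = (sad, y, x)
        _, by, bx = best
        step //= 2                                         # 4 -> 2 -> 1: three steps
    return by, bx
```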

Adaptive Skin Color Segmentation in a Single Image using Image Feedback (영상 피드백을 이용한 단일 영상에서의 적응적 피부색 검출)

  • Do, Jun-Hyeong;Kim, Keun-Ho;Kim, Jong-Yeol
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.3 / pp.112-118 / 2009
  • Skin color segmentation techniques have been widely used for face and hand detection and tracking in many applications, such as diagnosis systems based on facial information, human-robot interaction, and image retrieval systems. For video, the skin color model for a target is commonly updated every frame so that the target can be tracked robustly under illumination change. For a single image, however, most studies employ a fixed skin color model, which may result in a low detection rate or a high false positive rate. In this paper, we propose a novel method for effective skin color segmentation in a single image that iteratively modifies the segmentation conditions using image feedback from the skin color region segmented in the given image.
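The image-feedback idea can be illustrated with a small OpenCV sketch: start from a generic fixed skin range in YCrCb, then repeatedly re-estimate the chroma range from the pixels currently labeled as skin. The initial thresholds and the 2.5-sigma update rule below are assumptions, not the authors' exact conditions.

```python
import cv2
import numpy as np

def adaptive_skin_mask(bgr, iterations=3):
    """Segment skin in a single image, refining the color model from its own output."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    lo = np.array([0, 133, 77], np.uint8)      # generic initial skin range (Cr/Cb bounds)
    hi = np.array([255, 173, 127], np.uint8)
    mask = cv2.inRange(ycrcb, lo, hi)
    for _ in range(iterations):
        pixels = ycrcb[mask > 0]
        if len(pixels) == 0:
            break
        mean, std = pixels.mean(axis=0), pixels.std(axis=0)
        lo = np.maximum(mean - 2.5 * std, 0).astype(np.uint8)    # feed the segmented region
        hi = np.minimum(mean + 2.5 * std, 255).astype(np.uint8)  # back into the color model
        lo[0], hi[0] = 0, 255                  # leave luminance unconstrained
        mask = cv2.inRange(ycrcb, lo, hi)
    return mask
```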

Technical and Managerial Requirements for Privacy Protection Using Face Detection and Recognition in CCTV Systems (영상감시 시스템에서의 얼굴 영상 정보보호를 위한 기술적·관리적 요구사항)

  • Shin, Yong-Nyuo;Chun, Myung Geun
    • Journal of the Korea Institute of Information Security & Cryptology / v.24 no.1 / pp.97-106 / 2014
  • CCTV (closed-circuit television) is one of the most widely used physical security technologies: a video acquisition device installed at a specific point for various purposes. Recently, as CCTV capabilities have improved, facial recognition based on the video collected by CCTV has been under development. However, if these technologies are exploited, the risk of serious privacy infringement is high. In particular, services have emerged that deliver images of a particular space, taken by a camera, to a connected computer over the Internet in real time. The privacy law prescribes safety measures related to biometric templates. Accordingly, this paper suggests the technical and managerial requirements for protecting facial video information in video surveillance systems.

Effective Eye Detection for Face Recognition to Protect Medical Information (의료정보 보호를 위해 얼굴인식에 필요한 효과적인 시선 검출)

  • Kim, Suk-Il;Seok, Gyeong-Hyu
    • The Journal of the Korea institute of electronic communication sciences / v.12 no.5 / pp.923-932 / 2017
  • In this paper, we propose a GRNN (generalized regression neural network)-based eye and face recognition identification system to address the existing problem that facial movements and gaze changes make it difficult to identify the user. A Kalman filter and the structural information of facial features are used to determine the authenticity of the face: the future head location is estimated from the current location information, and the horizontal and vertical elements of the face are detected by histogram analysis with relatively fast processing time. In addition, an infrared illuminator is configured so that the pupil can be detected in real time, and the pupil is tracked to extract the gaze vector.
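For reference, a GRNN reduces to a Gaussian-weighted average of stored training targets. The sketch below shows only that estimator; the features, targets, and smoothing factor are placeholders, not values from the paper.

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    """GRNN estimate: Gaussian-weighted average of stored training targets."""
    d2 = np.sum((X_train - x) ** 2, axis=1)      # squared distances to stored patterns
    w = np.exp(-d2 / (2.0 * sigma ** 2))         # pattern-layer activations
    return (w @ y_train) / (np.sum(w) + 1e-12)   # summation / normalization layers

# Placeholder data: X_train could hold eye/face feature vectors and y_train the
# quantity being regressed (e.g., one gaze coordinate).
X_train = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
y_train = np.array([0.0, 1.0, 2.0])
print(grnn_predict(X_train, y_train, np.array([1.2, 0.9])))
```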

Effective Detection of Target Region Using a Machine Learning Algorithm (기계 학습 알고리즘을 이용한 효과적인 대상 영역 분할)

  • Jang, Seok-Woo;Lee, Gyungju;Jung, Myunghee
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.5 / pp.697-704 / 2018
  • Since the face in image content corresponds to personal information that can distinguish a specific person from others, it is important to accurately detect faces that are not hidden in an image. In this paper, we propose a method to accurately detect a face in input images using a deep learning algorithm, which is one of the machine learning methods. In the proposed method, an image input in the red-green-blue (RGB) color model is first converted to the luminance-chroma (YCbCr) color model; other regions are then removed using the learned skin color model, and only the skin regions are segmented. A CNN-based deep learning algorithm is then applied to robustly detect only the face region in the input image. Experimental results show that the proposed method segments facial regions from input images more efficiently. The proposed face-region detection method is expected to be useful in practical applications related to multimedia and shape recognition.
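The final classification stage could look like the following minimal PyTorch CNN, which scores candidate crops (e.g., skin-segmented regions resized to 64x64) as face vs. non-face. The architecture and input size are assumptions for illustration, not the network used in the paper.

```python
import torch
import torch.nn as nn

class FaceRegionNet(nn.Module):
    """Small CNN that scores 64x64 candidate crops as face vs. non-face."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(64 * 8 * 8, 2)       # face / non-face logits

    def forward(self, x):                                # x: (N, 3, 64, 64)
        return self.classifier(self.features(x).flatten(1))

# Usage sketch: crop skin-colored candidate regions, resize them to 64x64,
# normalize, and score them with the network.
logits = FaceRegionNet()(torch.randn(4, 3, 64, 64))
```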

Deep learning based face mask recognition for access control (출입 통제에 활용 가능한 딥러닝 기반 마스크 착용 판별)

  • Lee, Seung Ho
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.8 / pp.395-400 / 2020
  • Coronavirus disease 2019 (COVID-19) was identified in December 2019 in China and has spread globally, resulting in an ongoing pandemic. Because COVID-19 is spread mainly from person to person, everyone is required to wear a face mask in public. Nevertheless, many people still do not wear face masks despite official advice. This paper proposes a method to predict whether a human subject is wearing a face mask or not. In the proposed method, the two eye regions are detected, and the mask region (i.e., the face region below the two eyes) is predicted and extracted based on the two eye locations. For more accurate extraction of the mask region, the facial region is aligned by rotating it so that the line connecting the two eye centers becomes horizontal. The mask region extracted from the aligned face is fed into a convolutional neural network (CNN), which produces the classification result (with or without a mask). Experimental results on 186 test images show that the proposed method achieves a very high accuracy of 98.4%.
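The alignment and mask-region extraction steps can be sketched as follows with OpenCV. The eye coordinates are assumed to come from a separate eye detector, and cropping everything below the eye line is a simplification of the paper's procedure.

```python
import cv2
import numpy as np

def extract_mask_region(face_img, left_eye, right_eye):
    """Rotate so the eye line is horizontal, then return the face region below the eyes."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))     # tilt of the eye line
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    rot = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = face_img.shape[:2]
    aligned = cv2.warpAffine(face_img, rot, (w, h))      # eye line is now horizontal
    eye_y = int(center[1])
    return aligned[eye_y:, :]                            # candidate mask region for the CNN
```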

Robust Face and Facial Feature Tracking in Image Sequences (연속 영상에서 강인한 얼굴 및 얼굴 특징 추적)

  • Jang, Kyung-Shik;Lee, Chan-Hee
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.9 / pp.1972-1978 / 2010
  • The AAM (active appearance model) is one of the most effective ways to detect deformable 2D objects and is a kind of mathematical optimization method. Its cost function is convex because it is a least-squares function, but the search space is not convex, so a local minimum is not guaranteed to be the optimal solution. That is, unless the initial value is close to the global minimum, the fitting converges to a local minimum, making it difficult to detect the face contour correctly. In this study, an AAM-based face tracking algorithm is proposed that is robust to various lighting conditions and backgrounds. Eye detection is performed using SIFT and a genetic algorithm, and the detected eye positions are used as the AAM's initial matching information. Experiments verify that the proposed AAM-based face tracking method is more robust with respect to pose and face background than the conventional basic AAM-based face tracking method.
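One simple way to turn detected eye positions into an AAM initialization, in the spirit of the method above, is to compute the similarity transform that maps the mean shape's eye points onto the detected eyes. The sketch below assumes hypothetical mean-shape coordinates and does not include the paper's SIFT and genetic-algorithm eye detector.

```python
import numpy as np

def init_shape_from_eyes(mean_shape, mean_eyes, detected_eyes):
    """mean_shape: (K, 2) landmarks of the AAM mean shape; mean_eyes and
    detected_eyes: (2, 2) arrays holding the left and right eye centers."""
    src = mean_eyes[1] - mean_eyes[0]
    dst = detected_eyes[1] - detected_eyes[0]
    scale = np.linalg.norm(dst) / np.linalg.norm(src)
    angle = np.arctan2(dst[1], dst[0]) - np.arctan2(src[1], src[0])
    c, s = np.cos(angle), np.sin(angle)
    R = scale * np.array([[c, -s], [s, c]])              # similarity transform (no reflection)
    t = detected_eyes[0] - mean_eyes[0] @ R.T
    return mean_shape @ R.T + t                          # initial landmark positions for AAM fitting
```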