• Title/Summary/Keyword: robust face detection


A Study on Utilizing Smartphone for CMT Object Tracking Method Adapting Face Detection (얼굴 탐지를 적용한 CMT 객체 추적 기법의 스마트폰 활용 연구)

  • Lee, Sang Gu
    • The Journal of the Convergence on Culture Technology
    • /
    • v.7 no.1
    • /
    • pp.588-594
    • /
    • 2021
  • Due to the recent proliferation of video content, content previously expressed as text or still images is being replaced by video, and this growth is further boosted by emerging platforms. As this accelerated growth drives the popularization of the underlying technology, video production and editing tools once reserved for experts are now easily accessible to ordinary users. Thanks to these developments, tasks such as recording and adjusting that used to depend on manual human involvement can be automated through object tracking, as can the process of finding the object to record and keeping it centered on the screen. However, because selecting the object to be tracked remains a human responsibility, delays or mistakes can occur at this step. We therefore propose a novel object tracking technique that combines CMT with face detection using a Haar cascade classifier. The proposed system can serve as an effective and robust image tracking system for continuous, real-time object tracking on a smartphone.
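
The automation step described above, replacing manual target selection with face detection, can be sketched as follows. This is a minimal illustration, not the paper's implementation: in practice the candidate boxes would come from a detector such as OpenCV's Haar cascade (`cv2.CascadeClassifier(...).detectMultiScale(...)`), and the chosen box would seed the CMT tracker's keypoint region. The function name below is illustrative.

```python
# Minimal sketch of the automatic target-selection step: a face detector
# returns candidate boxes (x, y, w, h), and the largest box is chosen to
# initialize the tracker, removing the manual selection step.

def select_tracking_target(face_boxes):
    """Pick the largest detected face as the region to track.

    face_boxes: list of (x, y, w, h) tuples from a face detector.
    Returns the chosen box, or None if no face was detected.
    """
    if not face_boxes:
        return None
    return max(face_boxes, key=lambda b: b[2] * b[3])

# Example: two candidate detections; the larger one becomes the target.
boxes = [(10, 20, 40, 40), (100, 50, 80, 80)]
target = select_tracking_target(boxes)
```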

An Eye Location based Head Posture Recognition Method and Its Application in Mouse Operation

  • Chen, Zhe;Yang, Bingbing;Yin, Fuliang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.3
    • /
    • pp.1087-1104
    • /
    • 2015
  • An eye-location-based head posture recognition method is proposed in this paper. First, the face is detected using a skin color method, and the eyebrow and eye areas are located from the gray-level gradient within the face. Next, the pupil circles are determined using an edge-based circle detection method. Finally, head postures are recognized from the eye location information. The proposed method has high recognition precision, is robust to facial expressions and different head postures, and can be used for mouse operation. The experimental results confirm the validity of the proposed method.
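
One simple posture cue that can be derived from the located pupils is the in-plane roll angle, i.e. the tilt of the line joining the two pupil centers. This is only a hedged sketch of the geometry; the paper's full method uses more eye-location cues than this.

```python
import math

# Head roll from two pupil centers: the angle of the line from the left
# pupil to the right pupil, in image coordinates (y grows downward).

def roll_angle_deg(left_pupil, right_pupil):
    """Roll angle (degrees) of the line from the left to the right pupil."""
    dx = right_pupil[0] - left_pupil[0]
    dy = right_pupil[1] - left_pupil[1]
    return math.degrees(math.atan2(dy, dx))

# Level eyes give 0 degrees; a lower right pupil gives a positive tilt.
a = roll_angle_deg((100, 200), (180, 200))
b = roll_angle_deg((100, 200), (180, 210))
```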

Comparison of recognition rate with distance on stereo face images based on PCA (PCA기반의 스테레오 얼굴영상에서 거리에 따른 인식률 비교)

  • Park, Chang-Han;Namkung, Jae-Chan
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.1
    • /
    • pp.9-16
    • /
    • 2005
  • In this paper, we compare face recognition rates as the distance changes, using the Principal Component Analysis (PCA) algorithm with the left and right images of a stereo pair as input. In the proposed method, the color space is converted from RGB to YCbCr and the face region is detected. After the distance is obtained from the stereo images, the extracted face image is enlarged or reduced accordingly to extract a robust face region, and the recognition rate is measured using the PCA algorithm. The average recognition rates for the acquired face images were 98.61% (30 cm), 98.91% (50 cm), 99.05% (100 cm), 99.90% (120 cm), 97.31% (150 cm), and 96.71% (200 cm). The experiments therefore show that the proposed method achieves a high recognition rate when the face image is scaled up or down according to distance.
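
The PCA (eigenface) recognition step used above can be sketched in a few lines: training face vectors are mean-centered, the principal axes are taken from an SVD, and a probe is classified by nearest neighbor in the projected subspace. The data here are random stand-ins for face images, and the component count is arbitrary; this is not the paper's configuration.

```python
import numpy as np

# Eigenface-style PCA recognition sketch on synthetic "face" vectors.
rng = np.random.default_rng(0)
train = rng.normal(size=(6, 64))          # 6 face vectors, 64 pixels each
labels = [0, 0, 1, 1, 2, 2]

mean = train.mean(axis=0)
centered = train - mean
# Rows of vt are the principal axes (the eigenfaces).
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:3]                        # keep 3 components

def project(x):
    return (x - mean) @ components.T

train_proj = project(train)

def classify(probe):
    """Nearest-neighbor label in the PCA subspace."""
    d = np.linalg.norm(train_proj - project(probe), axis=1)
    return labels[int(np.argmin(d))]

# A probe identical to a training face maps back to its own label.
pred = classify(train[2])
```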

Improved Two-Phase Framework for Facial Emotion Recognition

  • Yoon, Hyunjin;Park, Sangwook;Lee, Yongkwi;Han, Mikyong;Jang, Jong-Hyun
    • ETRI Journal
    • /
    • v.37 no.6
    • /
    • pp.1199-1210
    • /
    • 2015
  • Automatic emotion recognition based on facial cues, such as facial action units (AUs), has received huge attention in the last decade due to its wide variety of applications. Current computer-based automated two-phase facial emotion recognition procedures first detect AUs from input images and then infer target emotions from the detected AUs. However, more robust AU detection and AU-to-emotion mapping methods are required to deal with the error accumulation problem inherent in the multiphase scheme. Motivated by our key observation that a single AU detector does not perform equally well for all AUs, we propose a novel two-phase facial emotion recognition framework, where the presence of AUs is detected by group decisions of multiple AU detectors and a target emotion is inferred from the combined AU detection decisions. Our emotion recognition framework consists of three major components: multiple AU detection, AU detection fusion, and AU-to-emotion mapping. The experimental results on two real-world face databases demonstrate an improved performance over the previous two-phase method using a single AU detector in terms of both AU detection accuracy and correct emotion recognition rate.
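
The framework's three components can be sketched as follows: several AU detectors vote on each action unit's presence, the votes are fused (here by simple majority, one possible group decision), and the fused AU set is mapped to an emotion by a lookup rule. The AU-to-emotion table below is a simplified illustration, not the paper's mapping or fusion rule.

```python
# Majority-vote fusion of multiple AU detectors, then rule-based mapping.

def fuse_au_votes(votes_per_detector):
    """Majority vote over per-detector AU decisions.

    votes_per_detector: list of dicts {au_id: bool}, one per detector.
    Returns the set of AUs judged present by more than half the detectors.
    """
    n = len(votes_per_detector)
    counts = {}
    for votes in votes_per_detector:
        for au, present in votes.items():
            counts[au] = counts.get(au, 0) + (1 if present else 0)
    return {au for au, c in counts.items() if c > n / 2}

# Toy emotion rules (illustrative only): e.g. AU6 + AU12 -> happiness.
EMOTION_RULES = {frozenset({6, 12}): "happiness",
                 frozenset({1, 4, 15}): "sadness"}

def map_emotion(aus):
    for rule, emotion in EMOTION_RULES.items():
        if rule <= aus:
            return emotion
    return "neutral"

votes = [{6: True, 12: True, 4: False},
         {6: True, 12: True, 4: True},
         {6: False, 12: True, 4: False}]
emotion = map_emotion(fuse_au_votes(votes))  # AU6: 2/3, AU12: 3/3 present
```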

Stereo-based Robust Human Detection on Pose Variation Using Multiple Oriented 2D Elliptical Filters (방향성 2차원 타원형 필터를 이용한 스테레오 기반 포즈에 강인한 사람 검출)

  • Cho, Sang-Ho;Kim, Tae-Wan;Kim, Dae-Jin
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.10
    • /
    • pp.600-607
    • /
    • 2008
  • This paper proposes a human detection method that is robust to pose variation, using multiple oriented 2D elliptical filters (MO2DEFs). Unlike the existing object-oriented scale-adaptive filter (OOSAF), the MO2DEFs can detect humans regardless of their pose. To overcome the OOSAF's limitation, we introduce the MO2DEFs, whose shapes are oriented ellipses. Human detection is performed by applying four 2D elliptical filters with different orientations to the 2D spatial-depth histogram and then thresholding the filtered histograms. In addition, the human pose is determined from the convolution results computed with the MO2DEFs. Human candidates are verified either by detecting the face or by matching head-shoulder shapes over the estimated rotation. The experimental results showed that the accuracy of pose angle estimation was about 88%, and that human detection using the MO2DEFs outperformed the OOSAF by 15~20%, especially for posed humans.
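
The orientation-estimation idea can be sketched as follows: for each candidate orientation, a binary elliptical mask is generated and correlated with the 2D histogram, and the orientation with the highest response wins. The axis lengths and the single-location scoring below are illustrative simplifications, not the paper's filter parameters.

```python
import numpy as np

# Score a 2D spatial-depth histogram with oriented elliptical masks and
# return the best-matching orientation.

def ellipse_mask(size, a, b, angle_deg):
    """Binary mask of an ellipse (semi-axes a, b) rotated by angle_deg."""
    c = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size]
    x, y = x - c, y - c
    t = np.deg2rad(angle_deg)
    xr = x * np.cos(t) + y * np.sin(t)
    yr = -x * np.sin(t) + y * np.cos(t)
    return ((xr / a) ** 2 + (yr / b) ** 2 <= 1.0).astype(float)

def best_orientation(hist, angles=(0, 45, 90, 135), a=10, b=4):
    """Return the angle whose elliptical mask best matches the histogram."""
    scores = [float((hist * ellipse_mask(hist.shape[0], a, b, ang)).sum())
              for ang in angles]
    return angles[int(np.argmax(scores))]

# A histogram that is itself a 45-degree ellipse responds most at 45.
hist = ellipse_mask(25, 10, 4, 45)
est = best_orientation(hist)
```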

Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model (다양한 형식의 얼굴정보와 준원근 카메라 모델해석을 이용한 얼굴 특징점 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.9B no.5
    • /
    • pp.563-570
    • /
    • 2002
  • Robust extraction of 3D facial features and global motion information from a 2D image sequence for MPEG-4 SNHC face model encoding is described. The facial regions are detected from the image sequence using a multi-modal fusion technique that combines range, color, and motion information. Among the MPEG-4 FDP (Face Definition Parameters), 23 facial features are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted facial features are used to recover the 3D shape and global motion of the object using a paraperspective camera model and the SVD (Singular Value Decomposition) factorization method. A 3D synthetic object is designed and tested to demonstrate the performance of the proposed algorithm. The recovered 3D motion information is transformed into global motion parameters of the MPEG-4 FAP (Face Animation Parameters) to synchronize a generic face model with a real face.
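
The SVD factorization step can be sketched on synthetic data. Under an affine camera (orthographic here for simplicity; the paper uses a paraperspective model), the centered 2F x P measurement matrix W of tracked feature points factors as W = M S with rank 3, where M stacks the per-frame camera rows (motion) and S holds the 3D points (shape).

```python
import numpy as np

# Rank-3 factorization of a synthetic measurement matrix via truncated SVD.
rng = np.random.default_rng(1)
S = rng.normal(size=(3, 23))          # 23 feature points, as in the paper
frames = 5
M = rng.normal(size=(2 * frames, 3))  # stacked 2x3 affine camera matrices
W = M @ S                             # noise-free 2F x P measurements

U, sv, Vt = np.linalg.svd(W, full_matrices=False)
M_hat = U[:, :3] * np.sqrt(sv[:3])    # recovered motion (up to an affine
S_hat = np.sqrt(sv[:3])[:, None] * Vt[:3]  # ambiguity) and shape

# The rank-3 factorization reproduces the measurements exactly.
err = float(np.abs(W - M_hat @ S_hat).max())
```

In the noise-free case the product M_hat @ S_hat equals W; with real, noisy tracks the truncated SVD gives the best rank-3 approximation, and the affine ambiguity is later resolved with metric constraints.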

Integral Regression Network for Facial Landmark Detection (얼굴 특징점 검출을 위한 적분 회귀 네트워크)

  • Kim, Do Yeop;Chang, Ju Yong
    • Journal of Broadcast Engineering
    • /
    • v.24 no.4
    • /
    • pp.564-572
    • /
    • 2019
  • With the development of deep learning, the performance of facial landmark detection methods has been greatly improved. The heat map regression method, which is a representative facial landmark detection method, is widely used as an efficient and robust method. However, the landmark coordinates cannot be directly obtained through a single network, and the accuracy is reduced in determining the landmark coordinates from the heat map. To solve these problems, we propose to combine integral regression with the existing heat map regression method. Through experiments using various datasets, we show that the proposed integral regression network significantly improves the performance of facial landmark detection.
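
The core idea of integral regression can be sketched as a soft-argmax: instead of a hard argmax over the heat map (which quantizes coordinates to the pixel grid), the heat map is softmax-normalized and the landmark coordinate is the probability-weighted average of pixel locations. Being differentiable, this lets a network output coordinates directly. The temperature value below is an arbitrary choice for illustration.

```python
import numpy as np

# Soft-argmax (integral regression) over a 2D heat map.

def soft_argmax_2d(heatmap, beta=25.0):
    """Expected (x, y) location under a softmax over the heat map."""
    h, w = heatmap.shape
    p = np.exp(beta * (heatmap - heatmap.max()))   # stable softmax weights
    p /= p.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return float((p * xs).sum()), float((p * ys).sum())

# A Gaussian bump centered at (12.5, 7.5) yields sub-pixel coordinates,
# which a hard argmax could not produce.
ys, xs = np.mgrid[0:32, 0:32]
hm = np.exp(-((xs - 12.5) ** 2 + (ys - 7.5) ** 2) / 8.0)
x, y = soft_argmax_2d(hm)
```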

A Robust Approach to Automatic Iris Localization

  • Xu, Chengzhe;Ali, Tauseef;Kim, In-Taek
    • Journal of the Optical Society of Korea
    • /
    • v.13 no.1
    • /
    • pp.116-122
    • /
    • 2009
  • In this paper, a robust method is developed to locate the irises of both eyes. The method places no restrictions on the background. It is based on the AdaBoost algorithm for detecting the face and eye candidate points. The candidate points are tuned so that two of them lie exactly at the centers of the irises. A mean crossing function and a convolution template are proposed to filter out candidate points and select the iris pair. The advantage of this hybrid method is that AdaBoost is robust to different illumination conditions and backgrounds, while the tuning step improves the precision of iris localization and the convolution filter and mean crossing function reliably filter out candidate points and select the iris pair. The proposed method is evaluated on three public databases: Bern, Yale, and BioID. Extensive experimental results verify its robustness and accuracy. Using the Bern database, the performance of the proposed algorithm is also compared with some existing methods.
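
The pair-selection stage can be illustrated with a purely geometric stand-in: among candidate points, the iris pair should be roughly horizontally aligned and a plausible distance apart. The paper's actual selection uses a mean crossing function and a convolution template; the score and thresholds below are simplified assumptions for illustration only.

```python
import itertools

# Geometric stand-in for iris-pair selection among candidate points.

def select_iris_pair(points, min_dist=30, max_dist=120):
    """Return the candidate pair with the best alignment score, or None.

    Pairs must be min_dist..max_dist apart horizontally; lower score
    (more level, more compact) is better.
    """
    best, best_score = None, float("inf")
    for p, q in itertools.combinations(points, 2):
        dx, dy = abs(p[0] - q[0]), abs(p[1] - q[1])
        if min_dist <= dx <= max_dist:
            score = dy + 0.1 * dx
            if score < best_score:
                best, best_score = (p, q), score
    return best

# Two true eye candidates plus a spurious point on the mouth region.
candidates = [(50, 100), (120, 103), (85, 160)]
pair = select_iris_pair(candidates)
```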

Face Verification System Using Optimum Nonlinear Composite Filter (최적화된 비선형 합성필터를 이용한 얼굴인증 시스템)

  • Lee, Ju-Min;Yeom, Seok-Won;Hong, Seung-Hyun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.3
    • /
    • pp.44-51
    • /
    • 2009
  • This paper addresses a face verification method using a nonlinear composite filter. The face verification process is simple and fast because it does not require any preprocessing such as face detection, alignment, or cropping. The optimum nonlinear composite filter is derived by minimizing the output energy due to additive noise and the input scene while keeping the outputs for the training images constant. The filter gains discrimination capability and robustness to additive noise by minimizing the outputs of the input scene and the noise, respectively. We build the nonlinear composite filter with two training images and compare it with the conventional synthetic discriminant function (SDF) filter. Receiver operating characteristic (ROC) curves are presented as the metric for performance evaluation. According to the experimental results, the optimum nonlinear composite filter is a robust scheme for face verification in low-resolution and noisy environments.
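
The linear SDF baseline the paper compares against can be sketched directly: the filter h is the minimum-energy vector whose correlation with each training image x_i equals a prescribed constant c_i, which gives h = X (X^T X)^{-1} c with the vectorized training images as the columns of X. The paper's optimum nonlinear composite filter extends this constrained-minimization idea; only the linear case is shown here, on random stand-in images.

```python
import numpy as np

# Conventional SDF filter: minimum-norm h subject to X^T h = c.
rng = np.random.default_rng(2)
X = rng.normal(size=(256, 2))     # two vectorized training images (columns)
c = np.array([1.0, 1.0])          # prescribed correlation outputs

h = X @ np.linalg.solve(X.T @ X, c)

# By construction the filter output equals c on every training image.
outputs = X.T @ h
```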

Feature Detection and Simplification of 3D Face Data with Facial Expressions

  • Kim, Yong-Guk;Kim, Hyeon-Joong;Choi, In-Ho;Kim, Jin-Seo;Choi, Soo-Mi
    • ETRI Journal
    • /
    • v.34 no.5
    • /
    • pp.791-794
    • /
    • 2012
  • We propose an efficient framework to realistically render 3D faces with a reduced set of points. First, a robust active appearance model is presented to detect facial features in the projected faces under different illumination conditions. Then, an adaptive simplification of 3D faces is proposed to reduce the number of points, yet preserve the detected facial features. Finally, the point model is rendered directly, without such additional processing as parameterization of skin texture. This fully automatic framework is very effective in rendering massive facial data on mobile devices.