• Title/Summary/Keyword: robust face detection


Performance of Human Skin Detection in Images According to Color Spaces

  • Kim, Jun-Yup; Do, Yong-Tae
    • Proceedings of the Korea Society of Information Technology Applications Conference / 2005.11a / pp.153-156 / 2005
  • Skin region detection in images is an important process in many computer vision applications targeting humans, such as hand gesture recognition and face identification. It usually starts at the pixel level and involves a pre-processing step of color space transformation followed by a classification process. A color space transformation is assumed to increase separability between skin and non-skin classes, to increase similarity among different skin tones, and to provide robust performance under varying imaging conditions, without any complicated analysis. In this paper, we examine whether the color space transformation actually brings those benefits to skin region detection, using a set of human hand images with different postures, backgrounds, people, and illuminations. Our experimental results indicate that the color space transformation does affect skin detection performance. Although the performance depends on the camera and surrounding conditions, the normalized [R, G, B] color space may be a good choice in general.
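
As a rough illustration of the pixel-level classification this paper evaluates, the sketch below converts an image to normalized [R, G, B] chromaticities and thresholds them; the bounds, the `hand.jpg` filename, and the function name are illustrative assumptions, not values from the paper.

```python
import numpy as np
import cv2

def normalized_rgb_skin_mask(bgr_image, r_range=(0.36, 0.50), g_range=(0.28, 0.36)):
    """Pixel-level skin classification in normalized [R, G, B] space.

    The chromaticity thresholds are illustrative assumptions; in practice
    they would be learned from labeled skin/non-skin samples.
    """
    img = bgr_image.astype(np.float32)
    b, g, r = cv2.split(img)
    total = r + g + b + 1e-6           # avoid division by zero
    rn, gn = r / total, g / total      # normalized chromaticities (r + g + b = 1)
    mask = ((rn > r_range[0]) & (rn < r_range[1]) &
            (gn > g_range[0]) & (gn < g_range[1]))
    return (mask * 255).astype(np.uint8)

if __name__ == "__main__":
    image = cv2.imread("hand.jpg")     # hypothetical test image
    skin = normalized_rgb_skin_mask(image)
    cv2.imwrite("skin_mask.png", skin)
```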


A review on robust principal component analysis (강건 주성분분석에 대한 요약)

  • Lee, Eunju; Park, Mingyu; Kim, Choongrak
    • The Korean Journal of Applied Statistics / v.35 no.2 / pp.327-333 / 2022
  • Principal component analysis (PCA) is the most widely used technique for dimension reduction; however, it is very sensitive to outliers. A robust version of PCA, called robust PCA, was suggested in two seminal papers by Candès et al. (2011) and Chandrasekaran et al. (2011). Robust PCA is an essential tool in artificial intelligence applications such as background detection, face recognition, ranking, and collaborative filtering, and it has received a great deal of attention in statistics as well as computer science. In this paper, we introduce recent algorithms for robust PCA and give some illustrative examples.
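
For readers unfamiliar with robust PCA, the sketch below implements the standard principal component pursuit formulation (low-rank plus sparse decomposition) with a basic inexact augmented Lagrangian loop; the parameter heuristics are common defaults, not taken from the review.

```python
import numpy as np

def soft_threshold(x, tau):
    """Elementwise soft thresholding (proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def robust_pca(M, max_iter=500, tol=1e-7):
    """Decompose M into low-rank L and sparse S (principal component pursuit).

    Solves min ||L||_* + lambda * ||S||_1 subject to L + S = M with an
    inexact augmented Lagrangian loop; parameter choices are standard heuristics.
    """
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                    # scaled dual variable
    norm_M = np.linalg.norm(M, "fro")
    for _ in range(max_iter):
        # Low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(soft_threshold(sig, 1.0 / mu)) @ Vt
        # Sparse update: elementwise soft thresholding
        S = soft_threshold(M - L + Y / mu, lam / mu)
        # Dual update and convergence check
        residual = M - L - S
        Y += mu * residual
        if np.linalg.norm(residual, "fro") / norm_M < tol:
            break
    return L, S
```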

Implementation of Driver Fatigue Monitoring System (운전자 졸음 인식 시스템 구현)

  • Choi, Jin-Mo; Song, Hyok; Park, Sang-Hyun; Lee, Chul-Dong
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.8C / pp.711-720 / 2012
  • In this paper, we introduce the implementation of a driver fatigue monitoring system and its results. A commercially available web camera is used as the input video device. Haar features are used for face detection, and illumination normalization is adopted to handle arbitrary lighting conditions; after normalization, the facial image is easily extracted using the Haar face features. The eye candidate area is reduced by anthropometric measurements, and eye detection is performed with a PCA and circle-mask mixture model, which achieves robust eye detection under arbitrarily changing illumination. The drowsiness state is determined by a simple calculation on the illumination-normalized eye images. When the driver's drowsiness level is detected, the system raises an alarm and vibrates the seatbelt through the controller area network (CAN). The algorithm is implemented with low computational complexity and a high recognition rate, achieving a 97% correct detection rate in in-car experiments.
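
A minimal sketch of this kind of pipeline using OpenCV's stock Haar cascade is shown below. The eye-openness measure and thresholds are placeholder assumptions standing in for the paper's PCA and circle-mask model, and the CAN output is reduced to a print statement.

```python
import cv2
import numpy as np

# Stock OpenCV Haar cascade (stand-in for the paper's detector).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def eye_openness(gray_face):
    """Crude eye-openness score from the upper face band.

    Measures how much dark (pupil/iris) area survives a fixed threshold;
    a placeholder for the paper's PCA plus circle-mask mixture model.
    """
    h, w = gray_face.shape
    eye_band = gray_face[int(0.2 * h):int(0.5 * h), :]   # anthropometric prior on eye rows
    eye_band = cv2.equalizeHist(eye_band)                # rough illumination normalization
    dark = cv2.threshold(eye_band, 60, 255, cv2.THRESH_BINARY_INV)[1]
    return dark.mean() / 255.0

cap = cv2.VideoCapture(0)                                # web camera input
closed_frames = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    for (x, y, w, h) in faces:
        score = eye_openness(gray[y:y + h, x:x + w])
        closed_frames = closed_frames + 1 if score < 0.02 else 0   # assumed threshold
        if closed_frames > 15:                           # ~0.5 s at 30 fps: assumed
            print("Drowsiness alarm")                    # an in-car system would send a CAN message
    if cv2.waitKey(1) == 27:
        break
cap.release()
```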

Real-time Face Tracking Method using Improved CamShift (향상된 캠쉬프트를 사용한 실시간 얼굴추적 방법)

  • Lee, Jun-Hwan; Yoo, Jisang
    • Journal of Broadcast Engineering / v.21 no.6 / pp.861-877 / 2016
  • This paper first discusses the disadvantages of the existing CamShift algorithm for real-time face tracking and then proposes a new CamShift algorithm that performs better. The existing CamShift shows unstable tracking when the background contains colors similar to the tracked object. This drawback is resolved by using Kinect's per-pixel depth information together with a skin detection algorithm that extracts candidate skin regions in the HSV color space. Additionally, even when the tracked object is lost or occlusion occurs, a feature point-based matching algorithm keeps the tracker robust to occlusion. By applying the improved CamShift algorithm to face tracking, the proposed real-time face tracking algorithm can be applied to various fields. Experimental results show that the proposed algorithm is superior in tracking performance to the existing TLD tracking algorithm and offers faster processing. Also, while the proposed algorithm is slower than the original CamShift, it overcomes all of CamShift's existing shortcomings.
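
A bare-bones OpenCV CamShift loop seeded by an HSV skin back-projection is sketched below; the Kinect depth filtering and feature point re-detection described in the paper are omitted, and the initial window and skin ranges are assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
x, y, w, h = 200, 150, 100, 120            # assumed initial face window (normally from a detector)
track_window = (x, y, w, h)

# Hue histogram of the initial region, restricted to plausible skin pixels.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
roi = hsv[y:y + h, x:x + w]
skin_mask = cv2.inRange(roi, (0, 40, 60), (25, 180, 255))   # illustrative skin range
roi_hist = cv2.calcHist([roi], [0], skin_mask, [32], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # CamShift adapts the window size and orientation to the back-projection.
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    pts = cv2.boxPoints(rot_rect).astype(np.int32)
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) == 27:
        break
cap.release()
```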

Face Detection in Color images (컬러이미지에서의 얼굴검출)

  • 박동희; 박호식; 남기환; 한준희; 나상동; 배철수
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2003.10a / pp.236-238 / 2003
  • Human face detection is often the first step in applications such as video surveillance, human-computer interfaces, face recognition, and image database management. We have constructed a simple and fast system to detect frontal human faces in complex environments and under different illumination. This paper presents a fast segmentation method that combines neighboring pixels with similar hue. The algorithm constructs eye, mouth, and boundary maps for verifying each face candidate. We test the system on images with complex environments and confusing objects. The experiments show robust detection results with few falsely detected faces.
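
The hue-based grouping step could be sketched roughly as below: threshold skin-like hue, connect neighboring pixels, and keep components with face-like size and aspect ratio. The bounds and shape test are assumptions standing in for the paper's eye, mouth, and boundary maps.

```python
import cv2
import numpy as np

def face_candidates(bgr_image):
    """Group neighboring skin-hue pixels and return face-like bounding boxes."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 30, 60), (25, 170, 255))        # illustrative hue/saturation bounds
    skin = cv2.morphologyEx(skin, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(skin)
    boxes = []
    for i in range(1, n):                                        # label 0 is the background
        x, y, w, h, area = stats[i]
        if area > 1500 and 0.6 < w / float(h) < 1.4:             # assumed face-like shape test
            boxes.append((x, y, w, h))
    return boxes
```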


Robust Face Alignment using Progressive AAM (점진적 AAM을 이용한 강인한 얼굴 윤곽 검출)

  • Kim, Dae-Hwan; Kim, Jae-Min; Cho, Seong-Won; Jang, Yong-Suk; Kim, Boo-Gyoun; Chung, Sun-Tae
    • The Journal of the Korea Contents Association / v.7 no.2 / pp.11-20 / 2007
  • AAM has been successfully applied to face alignment, but its performance is very sensitive to initial values. In this paper, we propose a face alignment method using a progressive AAM. The proposed method consists of two stages: a modeling and relation-derivation stage and a fitting stage. The first stage builds two AAM models, an inner-face model and a whole-face model, and then derives the relation matrix between the inner-face and whole-face model parameter vectors. The fitting stage proceeds progressively in two phases. In the first phase, the method finds the feature parameters for the inner facial feature points of a new face; in the second phase, it localizes the whole set of facial feature points using initial values estimated from the inner feature parameters and the relation matrix obtained in the first stage. Experiments verify that the proposed progressive AAM-based face alignment is more robust to pose and background than conventional AAM-based face alignment.
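
The relation matrix between the two parameter vectors can be estimated by least squares over the training set, roughly as in the sketch below; the array names are hypothetical and this covers only that step, not a full AAM fitter.

```python
import numpy as np

def derive_relation_matrix(inner_params, whole_params):
    """Least-squares map from inner-face AAM parameters to whole-face parameters.

    inner_params: (n_samples, p_inner) parameter vectors from the inner-face AAM.
    whole_params: (n_samples, p_whole) parameter vectors from the whole-face AAM.
    Returns R such that whole ~= [inner, 1] @ R (affine relation, an assumption).
    """
    X = np.hstack([inner_params, np.ones((inner_params.shape[0], 1))])  # add bias column
    R, *_ = np.linalg.lstsq(X, whole_params, rcond=None)
    return R

def predict_whole_params(inner_vec, R):
    """Initial whole-face parameters for the second fitting phase."""
    return np.hstack([inner_vec, 1.0]) @ R
```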

A Study On Face Feature Points Using Active Discrete Wavelet Transform (Active Discrete Wavelet Transform를 이용한 얼굴 특징 점 추출)

  • Chun, Soon-Yong; Zijing, Qian; Ji, Un-Ho
    • Journal of the Institute of Electronics Engineers of Korea SC / v.47 no.1 / pp.7-16 / 2010
  • Face recognition is an active subject in the area of computer pattern recognition with a wide range of potential applications. Automatic extraction of facial feature points is an important step in automatic face recognition, and whether the facial features are extracted correctly has a direct influence on recognition performance. In this paper, a new method of facial feature extraction based on the discrete wavelet transform is proposed. First, a face image is captured with a PC camera. Second, the face image is decomposed using the discrete wavelet transform. Finally, horizontal and vertical projections are used to extract the facial features, and face recognition is performed from the extracted features. The results show that this method extracts facial feature points quickly and accurately; the system not only detects the feature points with high accuracy but is also more robust than traditional methods for locating facial features.
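
The decomposition-and-projection step could look roughly like the following PyWavelets sketch: a 2D Haar DWT followed by horizontal and vertical projections of the detail energy. The peak-picking rule and file name are assumptions.

```python
import numpy as np
import pywt
import cv2

def dwt_projection_features(gray_face):
    """Decompose with a 2D Haar DWT, then project detail energy onto the axes."""
    cA, (cH, cV, cD) = pywt.dwt2(gray_face.astype(np.float32), "haar")
    detail = np.abs(cH) + np.abs(cV) + np.abs(cD)      # edge/texture energy per coefficient
    horizontal_proj = detail.sum(axis=1)               # one value per row: eye/mouth rows peak
    vertical_proj = detail.sum(axis=0)                 # one value per column
    # Assumed rule: candidate feature rows/columns are the strongest projection peaks.
    rows = np.argsort(horizontal_proj)[-3:] * 2        # *2 maps back to the original resolution
    cols = np.argsort(vertical_proj)[-4:] * 2
    return sorted(rows.tolist()), sorted(cols.tolist())

if __name__ == "__main__":
    face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)   # hypothetical cropped face image
    print(dwt_projection_features(face))
```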

A Study on Face Recognition using Support Vector Machine (SVM을 이용한 얼굴 인식에 관한 연구)

  • Kim, Seung-Jae; Lee, Jung-Jae
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.16 no.6 / pp.183-190 / 2016
  • This study proposes a more stable and robust recognition algorithm that detects faces reliably even under changes in lighting and viewing angle, while remaining efficient in computation and detection performance. The proposed algorithm isolates the face area after normalization through pre-processing and obtains a feature vector using principal component analysis (PCA). The feature vector is then fed to a support vector machine (SVM) to test candidate face areas, and final face recognition is performed using the same feature vector. The proposed algorithm increases the stability and accuracy of recognition rates, and because the two-dimensional representation avoids a large amount of calculation, real-time recognition is possible.
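
A compact scikit-learn sketch of a PCA-feature-plus-SVM pipeline of this kind is shown below, using the public LFW dataset as a stand-in for the paper's data; the component count and SVM parameters are assumptions.

```python
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Public dataset as a stand-in; the paper uses its own normalized face crops.
faces = fetch_lfw_people(min_faces_per_person=50, resize=0.5)
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, random_state=0)

# PCA compresses each face into a low-dimensional feature vector (eigenfaces),
# then an SVM classifies the projected vectors.
model = make_pipeline(
    PCA(n_components=100, whiten=True, random_state=0),
    SVC(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```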

Detection Method of Human Face, Facial Components and Rotation Angle Using Color Value and Partial Template (컬러정보와 부분 템플릿을 이용한 얼굴영역, 요소 및 회전각 검출)

  • Lee, Mi-Ae; Park, Ki-Soo
    • The KIPS Transactions: Part B / v.10B no.4 / pp.465-472 / 2003
  • For effective pre-processing of a face input image, it is necessary to detect each of the face components, calculate the face area, and estimate the rotation angle of the face. The proposed method produces robust results under conditions such as different illumination levels, variable face sizes, varying face rotation angles, and background colors similar to the skin color of the face. The first step detects the estimated face area using adapted skin color information in a wide-band HSV color space converted from RGB, together with histogram-based skin color information. Using these results, a lip area is detected within the estimated face area. After estimating the rotation angle from the slope of the lip area along the X axis, the method determines the face shape based on face information. After detecting the eyes in the face area by matching a partial template made from both eyes, the Y-axis rotation angle is estimated by calculating the eyes' locations in three-dimensional space relative to the face area. Experiments on various face images verified the effectiveness of the proposed algorithm.
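
One of the steps, estimating the in-plane rotation from the detected lip area, could be sketched as below; the lip color bounds and the moment-based orientation estimate are assumptions, not the paper's exact histogram and template procedure.

```python
import cv2
import numpy as np

def lip_rotation_angle(bgr_face):
    """Estimate in-plane face rotation from the orientation of the lip region.

    The lip color bounds are illustrative assumptions; the paper additionally
    combines histogram-based skin color with a partial template for the eyes.
    """
    hsv = cv2.cvtColor(bgr_face, cv2.COLOR_BGR2HSV)
    h, w = hsv.shape[:2]
    lower_half = hsv[h // 2:, :]                      # lips are expected in the lower face half
    # Reddish, saturated pixels as lip candidates (hue wraps around 0/180 in OpenCV).
    lips = cv2.inRange(lower_half, (160, 60, 60), (180, 255, 255)) | \
           cv2.inRange(lower_half, (0, 60, 60), (8, 255, 255))
    ys, xs = np.nonzero(lips)
    if len(xs) < 50:
        return None                                   # no reliable lip region found
    # Orientation of the pixel cloud via second-order central moments.
    x0, y0 = xs.mean(), ys.mean()
    mu11 = ((xs - x0) * (ys - y0)).mean()
    mu20 = ((xs - x0) ** 2).mean()
    mu02 = ((ys - y0) ** 2).mean()
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)   # radians, major axis of the lip blob
    return np.degrees(angle)
```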

Automatic 3D Facial Movement Detection from Mirror-reflected Multi-Image for Facial Expression Modeling (거울 투영 이미지를 이용한 3D 얼굴 표정 변화 자동 검출 및 모델링)

  • Kyung, Kyu-Min; Park, Mignon; Hyun, Chang-Ho
    • Proceedings of the KIEE Conference / 2005.05a / pp.113-115 / 2005
  • This paper presents a method for 3D modeling of facial expressions from a frontal view and mirror-reflected multi-images. Since the proposed system uses only one camera, two mirrors, and simple mirror properties, it is robust, accurate, and inexpensive, and it avoids the problem of synchronizing data among different cameras. Mirrors located near the subject's cheeks reflect the side views of markers on the face. To optimize the system, we must select facial feature points closely associated with human emotions, so we refer to the FDP (Facial Definition Parameters) and FAP (Facial Animation Parameters) defined by MPEG-4 SNHC (Synthetic/Natural Hybrid Coding). Colorful dot markers are placed on the selected feature points to detect facial deformation as the subject makes various expressions. Before computing the 3D coordinates of the extracted feature points, the points are grouped according to the facial part they belong to, which makes the matching process automatic. We experimented on about twenty Korean subjects in their late twenties and early thirties. Finally, we verify the performance of the proposed method by simulating an animation of the 3D facial expressions.
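
The mirror property being exploited, that a reflected view acts as a second virtual camera, can be illustrated with a plain two-view triangulation once the real and virtual projection matrices are known; the functions below assume a prior calibration step and are not the paper's implementation.

```python
import cv2
import numpy as np

def virtual_camera_from_mirror(P_real, plane_normal, plane_d):
    """Virtual camera matrix produced by reflecting across the mirror plane n.x = d."""
    n = np.asarray(plane_normal, dtype=np.float64).reshape(3, 1)
    n = n / np.linalg.norm(n)
    H = np.eye(4)                          # 4x4 Householder reflection about the plane
    H[:3, :3] -= 2.0 * (n @ n.T)
    H[:3, 3:] = 2.0 * plane_d * n
    return P_real @ H

def triangulate_markers(P_real, P_virtual, pts_real, pts_mirror):
    """Recover 3D marker positions from the frontal view and one mirror view.

    P_real, P_virtual: 3x4 projection matrices of the camera and of the virtual
    camera implied by the mirror (assumed to come from calibration).
    pts_real, pts_mirror: 2xN arrays of matched marker image coordinates.
    """
    pts4d = cv2.triangulatePoints(P_real, P_virtual,
                                  pts_real.astype(np.float64),
                                  pts_mirror.astype(np.float64))
    return (pts4d[:3] / pts4d[3]).T        # N x 3 Euclidean coordinates
```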
