• Title/Summary/Keyword: Facial Detection


Face Detection System Based on Candidate Extraction through Segmentation of Skin Area and Partial Face Classifier (피부색 영역의 분할을 통한 후보 검출과 부분 얼굴 분류기에 기반을 둔 얼굴 검출 시스템)

  • Kim, Sung-Hoon;Lee, Hyon-Soo
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.2 / pp.11-20 / 2010
  • In this paper we propose a face detection system that consists of a face candidate extraction method using skin color and a face verification method using features of facial structure. First, the proposed candidate extraction method applies image segmentation and merging algorithms to skin-color regions and their neighboring regions; these two algorithms make it possible to select face candidates from a wide variety of faces in images with complicated backgrounds. Second, the proposed face verification method uses a partial face classifier to verify facial structure and classify face versus non-face. This classifier uses only face images in the learning process and does not require non-face images, so fewer training images are needed. In the experiments, the proposed candidate extraction method finds on average 9.55% more faces as candidates than other methods. In the face/non-face classification experiment, the proposed verification method achieves a face classification rate that is on average 4.97% higher than other face/non-face classifiers when the non-face classification rate is about 99%.
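
The paper's own segmentation-and-merging and partial-face-classifier steps are not spelled out in the abstract, so the following is only a minimal sketch of the first stage, skin-color-based candidate extraction, using OpenCV with an assumed YCrCb threshold range:

```python
# Minimal sketch of skin-color face-candidate extraction (illustrative only):
# threshold in YCrCb, clean the mask, and keep large connected blobs as candidates.
import cv2
import numpy as np

def face_candidates(bgr, min_area=1000):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    # Commonly cited Cr/Cb skin range; the paper's own thresholds may differ.
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Each sufficiently large skin blob becomes a face-candidate bounding box.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```

Each returned box would then go to a verification stage (the partial face classifier in the paper) for the face/non-face decision.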

Performance Comparison of Skin Color Detection Algorithms by the Changes of Backgrounds (배경의 변화에 따른 피부색상 검출 알고리즘의 성능 비교)

  • Jang, Seok-Woo
    • Journal of the Korea Society of Computer and Information / v.15 no.3 / pp.27-35 / 2010
  • Accurately extracting skin color regions is very important in areas such as face recognition and tracking, facial expression recognition, adult image identification, and healthcare. In this paper, we evaluate the performance of several skin color detection algorithms in indoor environments while changing the distance between the camera and the object as well as the background color of the object. The distance ranges from 60 cm to 120 cm, and the background colors are white, black, orange, pink, and yellow. The algorithms used for the evaluation are the Peer, NNYUV, NNHSV, LutYUV, and Kimset algorithms. The experimental results show that the NNHSV, NNYUV, and LutYUV algorithms are stable, while the other algorithms are somewhat sensitive to changes in background. We therefore expect the comparative results of this paper to be useful when developing new skin color extraction algorithms that are robust to dynamic real environments.
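
As an illustration of the kind of comparison the paper reports, the sketch below runs two simple skin-color rules on the same frame and prints the fraction of pixels each classifies as skin. The explicit RGB rule is the one commonly attributed to Peer et al.; the learned detectors (NNYUV, NNHSV, LutYUV, Kimset) are not reproduced here.

```python
# Compare simple skin-color detectors on one frame (illustrative harness only).
import cv2
import numpy as np

def skin_peer_rgb(bgr):
    # Explicit RGB rule commonly attributed to Peer et al.
    b, g, r = [bgr[:, :, i].astype(np.int32) for i in range(3)]
    spread = np.max(bgr, axis=2).astype(np.int32) - np.min(bgr, axis=2).astype(np.int32)
    return ((r > 95) & (g > 40) & (b > 20) & (spread > 15) &
            (np.abs(r - g) > 15) & (r > g) & (r > b))

def skin_hsv(bgr):
    # Rough hue/saturation band; practical detectors are usually learned from data.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, (0, 40, 60), (25, 255, 255)) > 0

def compare(bgr):
    for name, fn in [("Peer-RGB", skin_peer_rgb), ("HSV-threshold", skin_hsv)]:
        print(f"{name}: {fn(bgr).mean():.1%} of pixels classified as skin")
```

Running such a harness over the same scene with different backgrounds and camera distances reproduces the flavor of the paper's sensitivity comparison.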

Face Tracking Using Face Feature and Color Information (색상과 얼굴 특징 정보를 이용한 얼굴 추적)

  • Lee, Kyong-Ho
    • Journal of the Korea Society of Computer and Information / v.18 no.11 / pp.167-174 / 2013
  • In this paper, we find faces in color images and implement the ability to track them. Face tracking is the task of locating face regions in an image using the capabilities of a computer system, and it is a necessary function for robots. However, face tracking cannot be performed simply by extracting skin color from the image, because the appearance of a face varies with conditions such as lighting and facial expression. In this paper, we add a lighting compensation function to skin-color pixel extraction and implement the entire processing system, including finding the eyes, nose, and mouth to confirm a region as a face. The lighting compensation function is an adjusted sine function; although its output is not well suited to human vision, it yields about a 4% improvement. Facial features are detected by amplifying and reducing pixel values and comparing the resulting images, from which the eye and nose positions and the lips are detected. Face tracking efficiency was good.
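
The abstract only says the compensation is an "adjusted sine function" without giving its form, so the following is purely an assumed sine-shaped tone curve applied through a lookup table, to show where such a step would sit before skin-color extraction:

```python
# Assumed sine-shaped lighting compensation applied via a lookup table.
import cv2
import numpy as np

def sine_lighting_compensation(gray):
    x = np.arange(256) / 255.0
    # Sine tone curve: lifts dark and mid tones, compresses highlights (assumed form).
    lut = np.clip(255 * np.sin(0.5 * np.pi * x), 0, 255).astype(np.uint8)
    return cv2.LUT(gray, lut)
```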

Autism Spectrum Disorder Detection in Children using the Efficacy of Machine Learning Approaches

  • Tariq Rafiq;Zafar Iqbal;Tahreem Saeed;Yawar Abbas Abid;Muneeb Tariq;Urooj Majeed;Akasha
    • International Journal of Computer Science & Network Security / v.23 no.4 / pp.179-186 / 2023
  • The sound growth of children is essential for the future prosperity of any society. Autism Spectrum Disorder (ASD) is a neurobehavioral disorder that affects a child's social interaction and has an undesirable effect on learning, speaking, and responding skills. These children may be over- or under-sensitive to touch, smell, and hearing. Symptoms usually appear in children aged 4 to 11 years, but parents often do not notice them and fail to detect the disorder at an early stage. The current diagnostic process relies on clinical sessions that are time consuming and expensive. To complement this conventional method, machine learning techniques are being used, improving both the time required and the precision of diagnosis. We applied a TFLite model to an image-based dataset to predict autism from a child's facial features. We then trained several machine learning models, including Logistic Regression, KNN, Gaussian Naïve Bayes, Random Forest, and Multi-Layer Perceptron, on the Autism Spectrum Quotient (AQ) dataset to improve the accuracy of ASD detection. On the image-based dataset, the TFLite model shows 80% accuracy, and on the AQ dataset, the Logistic Regression and MLP models achieve 100% accuracy.
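
The tabular part of such a study maps naturally onto scikit-learn; the sketch below trains the same classifier families named in the abstract and reports test accuracy. The file name and column names are placeholders, not the paper's data.

```python
# Hedged sketch of the AQ-questionnaire part: train the classifier families named
# in the abstract and compare their test accuracies.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

df = pd.read_csv("aq_dataset.csv")            # placeholder path, not the paper's file
X, y = df.drop(columns=["ASD"]), df["ASD"]    # "ASD" label column is an assumption
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
    "GaussianNB": GaussianNB(),
    "RandomForest": RandomForestClassifier(),
    "MLP": MLPClassifier(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, round(model.score(X_test, y_test), 3))
```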

Wavelet Transform-based Face Detection for Real-time Applications (실시간 응용을 위한 웨이블릿 변환 기반의 얼굴 검출)

  • 송해진;고병철;변혜란
    • Journal of KIISE: Software and Applications / v.30 no.9 / pp.829-842 / 2003
  • In this paper, we propose a new face detection and tracking method based on template matching for real-time applications such as teleconferencing, telecommunication, the front end of face-recognition-based surveillance systems, and video-phone applications. Since the main purpose of this paper is to track a face regardless of the environment, we use a template-based face tracking method. To generate robust face templates, we apply a wavelet transform to the average face image and extract three types of wavelet templates from the transformed low-resolution average face. However, since template matching is generally sensitive to changes in illumination, we apply min-max normalization with histogram equalization according to the intensity variation. A tracking method is also applied to reduce computation time and predict a precise face candidate region. Finally, facial components are detected, and from the relative distance between the two eyes we estimate the size of the facial ellipse.
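
A minimal sketch of the template idea, assuming PyWavelets and OpenCV: take the low-frequency (LL) band of an average face as the template and match it against a histogram-equalized, min-max-normalized frame. The paper's three-template scheme and its tracking step are not reproduced.

```python
# Build a low-resolution wavelet template and match it against a normalized frame.
import cv2
import numpy as np
import pywt

def wavelet_template(avg_face_gray, levels=2):
    coeffs = avg_face_gray.astype(np.float32)
    for _ in range(levels):
        coeffs, _ = pywt.dwt2(coeffs, "haar")      # keep only the LL (approximation) band
    return cv2.normalize(coeffs, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def match_face(frame_gray, template):
    eq = cv2.equalizeHist(frame_gray)              # illumination compensation
    eq = cv2.normalize(eq, None, 0, 255, cv2.NORM_MINMAX)
    small = cv2.resize(eq, (eq.shape[1] // 4, eq.shape[0] // 4))  # match the LL-band scale
    res = cv2.matchTemplate(small, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(res)
    return score, top_left                         # best-match score and location
```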

A Study on Fast Iris Detection for Iris Recognition in Mobile Phone (휴대폰에서의 홍채인식을 위한 고속 홍채검출에 관한 연구)

  • Park Hyun-Ae;Park Kang-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.2 s.308 / pp.19-29 / 2006
  • As the security of personal information becomes more important in mobile phones, iris recognition technology is starting to be applied to these devices. Conventional iris recognition requires magnified iris images, which has meant using a large camera with zoom and focus lenses; however, because of the size and cost constraints of mobile phones, such lenses are difficult to use. With rapid development and multimedia convergence trends in mobile phones, more and more companies have built mega-pixel cameras into their phones, which make it possible to capture a magnified iris image without zoom and focus lenses. Although facial images are captured at a distance from the user with a mega-pixel camera, the captured iris region contains sufficient pixel information for iris recognition. In this case, however, the eye region must be detected in the facial image for accurate iris recognition. We therefore propose a new fast iris detection method based on corneal specular reflection that is appropriate for mobile phones. To detect the specular reflection robustly, we present a theoretical framework for estimating its size and brightness based on eye, camera, and illuminator models. In addition, we use a successive on/off scheme for the illuminator to detect optical/motion blurring and sunlight effects in the input image. Experimental results show that the total processing time for detecting the iris region is 65 ms on average on a Samsung SCH-S2300 mobile phone (150 MHz ARM9 CPU). The correct iris detection rate is 99% for indoor images and 98.5% for outdoor images.
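
The core cue is that the illuminator's corneal glint shows up as a small, very bright blob; a rough sketch of finding such candidates by thresholding and size filtering is given below. The paper's analytic model for predicting the spot's size and brightness, and its illuminator on/off check, are not reproduced here.

```python
# Find candidate corneal specular reflections as small, very bright blobs.
import cv2

def find_specular_reflection(gray, min_area=3, max_area=200, thresh=240):
    _, bright = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    spots = []
    for c in contours:
        area = cv2.contourArea(c)
        if min_area <= area <= max_area:           # keep blobs of plausible spot size
            x, y, w, h = cv2.boundingRect(c)
            spots.append((x + w // 2, y + h // 2))
    return spots  # candidate eye positions around which to search for the iris
```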

Eye Detection Using Texture Filters (질감 필터를 이용한 눈 검출)

  • Park, Chan-Woo;Kim, Yong-Min;Park, Ki-Tae;Moon, Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI / v.46 no.6 / pp.70-78 / 2009
  • In this paper, we propose a novel method for eye detection using two texture filters that consider the textural and structural characteristics of eye regions. Human eyes have two characteristics: 1) the eyes are horizontally elongated, and 2) the pupils are circular. Considering these two characteristics, two texture filters are used for eye detection: a Gabor filter for detecting horizontally elongated eye shapes, and an ART descriptor for detecting circular pupils. To detect eye regions effectively, the proposed method consists of four steps. The first step extracts facial regions using the AdaBoost method. The second step normalizes the illumination using local information. The third step estimates candidate eye regions by merging the results of the two texture filters. The final step locates the exact eye regions using geometric information of the face. Experimental results show that the proposed method improves performance by 2.9-4.4% compared to existing methods.
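
A hedged sketch of the pipeline's general shape: Haar-cascade (AdaBoost-style) face detection, local illumination normalization, then a Gabor filter response inside the face region to emphasize horizontally elongated eye structure. The ART descriptor step for circular pupils and the final geometric verification are omitted; the filter parameters are illustrative.

```python
# Face detection + Gabor response map as a rough stand-in for the first three steps.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def eye_candidate_maps(gray):
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    # Arguments: ksize, sigma, theta, lambda, gamma; tune the orientation so the
    # filter responds to horizontal eye structure.
    gabor = cv2.getGaborKernel((21, 21), 4.0, 0, 10.0, 0.5)
    results = []
    for (x, y, w, h) in faces:
        roi = cv2.equalizeHist(gray[y:y + h, x:x + w])     # simple illumination normalization
        response = cv2.filter2D(roi, cv2.CV_32F, gabor)    # strong rows indicate eye candidates
        results.append(((x, y, w, h), response))
    return results
```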

Efficacy and Usability of Patient Isolation Transport Module for CBRN Disaster : A Manikin Simulation Study (특수재난 대응 환자 격리 이송 장비의 효율성 및 편의성 평가: 마네킹시뮬레이션 연구)

  • Kim, Ki-Hong;Hong, Ki-Jeong;Haam, Seung-Hee;Choi, Jin-Woo
    • Fire Science and Engineering / v.32 no.3 / pp.116-122 / 2018
  • In a Chemical, Biological, Radiological and Nuclear (CBRN) disaster, an integrated and optimized equipment package including a stretcher, isolation unit, and patient monitoring and treatment equipment is essential to provide proper treatment and prevent secondary contamination. The purpose of this study was to evaluate the efficiency and ease of use of an integrated CBRN disaster equipment package for disaster medical response. This was a randomized crossover study using a manikin simulation for emergency medical technicians (EMTs). All participants used the existing devices and a prototype of the integrated CBRN disaster equipment package alternately. Efficiency was measured as the time from a vital sign change to its detection or to treatment application. Ease of use was measured with questionnaires for the patient monitor, stretcher, and isolation unit. Twelve EMTs were enrolled. The hypoxia detection time of the integrated equipment group was significantly shorter than that of the existing equipment group (4.9 s (3.8-3.9) vs 3.5 s (2.5-3.9), p < 0.05). There was a decreasing tendency in ECG change detection and facial mask oxygen supply times, but no statistical significance was observed. Overall satisfaction with the patient monitoring device in the integrated equipment group was significantly higher than with the existing devices (4 (3.5-5) vs 3 (3-3), p < 0.05). The use of the integrated CBRN disaster equipment package shortened the hypoxia detection time and improved the usability of the vital sign monitor compared to the existing devices.
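
The abstract reports medians and p-values for a 12-participant crossover design but does not name the statistical test; a paired nonparametric comparison such as the Wilcoxon signed-rank test is one conventional choice for data of this shape. The numbers below are placeholders, not the study's data.

```python
# Paired nonparametric comparison of detection times (illustrative, dummy data).
from scipy.stats import wilcoxon

existing_times = [4.5, 5.0, 4.8, 5.2, 4.9, 4.7, 5.1, 4.6, 5.3, 4.8, 5.0, 4.9]    # seconds, dummy
integrated_times = [3.4, 3.6, 3.5, 3.8, 3.3, 3.7, 3.5, 3.6, 3.4, 3.9, 3.5, 3.6]  # seconds, dummy

stat, p = wilcoxon(existing_times, integrated_times)
print(f"Wilcoxon signed-rank: statistic={stat}, p={p:.3f}")
```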

Eye Gaze Tracking in Real-time Using POSIT (POSTIT정보 이용한 실시간 눈동자 시선 추적)

  • Kim, Mi-Kyung;Choi, Yeon-Seok;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2012.05a / pp.750-753 / 2012
  • This paper suggests a method for detecting the position of the eyes and tracking the gaze point in real time using POSIT. The algorithm finds candidate eye areas using the topological characteristics of the eyes and then determines the eye centers using their physical characteristics. The detected eyes, nose, and mouth are used as inputs to POSIT. Experimental results show that the proposed method effectively detects eyes in facial images from the FERET database and performs well when used to track the gaze point.
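
POSIT itself survives only in OpenCV's legacy C API; a comparable modern sketch recovers head orientation from the detected eye, nose, and mouth points with cv2.solvePnP. The 3D model coordinates and the camera matrix below are illustrative assumptions, not the paper's values.

```python
# Pose from eye/nose/mouth points via solvePnP (a modern stand-in for POSIT).
import cv2
import numpy as np

# Rough 3D positions (mm) of left eye, right eye, nose tip, mouth center (assumed model).
MODEL_POINTS = np.array([[-30, 35, -30], [30, 35, -30], [0, 0, 0], [0, -45, -25]], dtype=np.float64)

def estimate_pose(image_points, frame_size):
    h, w = frame_size
    focal = w  # crude focal-length guess; calibrate the camera for real use
    camera_matrix = np.array([[focal, 0, w / 2], [0, focal, h / 2], [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, np.asarray(image_points, dtype=np.float64),
                                  camera_matrix, None)   # no lens distortion assumed
    return rvec, tvec  # head orientation and translation; gaze direction follows from rvec
```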


Multi-Scale, Multi-Object and Real-Time Face Detection and Head Pose Estimation Using Deep Neural Networks (다중크기와 다중객체의 실시간 얼굴 검출과 머리 자세 추정을 위한 심층 신경망)

  • Ahn, Byungtae;Choi, Dong-Geol;Kweon, In So
    • The Journal of Korea Robotics Society / v.12 no.3 / pp.313-321 / 2017
  • One of the most frequently performed tasks in human-robot interaction (HRI), intelligent vehicles, and security systems is face-related applications such as face recognition, facial expression recognition, driver state monitoring, and gaze estimation. In these applications, accurate head pose estimation is an important issue. However, conventional methods lack accuracy, robustness, or processing speed in practical use. In this paper, we propose a novel method for estimating head pose with a monocular camera. The proposed algorithm is based on a deep neural network for multi-task learning using a small grayscale image. This network jointly detects multi-view faces and estimates head pose under harsh conditions such as illumination changes and large pose changes. The proposed framework quantitatively and qualitatively outperforms the state-of-the-art method, with a mean head pose error of less than 4.5° in real time.
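
To make the multi-task idea concrete, here is a hedged PyTorch sketch of a shared trunk on a small grayscale crop with two heads, one for face/non-face classification and one for yaw/pitch/roll regression. It mirrors the joint detection-plus-pose structure only; it is not the authors' architecture.

```python
# Minimal multi-task network: shared trunk, classification head, pose-regression head.
import torch
import torch.nn as nn

class FacePoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 128), nn.ReLU(),
        )
        self.face_head = nn.Linear(128, 2)   # face vs. non-face logits
        self.pose_head = nn.Linear(128, 3)   # yaw, pitch, roll

    def forward(self, x):                    # x: (N, 1, 32, 32) grayscale crops
        feat = self.trunk(x)
        return self.face_head(feat), self.pose_head(feat)

# Training would combine a classification loss with a pose-regression loss
# (the latter applied only to true-face samples).
model = FacePoseNet()
logits, angles = model(torch.randn(4, 1, 32, 32))
```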