• Title/Summary/Keyword: real-time face detection

Effective Real-Time Gaze Identification Using a Bayesian Statistical Network (베이지안 통계적 방안 네트워크를 이용한 효과적인 실시간 시선 식별)

  • Kim, Sung-Hong;Seok, Gyeong-Hyu
    • The Journal of the Korea institute of electronic communication sciences / v.11 no.3 / pp.331-338 / 2016
  • In this paper, we propose a GRNN (Generalized Regression Neural Network) algorithm for a new eye and face recognition system that addresses a shortcoming of existing approaches, in which facial movement makes it difficult to identify the user's gaze. A Kalman filter uses the structural information of facial feature elements to verify that a face is genuine and estimates its future location from the current head position, while the horizontal and vertical elements of the face are detected with a comparatively fast histogram analysis. An infrared illuminator is configured so that the resulting pupil effect allows the pupil to be detected in real time, and pupil tracking is performed to extract feature vectors.
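
The abstract describes a Kalman filter that predicts the future face position from the current head location. Below is a minimal sketch of such a predictor, assuming a constant-velocity state model and illustrative noise values; it is not the authors' implementation.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for predicting the next
# face-centre position (x, y). State = [x, y, vx, vy]. The noise values
# below are illustrative assumptions, not taken from the paper.
class FacePositionKalman:
    def __init__(self, dt=1.0):
        self.x = np.zeros(4)                      # state estimate
        self.P = np.eye(4)                        # state covariance
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt  # motion model
        self.H = np.eye(2, 4)                     # we observe (x, y) only
        self.Q = 0.01 * np.eye(4)                 # process noise
        self.R = 1.0 * np.eye(2)                  # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                         # predicted (x, y)

    def update(self, measured_xy):
        z = np.asarray(measured_xy, dtype=float)
        y = z - self.H @ self.x                   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# usage: predict where the face will be, then correct with the detection
kf = FacePositionKalman()
for detection in [(100, 120), (104, 122), (109, 125)]:
    predicted = kf.predict()
    kf.update(detection)
```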

Face Detection Algorithm for Driver's Gesture Recognition (운전자 제스처 인식을 위한 얼굴 검출 알고리즘)

  • Han, Cheol-Hoon;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2008.04a / pp.7-10 / 2008
  • As the number of cars grows, traffic accidents increase accordingly. One of the main causes of traffic accidents is drowsy or inattentive driving. A service is therefore needed that recognizes the driver's gestures in real time and prevents accidents caused by drowsiness or inattention before they happen, supporting safer driving. In this paper, as a preprocessing step for driver gesture recognition, we introduce an algorithm that uses AdaBoost to robustly find the face region in images of the driver's upper body under complex backgrounds and varied conditions.
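
The paper applies an AdaBoost-based face detector to upper-body frames of the driver. A minimal sketch using OpenCV's stock Haar cascade (an AdaBoost cascade) is shown below; the cascade file and camera index are generic assumptions, not the authors' trained model.

```python
import cv2

# AdaBoost-based face detection on a driver's upper-body frame using
# OpenCV's bundled Haar cascade (a stock AdaBoost cascade, not the
# authors' classifier).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_driver_face(frame_bgr):
    """Return face bounding boxes (x, y, w, h) found in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)          # reduce lighting variation
    return cascade.detectMultiScale(gray, scaleFactor=1.1,
                                    minNeighbors=5, minSize=(40, 40))

# usage with a camera pointed at the driver (index 0 is an assumption)
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    for (x, y, w, h) in detect_driver_face(frame):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```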

Face Region Tracking Improvement and Hardware Implementation for AF(Auto Focusing) Using Face to ROI (얼굴을 관심 영역으로 사용하는 자동 초점을 위한 얼굴 영역 추적 향상 방법 및 하드웨어 구현)

  • Jeong, Hyo-Won;Ha, Joo-Young;Han, Hag-Yong;Yang, Hoon-Gee;Kang, Bong-Soon
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.1 / pp.89-96 / 2010
  • In this paper, we propose a method for improving the face-tracking efficiency of face detection in an AF system that uses detected faces as the ROI. A conventional skin-color-based face detection system tracks faces using the ratio of skin pixels in the present frame to the face regions detected in the past frame. That tracking method provides stable regions but poor tracking efficiency. We propose a tracking method that instead uses the area of overlap between the face regions detected in the past frame and the present frame. The improvement in tracking efficiency was demonstrated by filming real-time face detection with tracking and examining the motion traces of the detected faces.
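
The described tracker associates current detections with past detections by the area of their overlap. A minimal sketch of that matching step follows; the overlap-ratio threshold is an assumption for illustration.

```python
def overlap_area(box_a, box_b):
    """Intersection area of two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    return ix * iy

def track_faces(prev_faces, curr_faces, min_ratio=0.3):
    """Match each current detection to the previous-frame face whose
    overlap area (relative to the current box) is largest."""
    matches = []
    for cur in curr_faces:
        best, best_ratio = None, 0.0
        for prev in prev_faces:
            ratio = overlap_area(prev, cur) / float(cur[2] * cur[3])
            if ratio > best_ratio:
                best, best_ratio = prev, ratio
        matches.append((cur, best if best_ratio >= min_ratio else None))
    return matches

# usage: the second face moved slightly, the first disappeared
prev = [(10, 10, 50, 50), (200, 40, 60, 60)]
curr = [(205, 45, 60, 60)]
print(track_faces(prev, curr))
```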

Face and Facial Feature Detection under Pose Variation of User Face for Human-Robot Interaction (인간-로봇 상호작용을 위한 자세가 변하는 사용자 얼굴검출 및 얼굴요소 위치추정)

  • Park Sung-Kee;Park Mignon;Lee Taigun
    • Journal of Institute of Control, Robotics and Systems / v.11 no.1 / pp.50-57 / 2005
  • We present a simple and effective method for detecting the face and facial features of a user under pose variation in a complex background for human-robot interaction. Our approach is flexible in that it can be applied to both color and gray facial images, and it is feasible for detecting facial features in quasi real time. Based on the intensity characteristics of the neighborhood of each facial feature, a new directional template for facial features is defined. Applying this template to an input facial image yields a novel edge-like blob map (EBM) with multiple intensity strengths. Using this map together with conditions on facial characteristics, and regardless of the color information of the input image, we show that the locations of the face and its features (two eyes and a mouth) can be estimated successfully. Without information on the facial area boundary, the final candidate face region is determined from both the obtained feature locations and weighted correlation values with standard facial templates. Experimental results on many color images and well-known gray-level face database images confirm the usefulness of the proposed algorithm.
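
One step of the abstract scores candidate face regions by their correlation with standard facial templates. The sketch below illustrates only that scoring step with normalized cross-correlation; the fixed template size and the generic template are assumptions, and it does not implement the paper's directional templates or edge-like blob map.

```python
import cv2
import numpy as np

# Scoring candidate face regions against a standard face template with
# normalized cross-correlation (an illustrative stand-in for the paper's
# weighted correlation step).
TEMPLATE_SIZE = (64, 64)

def template_correlation(gray_image, candidate_box, template_gray):
    """Normalized cross-correlation between a candidate region and the template."""
    x, y, w, h = candidate_box
    patch = cv2.resize(gray_image[y:y + h, x:x + w], TEMPLATE_SIZE)
    tmpl = cv2.resize(template_gray, TEMPLATE_SIZE)
    score = cv2.matchTemplate(patch, tmpl, cv2.TM_CCOEFF_NORMED)
    return float(score[0, 0])

def pick_face(gray_image, candidate_boxes, template_gray):
    """Choose the candidate with the highest correlation score."""
    scores = [template_correlation(gray_image, b, template_gray)
              for b in candidate_boxes]
    return candidate_boxes[int(np.argmax(scores))]

# usage with synthetic data: the best-matching candidate is returned
img = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
tmpl = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(pick_face(img, [(10, 10, 80, 80), (150, 60, 90, 90)], tmpl))
```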

Facial Expression Recognition using Face Alignment and AdaBoost (얼굴정렬과 AdaBoost를 이용한 얼굴 표정 인식)

  • Jeong, Kyungjoong;Choi, Jaesik;Jang, Gil-Jin
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.11 / pp.193-201 / 2014
  • This paper presents a facial expression recognition system built from face detection, face alignment, facial-unit extraction, and training and testing algorithms based on AdaBoost classifiers. First, the face region is located by a face detector, and a face alignment algorithm extracts feature points from the result. The facial units are a subset of action units generated by combining the obtained feature points. These facial units are generally more effective for smaller databases, represent facial expressions more efficiently, and reduce computation time, so the method can be applied in real-time scenarios. Experimental results in real scenarios show that the proposed system performs well, with recognition rates above 90%.
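
The pipeline trains AdaBoost classifiers on features derived from aligned feature points. A minimal sketch is given below; using pairwise landmark distances as the feature vector and random toy data are assumptions standing in for the paper's action-unit-based facial units.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Expression classification with AdaBoost over landmark-derived features.
def facial_unit_features(landmarks):
    """landmarks: (N, 2) array of aligned feature points -> distance features."""
    pts = np.asarray(landmarks, dtype=float)
    diffs = pts[:, None, :] - pts[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(pts), k=1)
    return dists[iu]                      # flattened upper-triangle distances

# toy training data: random landmark sets with random expression labels
rng = np.random.default_rng(0)
X = np.stack([facial_unit_features(rng.normal(size=(68, 2)))
              for _ in range(40)])
y = rng.integers(0, 3, size=40)           # e.g. 3 expression classes

clf = AdaBoostClassifier(n_estimators=100)
clf.fit(X, y)
print(clf.predict(X[:2]))
```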

Face Detection Using Shapes and Colors in Various Backgrounds

  • Lee, Chang-Hyun;Lee, Hyun-Ji;Lee, Seung-Hyun;Oh, Joon-Taek;Park, Seung-Bo
    • Journal of the Korea Society of Computer and Information / v.26 no.7 / pp.19-27 / 2021
  • In this paper, we propose a method for detecting characters in images and locating their facial regions, which consists of two tasks. First, to detect the face positions of the characters in a frame, we separate the two different characters. For fast detection, we use You Only Look Once (YOLO), which finds faces in the image in real time, to extract face locations and mark them with object detection boxes. Second, we present three image processing methods that detect an accurate face area based on the object detection boxes. Each method detects a character's face region using HSV values extracted from the region estimated by a detection shape, and we vary the size and shape of that detection shape to compare the accuracy of each method. The face detection methods are compared and analyzed against comparative data and image processing data for reliability verification. As a result, we achieved the highest accuracy, 87%, with the split rectangular method among the circular, rectangular, and split rectangular methods.
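
The second stage refines a detector's box using HSV values inside the box. A minimal sketch in that spirit is shown below; the HSV skin thresholds and morphological clean-up are generic assumptions, not the paper's tuned ranges or its circular/rectangular detection shapes.

```python
import cv2
import numpy as np

# Refining a face box from an object detector with HSV skin masking.
SKIN_LOW = np.array([0, 40, 60], dtype=np.uint8)    # illustrative thresholds
SKIN_HIGH = np.array([25, 180, 255], dtype=np.uint8)

def refine_face_region(frame_bgr, box):
    """box: (x, y, w, h) from the detector. Returns the tight bounding box of
    skin-coloured pixels inside the box, or the box itself if none are found."""
    x, y, w, h = box
    roi = frame_bgr[y:y + h, x:x + w]
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LOW, SKIN_HIGH)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return box
    return (x + xs.min(), y + ys.min(),
            xs.max() - xs.min() + 1, ys.max() - ys.min() + 1)

# usage on a dummy frame with a detector box
frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(refine_face_region(frame, (100, 100, 120, 150)))
```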

Analysis of Understanding Using Deep Learning Facial Expression Recognition for Real Time Online Lectures (딥러닝 표정 인식을 활용한 실시간 온라인 강의 이해도 분석)

  • Lee, Jaayeon;Jeong, Sohyun;Shin, You Won;Lee, Eunhye;Ha, Yubin;Choi, Jang-Hwan
    • Journal of Korea Multimedia Society / v.23 no.12 / pp.1464-1475 / 2020
  • Due to the spread of COVID-19, online lectures have become more prevalent. However, many students and professors report a lack of communication. This study is therefore designed to improve interactive communication between professors and students in real-time online lectures. To do so, we explore deep learning approaches that automatically recognize students' facial expressions and classify their understanding into three classes (Understand / Neutral / Not Understand). We use the 'BlazeFace' model for face detection and a 'ResNet-GRU' model for facial expression recognition (FER). We name this entire process the 'Degree of Understanding (DoU)' algorithm. The DoU algorithm can analyze many students collectively and present the results as visualized statistics. To our knowledge, this is the first study to offer statistics of lecture understanding using FER. The algorithm achieved a speed of 0.098 sec/frame with a high accuracy of 94.3% in a CPU environment, demonstrating its potential for real-time online lectures. The DoU algorithm can be extended to other fields where facial expressions play an important role in communication, such as interaction with hearing-impaired people.
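
The classifier combines a per-frame CNN with a GRU over time. Below is a rough PyTorch sketch of a ResNet-GRU three-class head in that spirit; the backbone choice, hidden size, and input resolution are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models   # requires a recent torchvision (>= 0.13)

# A ResNet backbone encodes each face crop, a GRU aggregates the per-frame
# features over time, and a linear head predicts one of three classes
# (Understand / Neutral / Not Understand).
class ResNetGRU(nn.Module):
    def __init__(self, hidden=128, num_classes=3):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()               # keep the 512-d features
        self.backbone = backbone
        self.gru = nn.GRU(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):
        # clips: (batch, time, 3, H, W) sequence of face crops
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1))    # (b*t, 512)
        feats = feats.view(b, t, -1)
        _, last = self.gru(feats)                     # final hidden state
        return self.head(last[-1])                    # (b, num_classes)

# usage on a dummy 8-frame clip of 112x112 face crops
model = ResNetGRU()
logits = model(torch.randn(1, 8, 3, 112, 112))
print(logits.shape)   # torch.Size([1, 3])
```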

Facial Shape Recognition Using Self Organized Feature Map(SOFM)

  • Kim, Seung-Jae;Lee, Jung-Jae
    • International journal of advanced smart convergence / v.8 no.4 / pp.104-112 / 2019
  • This study proposes a robust detection algorithm that identifies a face shape more stably under changes in lighting and rotation. The proposed algorithm takes the face shape as input in a single-camera environment and isolates only the face area through a preprocessing step. However, it is not easy to accurately recognize a face area that is sensitive to lighting changes and has many degrees of freedom, and the error range is large. In this paper, we separate the background and the face area using the brightness difference between two images, one taken under bright light and one taken under dark light, to increase the recognition rate. After isolating the face region, the face shape is recognized with a self-organizing feature map (SOFM) algorithm. SOFM first selects an initial winning neuron through a learning step. Second, the winning neuron is updated by competition between it and its neighboring neurons. Third, the final winning neuron is selected by repeating the learning and competition steps. The competition also goes through a three-step learning process to ensure that the winning neurons are updated properly among the neurons. Using this SOFM neural-network algorithm, we aim to implement a stable and robust real-time face-shape recognition system.
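
The SOFM procedure described here (pick the winning neuron, update it and its neighbors, repeat) can be sketched in a few lines of NumPy. The grid size, learning schedule, and random stand-in face-shape vectors below are assumptions for illustration.

```python
import numpy as np

# Minimal self-organizing feature map (SOFM): find the best-matching neuron,
# then pull it and its grid neighbours toward the input, shrinking the
# learning rate and neighbourhood over time.
rng = np.random.default_rng(0)
grid_h, grid_w, dim = 8, 8, 64            # 8x8 map of 64-d face-shape vectors
weights = rng.random((grid_h, grid_w, dim))
data = rng.random((200, dim))             # stand-in for face-shape features

coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)   # (h, w, 2) grid coords

epochs = 20
for epoch in range(epochs):
    lr = 0.5 * (1 - epoch / epochs)               # decaying learning rate
    sigma = max(1.0, 4.0 * (1 - epoch / epochs))  # decaying neighbourhood
    for x in data:
        dists = np.linalg.norm(weights - x, axis=-1)
        winner = np.unravel_index(np.argmin(dists), dists.shape)
        grid_dist = np.linalg.norm(coords - np.array(winner), axis=-1)
        influence = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
        weights += lr * influence[..., None] * (x - weights)

# after training, each face-shape vector maps to its best-matching neuron
winner = np.unravel_index(
    np.argmin(np.linalg.norm(weights - data[0], axis=-1)), (grid_h, grid_w))
print("sample 0 maps to neuron", winner)
```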

Face Detection Algorithm for Video Conference Camera Control (화상회의 카메라 제어를 위한 안면 검출 알고리듬)

  • 온승엽;박재현;박규식;이준희
    • Proceedings of the IEEK Conference / 2000.06d / pp.218-221 / 2000
  • In this paper, we propose a new algorithm that detects human faces for controlling a camera used in video conferencing. We model the distribution of skin color and define a standard skin color in the YIQ color space. An input video frame is segmented into skin and non-skin segments by comparing each pixel with the standard skin color. A shape filter is then applied to select face segments from the skin segments. Our algorithm detects human faces in real time so that the camera can capture a human face with the proper size and position.
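
The first stage compares each pixel with a standard skin color in YIQ space. A minimal sketch of that segmentation follows; the I-channel thresholds are illustrative assumptions, not the paper's modeled skin distribution.

```python
import numpy as np

# Skin segmentation in YIQ space: convert RGB to YIQ with the standard NTSC
# matrix and keep pixels whose I (in-phase) component falls in a skin range.
RGB_TO_YIQ = np.array([[0.299, 0.587, 0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523, 0.312]])

def skin_mask_yiq(rgb_image, i_min=20.0, i_max=90.0):
    """rgb_image: (H, W, 3) uint8 array. Returns a boolean skin mask."""
    yiq = rgb_image.astype(float) @ RGB_TO_YIQ.T
    i_channel = yiq[..., 1]
    return (i_channel >= i_min) & (i_channel <= i_max)

# usage on a random image; real use would feed a video-conference frame
frame = np.random.randint(0, 256, size=(240, 320, 3), dtype=np.uint8)
mask = skin_mask_yiq(frame)
print("skin pixel ratio:", mask.mean())
```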

A Study on Detection and Recognition of Facial Area Using Linear Discriminant Analysis

  • Kim, Seung-Jae
    • International journal of advanced smart convergence / v.7 no.4 / pp.40-49 / 2018
  • We propose a more stable and robust recognition algorithm that detects faces reliably even when lighting and viewing angle change, while remaining efficient in computation and detection performance. The proposed method detects only the face area after normalization in a preprocessing step and obtains a feature vector using PCA. The feature vector is projected with LDA, and the final analysis and matching are performed in the two-dimensional space using the Euclidean distance derived from intra-class and inter-class variance. Experimental results show that the proposed method yields a wider distribution when the input image is rotated 45° left or right. The recognition rate can be improved by applying this feature value to both single and combined algorithms, and real-time recognition is possible because the dimensionality reduction keeps the computational cost low.
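
The pipeline reduces face images with PCA, projects the result with LDA, and matches by Euclidean distance in the reduced space. A minimal scikit-learn sketch of that flow is given below; the toy data, component counts, and the three identity classes are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# PCA features projected with LDA, matched by Euclidean distance in 2-D.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 32 * 32))        # 60 normalized face images, flattened
y = np.repeat(np.arange(3), 20)           # 3 identities, 20 samples each

pca = PCA(n_components=20).fit(X)
lda = LinearDiscriminantAnalysis(n_components=2).fit(pca.transform(X), y)
gallery = lda.transform(pca.transform(X)) # 2-D features for enrolled faces

def match(face_vector):
    """Return the identity of the nearest gallery sample (Euclidean distance)."""
    probe = lda.transform(pca.transform(face_vector.reshape(1, -1)))
    distances = np.linalg.norm(gallery - probe, axis=1)
    return y[int(np.argmin(distances))]

print(match(X[0]))    # should recover identity 0
```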