• Title/Summary/Keyword: facial region detection


A Study on Enhancing the Performance of Detecting Lip Feature Points for Facial Expression Recognition Based on AAM (AAM 기반 얼굴 표정 인식을 위한 입술 특징점 검출 성능 향상 연구)

  • Han, Eun-Jung;Kang, Byung-Jun;Park, Kang-Ryoung
    • The KIPS Transactions:PartB
    • /
    • v.16B no.4
    • /
    • pp.299-308
    • /
    • 2009
  • AAM (Active Appearance Model) is an algorithm that extracts facial feature points using statistical models of shape and texture built with PCA (Principal Component Analysis). The method is widely used for face recognition, face modeling, and expression recognition. However, its detection performance is sensitive to the initial value, and the detection error grows when an input image differs substantially from the training data. In particular, the algorithm is accurate for closed lips, but the error increases for lips that are opened or deformed by the user's facial expression. To solve these problems, we propose an improved AAM algorithm that uses lip feature points extracted by a new lip detection algorithm. We first select a search region based on the face feature points detected by AAM. Lip corner points are then extracted in this region using Canny edge detection and histogram projection. Next, the lip region is accurately detected by combining the lip's color and edge information within a search region adjusted to the positions of the detected lip corners. As a result, both the accuracy and the processing speed of lip detection are improved. Experimental results showed that the RMS (Root Mean Square) error of the proposed method was reduced by as much as 4.21 pixels compared with using the AAM algorithm alone.
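The corner-localization step described above (Canny edges plus histogram projection inside an AAM-derived search region) can be sketched roughly as follows; the search-region coordinates, Canny thresholds, and the 20% edge-evidence cutoff are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

def locate_lip_corners(gray, search_rect):
    """Estimate left/right lip corners inside search_rect = (x, y, w, h)."""
    x, y, w, h = search_rect
    roi = gray[y:y + h, x:x + w]

    # Edge map of the candidate lip region (thresholds are assumptions).
    edges = cv2.Canny(roi, 50, 150)

    # Histogram projections: edge-pixel counts per column and per row.
    col_proj = edges.sum(axis=0)
    row_proj = edges.sum(axis=1)

    # Take the leftmost/rightmost columns with enough edge evidence,
    # at the row of maximum edge density, as the two lip corners.
    active_cols = np.where(col_proj > 0.2 * col_proj.max())[0]
    if active_cols.size == 0:
        return None
    corner_row = int(row_proj.argmax())
    left = (x + int(active_cols[0]), y + corner_row)
    right = (x + int(active_cols[-1]), y + corner_row)
    return left, right

# Usage (assumes a grayscale face image and a mouth region proposed by AAM):
# corners = locate_lip_corners(gray_face, (120, 200, 90, 40))
```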

Improving the Processing Speed and Robustness of Face Detection for a Psychological Robot Application (심리로봇적용을 위한 얼굴 영역 처리 속도 향상 및 강인한 얼굴 검출 방법)

  • Ryu, Jeong Tak;Yang, Jeen Mo;Choi, Young Sook;Park, Se Hyun
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.20 no.2
    • /
    • pp.57-63
    • /
    • 2015
  • Compared with other emotion recognition technologies, facial expression recognition has the advantages of being non-contact, non-intrusive, and convenient. To be applied to a psychological robot, the vision system must extract the face region quickly and accurately as the step preceding facial expression recognition. In this paper, we remove the background from an input image using YCbCr skin-color segmentation and then apply Haar-like features for robust face detection. Removing the background from the input image yielded both improved processing speed and robust face detection.
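A rough sketch of the two-stage idea, assuming standard OpenCV components: mask non-skin pixels with a YCbCr threshold, then run a Haar cascade on the skin-only image. The Cb/Cr bounds are common rule-of-thumb values, not the paper's.

```python
import cv2
import numpy as np

def detect_face_skin_first(bgr):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    # Typical skin-color bounds in (Y, Cr, Cb); treated as assumptions.
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    skin_mask = cv2.inRange(ycrcb, lower, upper)

    # Remove the background, keeping only skin-colored pixels.
    masked = cv2.bitwise_and(bgr, bgr, mask=skin_mask)

    gray = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# faces = detect_face_skin_first(cv2.imread("frame.jpg"))
```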

Exploring the Feasibility of Neural Networks for Criminal Propensity Detection through Facial Features Analysis

  • Amal Alshahrani;Sumayyah Albarakati;Reyouf Wasil;Hanan Farouquee;Maryam Alobthani;Someah Al-Qarni
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.5
    • /
    • pp.11-20
    • /
    • 2024
  • While artificial neural networks are adept at identifying patterns, they can struggle to distinguish actual correlations from false associations between extracted facial features and criminal behavior within the training data. These associations may not indicate causal connections. Socioeconomic factors, ethnicity, or even chance occurrences in the data can influence both facial features and criminal activity. Consequently, an artificial neural network might identify linked features without understanding the underlying cause, raising concerns about incorrect linkages and potential misclassification of individuals based on features unrelated to criminal tendencies. To address this challenge, we propose a novel region-based training approach for artificial neural networks focused on criminal propensity detection. Instead of relying solely on overall facial recognition, the network systematically analyzes each facial feature in isolation. This fine-grained approach enables the network to identify which specific features hold the strongest correlations with criminal activity within the training data. By focusing on these key features, the network can be optimized for more accurate and reliable criminal propensity prediction. This study examines the effectiveness of various algorithms for criminal propensity classification. We evaluate the YOLO versions YOLOv5 and YOLOv8 alongside VGG-16. Our findings indicate that YOLO achieved the highest accuracy (0.93) in classifying criminal and non-criminal facial features. While these results are promising, we acknowledge the need for further research on bias and misclassification in criminal justice applications.

A Hybrid Approach of Efficient Facial Feature Detection and Tracking for Real-time Face Direction Estimation (실시간 얼굴 방향성 추정을 위한 효율적인 얼굴 특성 검출과 추적의 결합방법)

  • Kim, Woonggi;Chun, Junchul
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.117-124
    • /
    • 2013
  • In this paper, we present a new method that efficiently estimates face direction from a sequence of input video images in real time. The proposed method first detects the facial region and the major facial features (both eyes, the nose, and the mouth) within the detected face area using Haar-like features, which are relatively insensitive to illumination changes. The feature points are then tracked in every frame using optical flow, and the face direction is determined from the tracked points. Furthermore, to avoid accepting false feature positions when coordinates are lost during optical-flow tracking, the method validates the feature locations in real time by template matching against the detected facial features. Depending on the correlation score of this template-matching check, the process either re-detects the facial features or continues tracking them while estimating the face direction. In the feature detection phase, the locations of four facial features (the left eye, right eye, nose tip, and mouth) are stored; when the similarity between the stored templates and the features traced by optical flow no longer satisfies a threshold, new facial features are detected from the input image and the stored information is re-evaluated. The proposed approach automatically alternates between the feature detection and feature tracking phases, enabling stable face pose estimation in real time. Experiments show that the proposed method estimates face direction efficiently.
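The detect-track-verify loop described in this abstract can be sketched as follows, assuming OpenCV's pyramidal Lucas-Kanade optical flow and normalized template matching; the patch size and correlation threshold are placeholders rather than the paper's settings.

```python
import cv2
import numpy as np

PATCH = 15            # half-size of the template stored at detection time
CORR_THRESHOLD = 0.6  # below this, fall back to re-detecting the features

def track_or_redetect(prev_gray, gray, points, templates):
    """points: Nx2 feature coordinates; templates: patches saved at detection."""
    pts = points.reshape(-1, 1, 2).astype(np.float32)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    new_pts = new_pts.reshape(-1, 2)

    for i, (x, y) in enumerate(new_pts):
        x, y = int(x), int(y)
        patch = gray[max(0, y - PATCH):y + PATCH, max(0, x - PATCH):x + PATCH]
        if status[i][0] == 0 or patch.shape != templates[i].shape:
            return None  # point lost -> caller re-runs Haar-based detection
        corr = cv2.matchTemplate(patch, templates[i],
                                 cv2.TM_CCOEFF_NORMED)[0, 0]
        if corr < CORR_THRESHOLD:
            return None  # stored template no longer matches -> re-detect
    return new_pts       # all four features verified; keep tracking
```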

Face Detection Algorithm for Automatic Teller Machine(ATM) (현금 인출기 적용을 위한 얼굴인식 알고리즘)

  • 이혁범;유지상
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.6B
    • /
    • pp.1041-1049
    • /
    • 2000
  • A face recognition algorithm for the user identification procedure of an automatic teller machine (ATM) is proposed in this paper as an application of still-image processing techniques. The proposed algorithm includes face region detection and eye and mouth detection schemes that can distinguish abnormal faces from normal faces. We define a normal (acceptable) face as one without sunglasses or a mask, and an abnormal (non-acceptable) face as one wearing either or both of them. The proposed algorithm is composed of three stages: face region detection, preprocessing for facial feature detection, and eye and mouth detection. Experimental results show that the proposed algorithm accurately distinguishes abnormal faces from normal faces on a restricted set of sample images.
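One way to sketch the normal/abnormal decision, using stock OpenCV Haar cascades in place of the paper's detectors: a face is accepted only if both eyes and a mouth are found inside the detected face region (sunglasses suppress eye detections, a mask suppresses the mouth). This rule is a simplification under those assumptions.

```python
import cv2

face_c = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_c = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
mouth_c = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

def is_acceptable_face(gray):
    faces = face_c.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return False
    x, y, w, h = faces[0]
    upper = gray[y:y + h // 2, x:x + w]       # eye search region
    lower = gray[y + h // 2:y + h, x:x + w]   # mouth search region
    eyes = eye_c.detectMultiScale(upper, 1.1, 5)
    mouth = mouth_c.detectMultiScale(lower, 1.1, 10)
    # Missing eyes suggest sunglasses; a missing mouth suggests a mask.
    return len(eyes) >= 2 and len(mouth) >= 1
```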


Face Detection using AdaBoost and ASM (AdaBoost와 ASM을 활용한 얼굴 검출)

  • Lee, Yong-Hwan;Kim, Heung-Jun
    • Journal of the Semiconductor & Display Technology
    • /
    • v.17 no.4
    • /
    • pp.105-108
    • /
    • 2018
  • Face detection is an essential first step of face recognition, and it strongly affects both facial feature extraction and recognition performance, giving it extensive research value and significance. In this paper, we present and analyze the principles, merits, and demerits of the classic AdaBoost face detector and of the ASM algorithm based on a point distribution model, where ASM compensates for the weaknesses of AdaBoost-based detection. The implemented scheme first uses the AdaBoost algorithm to detect a coarse face region from input images or a video stream. It then runs the ASM algorithm until convergence, fitting the face region detected by AdaBoost so that faces are located more accurately. Finally, it crops a facial region of the specified size based on the positioning coordinates of the eyes. Experimental results show that the method detects faces rapidly and precisely, with strong robustness.
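A sketch of this coarse-to-fine pipeline under stated assumptions: a Haar (AdaBoost) cascade proposes the face, a statistical shape model refines the landmarks, and the face is cropped at a fixed size anchored on the eyes. OpenCV's LBF facemark fitter (opencv-contrib, with an externally downloaded "lbfmodel.yaml") stands in for the paper's ASM.

```python
import cv2
import numpy as np

face_c = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
facemark = cv2.face.createFacemarkLBF()
facemark.loadModel("lbfmodel.yaml")  # assumed to be present locally

def cropped_face(gray, out_size=128):
    faces = face_c.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    ok, landmarks = facemark.fit(gray, faces[:1])
    if not ok:
        return None
    pts = landmarks[0][0]                        # 68 (x, y) landmark points
    left_eye = pts[36:42].mean(axis=0)
    right_eye = pts[42:48].mean(axis=0)
    center = (left_eye + right_eye) / 2
    half = int(np.linalg.norm(right_eye - left_eye))   # crop-scale heuristic
    x, y = int(center[0]) - half, int(center[1]) - half // 2
    crop = gray[max(0, y):y + 2 * half, max(0, x):x + 2 * half]
    return cv2.resize(crop, (out_size, out_size)) if crop.size else None
```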

Face Region Detection and Verification using both WPA and Spatially Restricted Statistic (공간 제약 특성과 WPA를 이용한 얼굴 영역 검출 및 검증 방법)

  • Song, Ho-Keun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.3
    • /
    • pp.542-548
    • /
    • 2006
  • In this paper, we propose a face region detection and verification method for frontal color images of human faces, using wavelet packet analysis and structural statistics. The method first extracts skin-color regions from the input image, then applies spatial restriction conditions to each region to decide whether it is a face candidate. In the second step, the eye region is located within the face candidate region using structural statistics for standard Korean faces. In the final step, the face region is verified via wavelet packet analysis by checking whether the facial texture satisfies normal texture conditions.
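The texture-verification idea can be illustrated as follows, with a standard 2-D wavelet decomposition (PyWavelets) simplifying the paper's wavelet packet analysis; the energy bounds are placeholder assumptions that would need tuning.

```python
import numpy as np
import pywt

def texture_energy(region, wavelet="db2", level=2):
    """Mean detail-band energy of a grayscale candidate region."""
    coeffs = pywt.wavedec2(region.astype(np.float32), wavelet, level=level)
    # Skip the approximation band; sum the energy of all detail bands.
    detail_energy = sum(float((band ** 2).sum())
                        for bands in coeffs[1:] for band in bands)
    return detail_energy / region.size

def looks_like_face_texture(region, low=5.0, high=500.0):
    # Accept a candidate only if its texture energy lies in a face-like range.
    return low < texture_energy(region) < high
```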

Face Detection for Cast Searching in Video (비디오 등장인물 검색을 위한 얼굴검출)

  • Paik Seung-ho;Kim Jun-hwan;Yoo Ji-sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.10C
    • /
    • pp.983-991
    • /
    • 2005
  • Human faces commonly appear in videos such as dramas and provide useful information for video content analysis. Face detection therefore plays an important role in applications such as face recognition and face image database management. In this paper, we propose a face detection algorithm that uses scene change detection as a pre-processing step for indexing and cast searching in video. The proposed algorithm consists of three stages: scene change detection, face region detection, and eye and mouth detection. Experimental results show that the proposed algorithm detects faces successfully over a wide range of facial variations in scale, rotation, pose, and position, and that performance on profile images is improved by 24% compared with conventional methods using color components.
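The scene-change pre-processing step might look like the following sketch: scene changes are flagged with a simple color-histogram distance, and face detection (here a stock Haar cascade, not the paper's detector) runs only on the first frame of each new scene. The Bhattacharyya threshold is an assumption.

```python
import cv2

SCENE_THRESHOLD = 0.4   # histogram distance above this marks a scene change
face_c = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def color_hist(frame):
    h = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    return cv2.normalize(h, h).flatten()

def faces_per_scene(video_path):
    cap = cv2.VideoCapture(video_path)
    prev_hist, results = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cur = color_hist(frame)
        changed = (prev_hist is None or
                   cv2.compareHist(prev_hist, cur,
                                   cv2.HISTCMP_BHATTACHARYYA) > SCENE_THRESHOLD)
        if changed:  # only run face detection at scene boundaries
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            results.append(face_c.detectMultiScale(gray, 1.1, 5))
        prev_hist = cur
    cap.release()
    return results
```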

Face Detection in Color Image

  • Chunlin Jino;Park, Yeongmi;Euiyoung Cha
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.10b
    • /
    • pp.559-561
    • /
    • 2003
  • Human face detection plays an important role in various applications. This paper proposes a face detection method based on skin-color information and facial features in color images. First, the RGB color space is transformed to YCbCr space and the skin region is extracted using skin-color information. Candidate regions where a face is likely to exist are then selected after a labeling process. Finally, facial features are detected within each face candidate. Experimental results show that the proposed method is effective.
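A small sketch of this pipeline, assuming common YCbCr skin bounds and a simple size/aspect-ratio filter (neither taken from the paper): skin segmentation, connected-component labeling, and candidate selection.

```python
import cv2

def face_candidates(bgr, min_area=400):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    # Rule-of-thumb skin bounds in (Y, Cr, Cb); treated as assumptions.
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    candidates = []
    for i in range(1, n):                 # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < min_area:
            continue
        if 0.5 < w / float(h) < 1.5:      # keep roughly face-shaped blobs
            candidates.append((x, y, w, h))
    return candidates
```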


Face Region Detection Algorithm using Euclidean Distance of Color-Image (칼라 영상에서 유클리디안 거리를 이용한 얼굴영역 검출 알고리즘)

  • Jung, Haing-sup;Lee, Joo-shin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.2 no.3
    • /
    • pp.79-86
    • /
    • 2009
  • This study proposes a method of detecting the facial area by computing Euclidean distances among skin-color elements and extracting facial characteristics. The proposed algorithm consists of light calibration and face detection. The light calibration step compensates for changes in illumination. The face detection step extracts skin-colored areas by computing, for each input image, Euclidean distances to characteristic vectors of color and chroma obtained from 20 skin-color sample images. Within the extracted facial-area candidates, the eyes are detected in the C channel of the CMY color model and the mouth in the Q channel of the YIQ color model, and the facial area is then determined using knowledge of typical face structure. In an experiment with 40 color face images as input, the method achieved a face detection rate of 100%.
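The Euclidean-distance skin test might be sketched as below: each pixel's chrominance is compared with a mean chrominance vector, and pixels within a distance threshold are kept. The mean and threshold here are placeholders, not the paper's statistics from 20 sample images, and the eye/mouth steps in the CMY and YIQ spaces are omitted.

```python
import cv2
import numpy as np

SKIN_MEAN = np.array([150.0, 105.0])   # assumed mean (Cr, Cb) of skin samples
DIST_THRESHOLD = 18.0                  # assumed Euclidean distance cutoff

def skin_mask_by_distance(bgr):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    crcb = ycrcb[:, :, 1:3]                        # drop the luminance channel
    dist = np.linalg.norm(crcb - SKIN_MEAN, axis=2)
    return (dist < DIST_THRESHOLD).astype(np.uint8) * 255
```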
