Title/Summary/Keyword: Facial Detection


Invariant Range Image Multi-Pose Face Recognition Using Fuzzy c-Means

  • Phokharatkul, Pisit; Pansang, Seri
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2005.06a / pp.1244-1248 / 2005
  • In this paper, we propose fuzzy c-means (FCM) clustering to reduce recognition errors in invariant range-image, multi-pose face recognition. Scale, center, and pose errors were corrected with geometric transformations. Face data were digitized into range images using a laser range finder, which does not depend on the ambient light source. The digitized range-image face data were then used as a model to generate multi-pose data, and each pose was reduced in size by linear reduction and stored in the database. The reduced range-image data were transformed into a gradient face model for facial feature extraction and for matching using fuzzy memberships adjusted by fuzzy c-means. The proposed method was tested on facial range images of 40 people with normal facial expressions. The detection and recognition accuracy of the system is about 93 percent, while remaining robust to typical image-acquisition problems such as noise, vertically rotated faces, and limited range resolution.
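
Fuzzy c-means assigns each sample a graded membership in every cluster instead of a hard label, and those memberships drive the matching step described above. Below is a minimal NumPy sketch of the standard FCM update equations; the feature vectors, cluster count, and fuzzifier m are illustrative assumptions rather than the paper's actual configuration.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, tol=1e-6, seed=0):
    """Standard fuzzy c-means: returns cluster centers and the membership matrix U."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # memberships of each sample sum to 1

    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]        # weighted cluster centers
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        ratio = (dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1.0))
        U_new = 1.0 / ratio.sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Toy usage: cluster hypothetical 16-dimensional face descriptors and read off
# the fuzzy memberships of the first sample.
X = np.random.default_rng(1).random((40, 16))
centers, U = fuzzy_c_means(X, c=4)
print(U[0], U[0].argmax())
```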

A Study on Vector-based Automatic Caricature Generation (벡터기반의 캐리커처 자동생성에 관한 연구)

  • Park, Yeon-Chool; Oh, Hae-Seok
    • The KIPS Transactions: Part B / v.10B no.6 / pp.647-656 / 2003
  • This paper proposes a system that automatically generates a caricature (a character's face) resembling a human face from extracted facial features. Because the system is vector-based, the generated face has no size limits or constraints, so its shape can be transformed freely and various facial expressions can be applied to the 2D face. Moreover, thanks to its small file size, the vector output is well suited to mobile environments.
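
The entry above emphasizes that a vector representation keeps the caricature resolution-independent and compact. As a rough illustration of that idea only (a generic feature-exaggeration plus SVG-output sketch, not the authors' pipeline), the snippet below pushes hypothetical facial feature points away from a mean-face template and writes them out as SVG polylines; all coordinates and the exaggeration factor are placeholders.

```python
import numpy as np

def exaggerate(points, mean_shape, k=1.5):
    """Push landmarks away from a mean-face shape to caricature them (k > 1)."""
    return mean_shape + k * (points - mean_shape)

def to_svg(polylines, width=400, height=400):
    """Render lists of 2D points as an SVG document: scalable, tiny file size."""
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">']
    for pts in polylines:
        coords = " ".join(f"{x:.1f},{y:.1f}" for x, y in pts)
        parts.append(f'<polyline points="{coords}" fill="none" stroke="black"/>')
    parts.append("</svg>")
    return "\n".join(parts)

# Hypothetical landmarks for one eyebrow; real input would come from a facial
# feature extractor.
mean_brow = np.array([[100, 120], [130, 110], [160, 120]], dtype=float)
detected  = np.array([[100, 115], [130, 100], [160, 118]], dtype=float)
print(to_svg([exaggerate(detected, mean_brow)]))
```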

Real-Time Face Avatar Creation and Warping Algorithm Using Local Mean Method and Facial Feature Point Detection

  • Lee, Eung-Joo; Wei, Li
    • Journal of Korea Multimedia Society / v.11 no.6 / pp.777-786 / 2008
  • Face avatars are important today, for example for representing real people in virtual worlds. In this paper, we present a face avatar creation and warping algorithm based on facial feature analysis. To detect facial features, we use a local mean method based on facial feature appearance and facial geometric information, and detect facial candidates from their characteristics in the YCbCr color space. We also define rules based on facial geometry to limit the search range. Facial feature points are then used to describe the features, and the geometric relationships among these points are analyzed to create the face avatar. We carried out simulations on a PC and on embedded mobile devices such as PDAs and mobile phones to evaluate the efficiency of the proposed algorithm. The results confirm that the proposed algorithm performs well and that its execution speed is acceptable.
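
One step above is detecting facial candidates from skin-tone characteristics in the YCbCr color space. A minimal OpenCV sketch of that general step follows; the Cb/Cr thresholds are commonly quoted illustrative values, not the ones used in the paper, and "face.jpg" is a placeholder input.

```python
import cv2
import numpy as np

def skin_candidates(bgr):
    """Binary mask of likely skin pixels from fixed Cr/Cb ranges in YCbCr space."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)      # OpenCV channel order: Y, Cr, Cb
    lower = np.array([0, 133, 77], dtype=np.uint8)      # illustrative thresholds; tune per dataset
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

bgr = cv2.imread("face.jpg")                            # hypothetical input image
mask = skin_candidates(bgr)
```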

Facial Image Segmentation using Wavelet Transform (웨이브렛 변환을 적용한 얼굴영상분할)

  • 김장원;박현숙;김창석
    • Journal of the Institute of Electronics Engineers of Korea TE / v.37 no.3 / pp.45-52 / 2000
  • In this study, we propose an image segmentation algorithm for facial region segmentation. The algorithm uses the HWT to separate the mean image of the low-frequency band from the differential image of the high-frequency bands in order to form boundaries, and then removes isolated pixels, projection pixels, and overlapping boundary pixels from the low-frequency band. The boundaries are detected and simplified by the proposed boundary detection algorithm and refined by a one-pixel thinning process. After extracting the facial boundary, we build a mask and segment the facial region by matching it against the original image. In facial region segmentation experiments, the proposed algorithm achieved a segmentation rate of 95.88%.
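
Assuming HWT here refers to the Haar wavelet transform, the split into a low-frequency mean band and high-frequency detail bands can be sketched with PyWavelets as below; the thresholding that turns the detail bands into a rough boundary map is an illustrative stand-in for the paper's boundary detection and thinning steps.

```python
import numpy as np
import pywt

def haar_bands(gray):
    """One-level 2D Haar DWT: returns the mean (LL) band and the detail bands."""
    LL, (LH, HL, HH) = pywt.dwt2(gray.astype(float), "haar")
    return LL, LH, HL, HH

def rough_boundary_map(gray, t=20.0):
    """Combine detail-band magnitudes and threshold them into a crude boundary map."""
    _, LH, HL, HH = haar_bands(gray)
    detail = np.sqrt(LH ** 2 + HL ** 2 + HH ** 2)
    return (detail > t).astype(np.uint8)        # half-resolution boundary image
```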

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems / v.17 no.2 / pp.337-351 / 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed, and a double-layer cascade structure is used to detect the face in each video image. Two deep convolutional neural networks are then used to extract the temporal and spatial facial features: a spatial convolutional neural network extracts spatial information from each static expression frame, while a temporal convolutional neural network extracts dynamic information from the optical flow computed over multiple expression frames. The spatiotemporal features learned by the two networks are combined by multiplicative fusion. Finally, the fused features are fed to a support vector machine to perform facial expression classification. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the proposed method achieves recognition rates of 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that it obtains higher recognition accuracy than other recently reported methods.
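
The fusion-and-classification stage above (element-wise multiplication of the two CNN feature streams, followed by an SVM) is easy to sketch in isolation. In the snippet below, random arrays stand in for the spatial-CNN and temporal-CNN features, and the scikit-learn SVM is an assumed stand-in for whatever SVM implementation the authors used.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_clips, dim, n_classes = 200, 256, 6

# Placeholder features: in the method these come from the spatial CNN (static
# frames) and the temporal CNN (optical-flow stacks).
spatial_feats = rng.random((n_clips, dim))
temporal_feats = rng.random((n_clips, dim))
labels = rng.integers(0, n_classes, n_clips)

fused = spatial_feats * temporal_feats          # multiplicative (element-wise) fusion

clf = SVC(kernel="rbf", C=1.0)
clf.fit(fused[:150], labels[:150])              # train on the first 150 clips
print("held-out accuracy:", clf.score(fused[150:], labels[150:]))
```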

Robust 3D Facial Landmark Detection Using Angular Partitioned Spin Images (각 분할 스핀 영상을 사용한 3차원 얼굴 특징점 검출 방법)

  • Kim, Dong-Hyun; Choi, Kang-Sun
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.5 / pp.199-207 / 2013
  • Spin images, which efficiently represent the surface features of 3D mesh models, have been used to detect facial landmark points. However, at a given point, a different normal direction can lead to quite different spin images, and because 3D points are projected onto the 2D (α-β) space during spin image generation, surface features cannot be described distinctly. In this paper, we present a method for detecting 3D facial landmarks using improved spin images obtained by partitioning the search area with respect to angle. By generating sub-spin images for angularly partitioned 3D regions, more distinctive features describing the corresponding surfaces are obtained, improving landmark detection performance. To make the spin images robust to inaccurate surface normal directions, we average each surface normal with its neighboring normal vectors. Experimental results show that the proposed method increases landmark detection accuracy by about 34% over a conventional method.
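
A spin image records, for each neighbor x of an oriented point (p, n), the pair α (distance from the axis through p along n) and β (signed height along n); the refinement described above additionally splits neighbors into angular sectors around that axis to build sub-spin images. The NumPy sketch below computes those quantities; the sector count and the choice of tangent frame are illustrative.

```python
import numpy as np

def spin_coordinates(points, p, n, n_sectors=4):
    """Spin-image coordinates (alpha, beta) of `points` around the oriented point
    (p, n), plus each point's angular sector for angle-partitioned sub-spin images."""
    n = n / np.linalg.norm(n)
    d = points - p
    beta = d @ n                                  # signed height along the normal
    radial = d - np.outer(beta, n)                # component in the tangent plane
    alpha = np.linalg.norm(radial, axis=1)        # distance from the normal axis

    # Arbitrary tangent frame (t1, t2) used only to measure the angle around the axis.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(n @ helper) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    t1 = np.cross(n, helper)
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)
    phi = np.arctan2(radial @ t2, radial @ t1)
    sector = ((phi + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    return alpha, beta, sector
```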

Gaze Detection by Computing Facial and Eye Movement (얼굴 및 눈동자 움직임에 의한 시선 위치 추적)

  • 박강령
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.2 / pp.79-88 / 2004
  • Gaze detection uses computer vision to locate the position on a monitor screen where a user is looking. Gaze detection systems have numerous fields of application, such as man-machine interfaces that help the handicapped use computers and view control in three-dimensional simulation programs. In our work, we implement gaze detection with a computer vision system using a single IR-LED-based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED-based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3D positions of those features from 3D rotation and translation estimation and an affine transform. The gaze position due to facial movement is then computed from the normal vector of the plane determined by the computed 3D feature positions. In addition, a trained neural network detects the gaze position due to eye movement. Experimentally, we obtain the facial and eye gaze position on the monitor with an RMS error of about 4.8 cm between the computed and actual positions.
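
The facial-movement part of the gaze estimate above reduces to computing the normal of the plane spanned by the 3D feature positions and intersecting a ray along that normal with the screen. A minimal geometric sketch follows; the monitor is assumed to lie in the plane z = 0, and all coordinates are hypothetical.

```python
import numpy as np

def face_plane_normal(p1, p2, p3):
    """Unit normal of the plane through three 3D facial feature points."""
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

def gaze_on_screen(origin, normal):
    """Intersect the gaze ray origin + t*normal with the monitor plane z = 0."""
    t = -origin[2] / normal[2]        # assumes the normal is not parallel to the screen
    return origin + t * normal

# Hypothetical 3D feature positions (cm) in camera coordinates.
left_eye  = np.array([-3.0,  2.0, 60.0])
right_eye = np.array([ 3.0,  2.0, 60.0])
mouth     = np.array([ 0.0, -4.0, 62.0])
n = face_plane_normal(left_eye, right_eye, mouth)
if n[2] > 0:                          # flip so the normal points toward the screen (z = 0)
    n = -n
center = (left_eye + right_eye + mouth) / 3.0
print(gaze_on_screen(center, n))      # approximate gaze point on the screen plane
```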

Development of a Deep Learning-Based Automated Analysis System for Facial Vitiligo Treatment Evaluation (안면 백반증 치료 평가를 위한 딥러닝 기반 자동화 분석 시스템 개발)

  • Sena Lee; Yeon-Woo Heo; Solam Lee; Sung Bin Park
    • Journal of Biomedical Engineering Research / v.45 no.2 / pp.95-100 / 2024
  • Vitiligo is a condition characterized by the destruction or dysfunction of melanin-producing cells in the skin, resulting in a loss of skin pigmentation. Facial vitiligo significantly affects patients' appearance, thereby diminishing their quality of life. Evaluating the efficacy of facial vitiligo treatment typically relies on subjective assessments such as the Facial Vitiligo Area Scoring Index (F-VASI), which can be time-consuming because it depends on clinical observations such as lesion shape and distribution. Various machine learning and deep learning methods have been proposed for segmenting vitiligo areas in facial images, showing promising results; however, these methods often struggle to accurately segment vitiligo lesions irregularly distributed across the face. Our study therefore introduces a framework aimed at improving the segmentation of facial vitiligo lesions and providing an evaluation of them. The framework consists of three main steps. First, we perform face detection to minimize background areas and identify the facial area of interest in high-quality ultraviolet photographs. Second, we extract facial area masks and vitiligo lesion masks using a semantic segmentation network trained on the generated dataset. Third, we automatically calculate the vitiligo area relative to the facial area. We evaluated facial and vitiligo lesion segmentation on an independent test dataset not included in training and validation, with excellent results. The proposed framework can serve as a useful tool for evaluating the diagnosis and treatment efficacy of vitiligo.
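
The third step above, computing the vitiligo area relative to the facial area, is a simple ratio once the two segmentation masks are available. A minimal sketch with placeholder mask names:

```python
import numpy as np

def vitiligo_area_ratio(face_mask, lesion_mask):
    """Fraction of the segmented facial area covered by vitiligo lesions.
    Both inputs are binary (0/1 or boolean) arrays of the same shape."""
    face = face_mask.astype(bool)
    lesion = lesion_mask.astype(bool) & face        # count only lesions inside the face
    return lesion.sum() / max(int(face.sum()), 1)   # guard against an empty face mask
```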

Real Time Face Detection with TS Algorithm in Mobile Display (모바일 디스플레이에서 TS 알고리즘을 이용한 실시간 얼굴영역 검출)

  • Lee, Yong-Hwan; Kim, Young-Seop; Rhee, Sang-Bum; Kang, Jung-Won; Park, Jin-Yang
    • Journal of the Semiconductor & Display Technology / v.4 no.1 s.10 / pp.61-64 / 2005
  • This study presents a new algorithm for detecting the facial region in color images captured by a mobile device, with complex backgrounds and an undefined distance between the camera and the face. Because a skin-color model combined with the Hough transform spends approximately 90% of its running time fitting an ellipse to detect the facial region, we replace that step with a simple geometric vector operation called the TS (Triangle-Square) transformation. Experimental results show that this reduces the running time while achieving a face detection rate similar to other methods, fast enough for real-time identification systems in mobile environments.
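
The abstract does not spell out the TS transformation itself, so the sketch below only illustrates the surrounding idea: extract the dominant skin-colored component cheaply and accept it as a face candidate from simple bounding-box geometry, which stands in for the paper's triangle/square test. Thresholds and acceptance ranges are illustrative.

```python
import cv2
import numpy as np

def face_candidate(bgr):
    """Largest skin-colored component, accepted by a cheap bounding-box geometry test."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, np.array([0, 133, 77], np.uint8),
                       np.array([255, 173, 127], np.uint8))
    count, labels, stats, _ = cv2.connectedComponentsWithStats(skin)
    if count < 2:                                         # only background found
        return None
    i = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))   # largest non-background blob
    x, y, w, h, area = stats[i]
    aspect, fill = h / float(w), area / float(w * h)
    # Illustrative acceptance ranges; the paper's TS test is a different, faster check.
    return (x, y, w, h) if 0.8 < aspect < 2.0 and fill > 0.4 else None
```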

Detection of eye using optimal edge technique and intensity information (눈 영역에 적합한 에지 추출과 밝기값 정보를 이용한 눈 검출)

  • Mun, Won-Ho; Choi, Yeon-Seok; Kim, Cheol-Ki; Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2010.10a / pp.196-199 / 2010
  • The human eyes are important facial landmarks for image normalization because of their relatively constant interocular distance. This paper introduces a novel approach to eye detection using an optimal segmentation method for eye representation. The method consists of three steps: (1) an edge extraction method that accurately extracts the eye region from the gray-scale face image, (2) extraction of the eye region using a labeling method, and (3) eye localization based on intensity information. Experimental results show a correct eye detection rate of 98.9% on 2,408 FERET images with variations in lighting conditions and facial expressions.
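
The three steps listed above map naturally onto standard tools: an edge map, connected-component labeling, and ranking of candidate regions by intensity (eye regions tend to be dark). The sketch below uses OpenCV's Canny detector and SciPy labeling as stand-ins for the paper's optimized edge technique; all thresholds are illustrative.

```python
import cv2
import numpy as np
from scipy import ndimage

def detect_eye_candidates(gray, edge_t=80, max_candidates=4):
    """Edge extraction -> connected-component labeling -> darkest regions first."""
    edges = cv2.Canny(gray, edge_t, 2 * edge_t)                       # step 1: edge map
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    labels, count = ndimage.label(closed)                             # step 2: labeling
    if count == 0:
        return []
    # Step 3: rank candidate regions by mean gray level (eyes are dark).
    means = ndimage.mean(gray, labels, index=np.arange(1, count + 1))
    darkest = np.argsort(means)[:max_candidates] + 1
    centers = ndimage.center_of_mass(closed, labels, index=darkest)
    return [(int(round(cx)), int(round(cy))) for cy, cx in centers]
```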
