• Title/Summary/Keyword: Facial range image

Invariant Range Image Multi-Pose Face Recognition Using Fuzzy c-Means

  • Phokharatkul, Pisit;Pansang, Seri
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.1244-1248
    • /
    • 2005
  • In this paper, we propose fuzzy c-means (FCM) to solve recognition errors in invariant range image, multi-pose face recognition. Scale, center, and pose errors were corrected using geometric transformations. Face data were digitized into range images with a laser range finder, which does not depend on the ambient light source. The digitized range image data were then used as a model to generate multi-pose data, and each pose's data size was reduced by linear reduction before storage in the database. The reduced range image data were transformed into gradient face models for facial feature extraction and for matching with fuzzy membership values adjusted by fuzzy c-means. The proposed method was tested on facial range images of 40 people with normal facial expressions. The detection and recognition system achieved an accuracy of about 93 percent while remaining robust to typical image-acquisition problems such as noise, vertically rotated faces, and limited range resolution.
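The matching step adjusts fuzzy membership values with fuzzy c-means. The paper's gradient-face features are not reproduced here; the following is a minimal generic FCM sketch on toy 2-D feature vectors (the cluster count, fuzzifier `m`, and the toy data are illustrative assumptions):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns memberships U (n x c) and cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships of each sample sum to 1
    for _ in range(n_iter):
        W = U ** m                             # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))          # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

# Toy features standing in for gradient-face feature vectors (hypothetical data)
X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
              [5.0, 5.0], [5.2, 4.9], [4.8, 5.1]])
U, centers = fuzzy_c_means(X)
labels = U.argmax(axis=1)                      # crisp assignment from memberships
```

The soft memberships in `U` are what an FCM-adjusted matcher would compare; `argmax` is only used here to read off a crisp assignment.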

A Factor Analysis for the Success of Commercialization of the Facial Extraction and Recognition Image Information System (얼굴추출 및 인식 영상정보 시스템 상용화 성공요인 분석)

  • Kim, Shin-Pyo;Oh, Se-Dong
    • Journal of Industrial Convergence
    • /
    • v.13 no.2
    • /
    • pp.45-54
    • /
    • 2015
  • This study aims to analyze the factors behind the successful commercialization of the facial extraction and recognition image security information system by domestic companies in Korea. The analysis found the internal success factors to include (1) Possession of close-range facial recognition technology, (2) Possession of several facial recognition patents, (3) Preference for facial recognition security systems over fingerprint recognition, and (4) Strong commitment of the company's CEO. The external environmental success factors were found to include (1) Extensiveness of the market, (2) Rapid growth of the global facial recognition market, (3) Increased demand for image security systems, (4) Competition in securing a facial extraction and recognition engine, and (5) Selection by the government as one of the 100 major strategic products.

Facial Region Extraction in an Infrared Image (적외선 영상에서의 얼굴 영역 자동 추적)

  • Shin, S.W.;Kim, K.S.;Yoon, T.H.;Han, M.H.;Kim, I.Y.
    • Proceedings of the KIEE Conference
    • /
    • 2005.05a
    • /
    • pp.57-59
    • /
    • 2005
  • In our study, an automatic tracking algorithm for the human face is proposed that utilizes the thermal properties and second-moment geometric features of an infrared image. First, facial candidates are estimated by restricting thermal values to a certain range; a spurious-blob cleaning algorithm is then applied to track the refined facial region in the infrared image.
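The two stages described above, restricting thermal values to a skin-temperature range and then cleaning spurious blobs, can be sketched as follows. The temperature bounds, minimum blob size, and the toy thermal map are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def facial_candidates(thermal, t_low=30.0, t_high=38.0, min_size=4):
    """Threshold to a skin-temperature range, then drop small spurious blobs.
    Bounds and minimum blob size are illustrative, not the paper's values."""
    mask = (thermal >= t_low) & (thermal <= t_high)
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    cur = 0
    for i in range(h):                      # 4-connected component labeling
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                cur += 1
                stack = [(i, j)]
                labels[i, j] = cur
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = cur
                            stack.append((ny, nx))
    cleaned = np.zeros_like(mask)
    for lab in range(1, cur + 1):           # spurious-blob cleaning
        blob = labels == lab
        if blob.sum() >= min_size:
            cleaned |= blob
    return cleaned

# Toy 8x8 thermal map: a warm 3x3 face region plus one spurious warm pixel
thermal = np.full((8, 8), 20.0)
thermal[2:5, 2:5] = 35.0
thermal[7, 7] = 35.0
face = facial_candidates(thermal)
```

The single warm pixel survives the threshold but is removed by the blob-size test, while the face-sized region is kept.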

VALIDITY OF SUPERIMPOSITION RANGE AT 3-DIMENSIONAL FACIAL IMAGES (안면 입체영상 중첩시 중첩 기준 범위 설정에 따른 적합도 차이)

  • Choi, Hak-Hee;Cho, Jin-Hyoung;Park, Hong-Ju;Oh, Hee-Kyun;Choi, Jin-Hugh;Hwang, Hyeon-Shik;Lee, Ki-Heon
    • Maxillofacial Plastic and Reconstructive Surgery
    • /
    • v.31 no.2
    • /
    • pp.149-157
    • /
    • 2009
  • Purpose: This study evaluated the validity of the superimposition range for facial images constructed with a 3-dimensional (3D) surface laser scanning system. Materials and methods: Thirty adults with no severe skeletal discrepancy were selected and scanned twice by a 3D laser scanner (VIVID 910, Minolta, Tokyo, Japan) with 12 markers placed on the face. Two 3D facial images (T1-baseline, T2-30 minutes later) were then reconstructed and superimposed in several manners with the $RapidForm^{TM}2006$ (Inus, Seoul, Korea) software program. The distances between markers at the same location on the face were measured in the superimposed 3D facial images, for all 12 markers respectively. Results: The average linear distance between corresponding markers was $0.92{\pm}0.23\;mm$ for superimposition on the upper 2/3 of the face, $0.98{\pm}0.26\;mm$ for the upper 1/2 of the face, $0.99{\pm}0.24\;mm$ for the upper 1/3 of the face and nose area, $1.41{\pm}0.48\;mm$ for the upper 1/3 of the face, and $0.83{\pm}0.13\;mm$ for the whole face. There were no statistically significant differences in the linear distances of the markers placed on areas included in the superimposition range used for the partial registration methods, but there were significant differences in the linear distances of the markers placed on areas not included in the superimposition range between the whole registration method and the partial registration methods. Conclusion: The results of the present study suggest that the validity of superimposition decreases as the superimposition range is reduced when superimposing 3D images of the same subject constructed with a 3D laser scanner.
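The study's fit measure, the mean linear distance between corresponding markers after superimposition, can be sketched with a rigid (Kabsch/SVD) alignment. RapidForm's surface-based registration is not reproduced here, and the 12 random "markers" are purely illustrative:

```python
import numpy as np

def superimpose(P, Q):
    """Rigid (Kabsch) alignment of marker set P onto marker set Q (both n x 3)."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # proper rotation (no reflection)
    return Pc @ R.T + Q.mean(axis=0)

def mean_marker_distance(P, Q):
    """Average linear distance between corresponding markers after alignment."""
    return float(np.linalg.norm(superimpose(P, Q) - Q, axis=1).mean())

# Second "scan" simulated as a rotated and shifted copy of 12 facial markers
rng = np.random.default_rng(1)
Q = rng.random((12, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
P = Q @ Rz.T + np.array([1.0, -2.0, 0.5])
```

For identical marker geometry the residual is near zero; real repeated scans leave the sub-millimetre residuals reported in the study.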

Facial Feature Localization from 3D Face Image using Adjacent Depth Differences (인접 부위의 깊이 차를 이용한 3차원 얼굴 영상의 특징 추출)

  • 김익동;심재창
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.5
    • /
    • pp.617-624
    • /
    • 2004
  • This paper describes a new facial feature localization method that uses Adjacent Depth Differences (ADD) on the 3D facial surface. In general, humans recognize how deep or shallow a region is, relative to its surroundings, by comparing the neighboring depth information among regions of an object. The larger the depth difference between regions, the more easily each region can be recognized. Using this principle, facial feature extraction becomes easier, more reliable, and faster. 3D range images are used as input. The ADD values are obtained by differencing two range values separated by a fixed coordinate distance, in both the horizontal and vertical directions. The ADD values and the input image are analyzed to extract facial features, and the nose region, the most prominent feature of the 3D facial surface, is localized effectively and accurately.
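The ADD computation itself is simple to sketch: difference each range value against the value a fixed distance away, horizontally and vertically. The separation distance and the toy depth map below are illustrative:

```python
import numpy as np

def adjacent_depth_differences(range_img, d=1):
    """ADD: each range value differenced with the value d pixels away,
    horizontally and vertically (the separation d here is illustrative)."""
    z = np.asarray(range_img, dtype=float)
    add_h = np.abs(z[:, d:] - z[:, :-d])
    add_v = np.abs(z[d:, :] - z[:-d, :])
    return add_h, add_v

# Toy depth map: a flat face with one prominent ridge standing in for the nose
face = np.full((7, 7), 10.0)
face[:, 3] = 14.0
add_h, add_v = adjacent_depth_differences(face)
```

The horizontal ADD peaks at the ridge boundary while the vertical ADD stays flat, which is exactly the cue used to localize the nose.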

Comparison of Mandible Changes on Three-Dimensional Computed Tomography Images After Mandibular Surgery in Facial Asymmetry Patients (안면 비대칭 환자의 하악골 수술 후 하악골 변화에 대한 3차원 CT 영상 비교)

  • Kim, Mi-Ryoung;Chin, Byung-Rho
    • Journal of Yeungnam Medical Science
    • /
    • v.25 no.2
    • /
    • pp.108-116
    • /
    • 2008
  • Background: When surgeons plan mandibular orthognathic surgery for patients with skeletal class III facial asymmetry, they must consider the exact surgical method for correcting the asymmetry. Three-dimensional (3D) CT imaging is efficient in depicting specific structures in the craniofacial area. It reproduces actual measurements by minimizing errors from patient movement and allows for image magnification. Owing to the rapid development of digital image technology and the expansion of the treatment range, rapid progress has been made in the study of three-dimensional facial skeleton analysis. The purpose of this study was to compare 3D CT images of mandible changes after mandibular surgery in facial asymmetry patients. Materials & methods: This study included 7 patients who underwent 3D CT before and after correction of facial asymmetry in the oral and maxillofacial surgery department of Yeungnam University Hospital between August 2002 and November 2005. Patients included 2 males and 5 females, with ages ranging from 16 to 30 years (average 21.4 years). Frontal CT images were obtained before and after surgery, and changes in mandible angle and length were measured. Results: Comparison of the measurements obtained before and after mandibular surgery confirmed correction of the facial asymmetry on the postoperative images. The mean difference between the right and left mandibular angles was $7^{\circ}$ before mandibular surgery and $1.5^{\circ}$ afterward. The right-to-left mandibular length ratio subtracted from 1 was 0.114 before mandibular surgery and 0.036 afterward. The differences were analyzed using the nonparametric Wilcoxon signed ranks test (p<0.05). Conclusion: The system that has been developed produces an accurate three-dimensional representation of the skull, upon which individualized surgery of the skull and jaws is easily performed. The system also permits accurate measurement and monitoring of postsurgical changes to the face and jaws through reproducible and noninvasive means.
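The two outcome measures (the right-left difference of the mandibular angles, and 1 minus the mandibular length ratio) amount to simple arithmetic. The sketch below uses hypothetical pre-operative values chosen only to reproduce the reported means:

```python
def angle_asymmetry(right_deg, left_deg):
    """Absolute right-left difference of the mandibular angles, in degrees."""
    return abs(right_deg - left_deg)

def length_asymmetry(right_len, left_len):
    """1 minus the ratio of the shorter to the longer mandibular length."""
    lo, hi = sorted((right_len, left_len))
    return 1.0 - lo / hi

# Hypothetical pre-operative measurements chosen to match the reported means
pre_angle = angle_asymmetry(60.0, 53.0)      # 7 degrees, as reported pre-surgery
pre_length = length_asymmetry(52.0, 58.7)    # about 0.114, as reported pre-surgery
```

A perfectly symmetric mandible gives 0 for both measures, which is why the post-surgical values of 1.5 degrees and 0.036 indicate improved symmetry.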

The Extraction of Face Regions based on Optimal Facial Color and Motion Information in Image Sequences (동영상에서 최적의 얼굴색 정보와 움직임 정보에 기반한 얼굴 영역 추출)

  • Park, Hyung-Chul;Jun, Byung-Hwan
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.2
    • /
    • pp.193-200
    • /
    • 2000
  • The extraction of face regions is required for a head-gesture interface, a natural user interface. Recently, many researchers have been interested in using color information to detect face regions in image sequences. The two most widely used color models, the HSI and YIQ color models, were selected for this study; specifically, the H component of HSI and the I component of YIQ were used. Given the difference between these color components, this study compared the face-region detection performance of the two models. First, we searched for the optimum facial-color range for each color component by examining the detection accuracy of facial-color regions over varying threshold ranges. We then compared the accuracy of the face box for both color models using the optimal facial color and motion information. As a result, a range of $0^{\circ}{\sim}14^{\circ}$ in the H component and a range of $-22^{\circ}{\sim}-2^{\circ}$ in the I component proved to be the optimum ranges for extracting face regions. When the optimal facial-color range alone is used, the I component is better than the H component by about 10% in face-region extraction accuracy; when optimal facial color and motion information are used together, the I component is still better by about 3%.
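The I-component thresholding can be sketched from the standard RGB-to-YIQ conversion. The paper states its range in degrees under its own scaling, so the bounds below are an illustrative stand-in rather than the authors' calibration:

```python
import numpy as np

def yiq_i(rgb):
    """I component of YIQ computed from RGB in [0, 1] (NTSC weights)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.596 * r - 0.274 * g - 0.322 * b

def skin_mask(rgb, lo=-0.22, hi=-0.02):
    """Keep pixels whose I value falls inside an assumed facial-color range.
    The paper reports -22 to -2 degrees in its own scaling; these bounds
    are an illustrative stand-in, not the authors' calibration."""
    i = yiq_i(rgb)
    return (i >= lo) & (i <= hi)

# Two test pixels: one inside the assumed range, one (pure red) outside it
img = np.array([[[0.2, 0.5, 0.3],
                 [1.0, 0.0, 0.0]]])
mask = skin_mask(img)
```

In the full method this color mask would then be intersected with frame-difference motion information to form the face box.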

Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model (다양한 형식의 얼굴정보와 준원근 카메라 모델해석을 이용한 얼굴 특징점 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.9B no.5
    • /
    • pp.563-570
    • /
    • 2002
  • Robust extraction of 3D facial features and global motion information from a 2D image sequence is described for MPEG-4 SNHC face model encoding. The facial regions are detected from the image sequence using a multi-modal fusion technique that combines range, color, and motion information. 23 facial features among the MPEG-4 FDP (Face Definition Parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted facial features are used to recover the 3D shape and global motion of the object using the paraperspective camera model and the SVD (Singular Value Decomposition) factorization method. A 3D synthetic object was designed and tested to show the performance of the proposed algorithm. The recovered 3D motion information is transformed into the global motion parameters of the MPEG-4 FAP (Face Animation Parameters) to synchronize a generic face model with a real face.
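The SVD factorization step can be illustrated with the simpler orthographic (Tomasi-Kanade) variant: stack the tracked 2-D features into a measurement matrix, which the model predicts has rank 3, and split it by SVD into motion and shape. The paraperspective refinement and metric-upgrade step of the paper are omitted, and the frames and points are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.random((3, 23))                   # 23 3-D feature points (as in the FDP subset)
rows = []
for _ in range(6):                        # six frames, each with a random camera
    A = np.linalg.qr(rng.normal(size=(3, 3)))[0][:2]   # 2x3 orthographic projection
    rows.append(A @ S)
W = np.vstack(rows)                       # 12 x 23 measurement matrix (translation omitted)
U, s, Vt = np.linalg.svd(W, full_matrices=False)
M_hat = U[:, :3] * s[:3]                  # recovered motion, up to an affine ambiguity
S_hat = Vt[:3]                            # recovered shape, up to the same ambiguity
```

The fourth singular value vanishes (rank 3), and the product of the recovered motion and shape reproduces the measurements; a metric upgrade would then resolve the remaining affine ambiguity.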

Recognition of Facial Expressions Using Muscle-Based Feature Models (근육기반의 특징모델을 이용한 얼굴표정인식에 관한 연구)

  • 김동수;남기환;한준희;박호식;차영석;최현수;배철수;권오홍;나상동
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 1999.11a
    • /
    • pp.416-419
    • /
    • 1999
  • We present a technique for recognizing facial expressions from image sequences. The technique uses muscle-based feature models for tracking facial features. Since the feature models are constructed with a small number of parameters and are deformable within a limited range and set of directions, the search space for each feature can be limited. The technique estimates muscular contraction degrees for classifying the six principal facial expressions. The contractile vectors are obtained from the deformations of the facial muscle models. Similarities between those vectors and representative vectors of the principal expressions are defined and used for determining facial expressions.
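The final step, comparing an observed contractile vector to representative vectors of the principal expressions, can be sketched with cosine similarity. The muscle count, prototype vectors, and expression names below are hypothetical:

```python
import numpy as np

def classify_expression(contraction, prototypes):
    """Pick the expression whose representative contractile vector is most
    similar (cosine similarity) to the observed contractile vector."""
    v = np.asarray(contraction, dtype=float)
    best, best_sim = None, -2.0
    for name, proto in prototypes.items():
        p = np.asarray(proto, dtype=float)
        sim = float(v @ p / (np.linalg.norm(v) * np.linalg.norm(p)))
        if sim > best_sim:
            best, best_sim = name, sim
    return best, best_sim

# Hypothetical 4-muscle contractile prototypes for two of the six expressions
prototypes = {"happiness": [0.9, 0.1, 0.0, 0.2],
              "anger":     [0.0, 0.8, 0.7, 0.1]}
label, sim = classify_expression([0.8, 0.2, 0.1, 0.1], prototypes)
```

Any similarity measure over the contractile vectors would slot in here; cosine similarity is chosen because it compares the pattern of muscle activation rather than its overall magnitude.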

Skin Condition Analysis of Facial Image using Smart Device: Based on Acne, Pigmentation, Flush and Blemish

  • Park, Ki-Hong;Kim, Yoon-Ho
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.8 no.2
    • /
    • pp.47-58
    • /
    • 2018
  • In this paper, we propose a method for skin condition analysis using the camera module embedded in a smartphone, without a separate skin diagnosis device. The skin conditions detected in facial images taken with the smartphone are acne, pigmentation, blemishes, and flushing. Facial features and regions were detected using Haar features, and skin regions were detected using the YCbCr and HSV color models. Acne and flushing were extracted by setting a range on the hue component image, and pigmentation was detected by computing a factor between the minimum and maximum values of the corresponding skin pixels in the R component image. Blemishes were detected using adaptive thresholds on the gray-level image. Experimental results show that the proposed skin condition analysis effectively detects acne, pigmentation, blemishes, and flushing.
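The blemish step uses adaptive thresholds on the gray-level image. A minimal sketch, assuming a local-mean threshold with an illustrative window size and margin rather than the paper's settings:

```python
import numpy as np

def blemish_mask(gray, win=5, c=0.15):
    """Flag pixels darker than their local mean by margin c (adaptive threshold).
    Window size and margin are illustrative, not the paper's settings."""
    h, w = gray.shape
    pad = win // 2
    padded = np.pad(gray, pad, mode="edge")
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            local_mean = padded[i:i + win, j:j + win].mean()
            mask[i, j] = gray[i, j] < local_mean - c
    return mask

# Uniform skin patch with one dark spot standing in for a blemish
gray = np.full((9, 9), 0.8)
gray[4, 4] = 0.2
mask = blemish_mask(gray)
```

Because the threshold tracks the local mean, a global darkening of the image (uneven lighting across the face) does not produce false positives the way a fixed threshold would.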