• Title/Summary/Keyword: Facial Region


Face Extraction using Genetic Algorithm, Stochastic Variable and Geometrical Model (유전 알고리즘, 통계적 변수, 기하학적 모델에 의한 얼굴 영역 추출)

  • 이상진;홍준표;이종실;홍승홍
    • Proceedings of the IEEK Conference
    • /
    • 1998.10a
    • /
    • pp.891-894
    • /
    • 1998
  • This paper introduces an automatic face region extraction method. The method consists of two parts: face region detection and extraction of the facial organs, namely the eyes, eyebrows, nose and mouth. In the first stage, genetic algorithms (GAs) are used to locate the face region against a complex background. In the second stage, a geometrical face model is used to extract the eyes, eyebrows, nose and mouth. In both stages, a stochastic variable is used to handle problems caused by bad lighting conditions; the number of blurring operations is determined according to this value. The average computation time is less than 1 second, and the method extracts facial features efficiently from images taken under different lighting conditions. A minimal illustrative sketch of a GA-based face region search follows this entry.

  • PDF
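
As referenced in the abstract above, the following is a minimal sketch of how a genetic algorithm might search for a skin-colored face rectangle. The chromosome encoding, the RGB skin heuristic, and all numeric parameters are illustrative assumptions; the paper's actual fitness function and stochastic blurring step are not reproduced here.

```python
# Minimal GA sketch for locating a face bounding box (illustrative assumptions only).
import numpy as np

def skin_mask(img_rgb):
    """Very rough RGB skin heuristic (an assumption, not the paper's model)."""
    r, g, b = (img_rgb[..., c].astype(int) for c in range(3))
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)

def fitness(rect, mask):
    x, y, w, h = rect
    window = mask[y:y + h, x:x + w]
    return window.mean() if window.size else 0.0   # fraction of skin-like pixels

def ga_face_search(img_rgb, pop_size=40, generations=60, seed=0):
    rng = np.random.default_rng(seed)
    H, W = img_rgb.shape[:2]
    mask = skin_mask(img_rgb)
    # Chromosome = (x, y, w, h); initialize rectangles randomly inside the image.
    pop = np.column_stack([
        rng.integers(0, W // 2, pop_size), rng.integers(0, H // 2, pop_size),
        rng.integers(W // 8, W // 2, pop_size), rng.integers(H // 8, H // 2, pop_size)])
    for _ in range(generations):
        scores = np.array([fitness(ind, mask) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]          # truncation selection
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))].copy()
        children += rng.integers(-5, 6, children.shape)             # mutation: small jitter
        pop = np.vstack([parents, children])
        pop[:, 0] = np.clip(pop[:, 0], 0, W - 2)                    # keep rectangles inside image
        pop[:, 1] = np.clip(pop[:, 1], 0, H - 2)
        pop[:, 2] = np.clip(pop[:, 2], 4, W - pop[:, 0] - 1)
        pop[:, 3] = np.clip(pop[:, 3], 4, H - pop[:, 1] - 1)
    return pop[np.argmax([fitness(ind, mask) for ind in pop])]      # best (x, y, w, h)
```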

TREATMENT OF POLYOSTOTIC FIBROUS DYSPLASIA DEVELOPED IN LEFT CRANIOFACIAL BONES:A CASE REPORT (좌측 두개 안면부에 발생한 다골성 섬유성 골 이형성증의 치험례)

  • Kim, Il-Kyu;Lee, Seong-Jun;Ha, Soo-Yong;Chu, Young-Chae
    • Maxillofacial Plastic and Reconstructive Surgery
    • /
    • v.12 no.2
    • /
    • pp.95-101
    • /
    • 1990
  • This is a case report of polyostotic fibrous dysplasia that developed in the craniofacial region of a 21-year-old male patient, who had complained of buccolingual expansion of the left mandibular body, malocclusion and facial asymmetry. Satisfactory results were achieved by radical resection of the relatively well-defined small lesion of the mandible and by a cosmetic bone-shaving procedure on the widely dispersed and poorly defined lesions of the cranium. However, persistent growth and recurrence of the lesions may produce loss of hearing, visual difficulties, facial paralysis and anosmia. As this is a polyostotic type occurring in the craniofacial region of a male patient, the possibility of malignant degeneration cannot be excluded completely, and periodic recall and check-ups will be necessary.

  • PDF

Automatic Face Identification System Using Adaptive Face Region Detection and Facial Feature Vector Classification

  • Kim, Jung-Hoon;Do, Kyeong-Hoon;Lee, Eung-Joo
    • Proceedings of the IEEK Conference
    • /
    • 2002.07b
    • /
    • pp.1252-1255
    • /
    • 2002
  • In this paper, a face recognition algorithm is proposed that uses skin color information in the HSI color space collected from face images, an elliptical mask, facial features including the eyes, nose and mouth, and geometrical feature vectors of the face together with facial angles. The proposed algorithm improves face region extraction by using HSI information, which is relatively close to the human visual system, together with color tone information about facial skin, an elliptical mask and intensity information. Moreover, it improves recognition accuracy by using feature information from the eyes, nose and mouth as well as the geometrical facial angles Θ1 (ACRED), Θ2 (AMRED) and Θ3 (ANRED). Unlike existing algorithms that rely on brightness information alone, the proposed algorithm enables precise face reading by combining color tone information, the elliptical mask, brightness information and structural characteristic angles, and it uses structural characteristic values and feature vectors together in the recognition step. A minimal sketch of HSI skin-color candidate detection with an elliptical mask follows this entry.

  • PDF
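
A minimal sketch of HSI-based skin candidate detection followed by an elliptical-mask fill check, as referenced in the abstract above. The hue/saturation thresholds, the ellipse test, and the helper names are illustrative assumptions, not the authors' reported parameters.

```python
# HSI skin-tone thresholding plus elliptical mask check (illustrative sketch).
import numpy as np

def rgb_to_hsi(img_rgb):
    rgb = img_rgb.astype(np.float64) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = rgb.mean(axis=-1)
    s = 1.0 - rgb.min(axis=-1) / np.maximum(i, 1e-6)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-6
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b <= g, theta, 360.0 - theta)
    return h, s, i

def skin_candidates(img_rgb, h_max=50.0, s_min=0.15, s_max=0.75):
    """Binary mask of skin-like pixels in HSI space (thresholds are assumed)."""
    h, s, i = rgb_to_hsi(img_rgb)
    return (h < h_max) & (s > s_min) & (s < s_max) & (i > 0.15)

def elliptical_fill_ratio(mask, cx, cy, a, b):
    """Fraction of skin pixels inside an ellipse centered at (cx, cy) with semi-axes a, b."""
    yy, xx = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
    inside = ((xx - cx) / a) ** 2 + ((yy - cy) / b) ** 2 <= 1.0
    return mask[inside].mean() if inside.any() else 0.0
```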

Facial Region Tracking by Utilizing Infra-Red and CCD Color Image (CCD 컬러 영상과 적외선 영상을 이용한 얼굴 영역 검출)

  • Kim K. S.;Lee J. W.;Yoon T. H.;Han M. H.;Shin S. W.;Kim I. Y.;Song C. G.
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.54 no.9
    • /
    • pp.577-579
    • /
    • 2005
  • In this study, an automatic tracking algorithm for the human face is proposed using YCbCr color information and thermal properties expressed as thermal indices in an infrared image. Facial candidates are estimated separately in the CbCr color and infrared domains by applying morphological image processing operations and geometrical shape measures that fit the elliptical features of a human face. A true face is then identified by a logical AND operation between the refined candidate images from the CbCr color and infrared domains, as sketched below.
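
A minimal sketch of the CbCr/infrared combination described above. The CbCr skin box and the thermal-index band are assumed values, and the sketch presumes the color and infrared images are already spatially registered.

```python
# Combining a CbCr skin map with a thermal map by a logical AND (illustrative sketch).
import numpy as np

def cbcr_mask(img_ycbcr, cb_range=(77, 127), cr_range=(133, 173)):
    """Skin-like pixels inside an assumed CbCr box."""
    cb, cr = img_ycbcr[..., 1], img_ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

def thermal_mask(ir_image, t_min=30.0, t_max=38.0):
    """Pixels whose thermal index falls in an assumed skin-temperature band."""
    return (ir_image >= t_min) & (ir_image <= t_max)

def face_candidates(img_ycbcr, ir_image):
    # Logical AND of the two refined candidate maps, per the abstract
    # (images assumed to be registered to the same coordinate frame).
    return cbcr_mask(img_ycbcr) & thermal_mask(ir_image)
```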

Geometrical Feature-Based Detection of Pure Facial Regions (기하학적 특징에 기반한 순수 얼굴영역 검출기법)

  • 이대호;박영태
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.7_8
    • /
    • pp.773-779
    • /
    • 2003
  • Locating the exact position of facial components is a key preprocessing step for highly accurate and reliable face recognition schemes. In this paper, we propose a simple but powerful method for detecting isolated facial components such as the eyebrows, eyes, and mouth, which are horizontally oriented and have relatively dark gray levels. The method is based on shape-resolving locally optimum thresholding, which may guarantee isolated detection of each component. We show that pure facial regions can be determined by grouping facial features that satisfy simple geometric constraints derived from the unique facial structure. In a test on over 1000 images in the AR face database, pure facial regions were detected correctly for every face image without glasses; a few errors occurred in face images with thick-framed glasses because the eyebrow pairs were occluded. The proposed scheme may be best suited to a later classification stage using either mappings or template matching, because of its capability to handle rotational and translational variations. A minimal sketch of the geometric grouping step follows this entry.
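
A minimal sketch of the component-grouping idea described above: dark blobs are labeled and centroid pairs that are roughly horizontal are kept as eye/eyebrow candidates. The global threshold below stands in for the paper's shape-resolving locally optimum thresholding, and the constraint values are assumptions.

```python
# Grouping dark, horizontally oriented components by simple geometric constraints (sketch).
import numpy as np
from scipy import ndimage

def dark_components(gray, thresh=None):
    """Label dark blobs; a global threshold stands in for locally optimum thresholding."""
    if thresh is None:
        thresh = gray.mean() - gray.std()
    labels, n = ndimage.label(gray < thresh)
    return labels, n

def candidate_eye_pairs(labels, n, max_tilt=0.2, min_sep=10):
    """Return centroid pairs that are roughly horizontal and well separated."""
    centroids = ndimage.center_of_mass(np.ones_like(labels), labels, range(1, n + 1))
    pairs = []
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            (y1, x1), (y2, x2) = centroids[i], centroids[j]
            dx, dy = abs(x1 - x2), abs(y1 - y2)
            if dx > min_sep and dy / (dx + 1e-6) < max_tilt:   # nearly level, far apart
                pairs.append(((y1, x1), (y2, x2)))
    return pairs
```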

Facial Expression Recognition using Face Alignment and AdaBoost (얼굴정렬과 AdaBoost를 이용한 얼굴 표정 인식)

  • Jeong, Kyungjoong;Choi, Jaesik;Jang, Gil-Jin
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.11
    • /
    • pp.193-201
    • /
    • 2014
  • This paper presents a facial expression recognition system based on face detection, face alignment, facial unit extraction, and training and testing algorithms built on AdaBoost classifiers. First, the face region is found by a face detector. A face alignment algorithm then extracts feature points from the detected region, and the facial units are taken from a subset of action units generated by combining the obtained feature points. The facial units are generally more effective for smaller databases, represent facial expressions more efficiently, and reduce computation time, so the system can be applied to real-time scenarios. Experimental results in real scenarios showed that the proposed system performs well, with recognition rates above 90%. A minimal classifier-training sketch follows this entry.
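
A minimal sketch of the AdaBoost training and testing stage described above, using scikit-learn on synthetic facial-unit feature vectors. The feature dimensionality and the seven expression classes are assumptions for illustration; feature extraction (detection and alignment) is assumed to have already produced the numeric vectors.

```python
# AdaBoost classification over facial-unit feature vectors (illustrative sketch, synthetic data).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))      # 600 samples of 20 facial-unit features (synthetic)
y = rng.integers(0, 7, size=600)    # 7 basic expression classes (assumption)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = AdaBoostClassifier(n_estimators=200, random_state=0)  # boosted decision stumps
clf.fit(X_train, y_train)
print("recognition rate:", clf.score(X_test, y_test))
```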

Effective Dose Determination From CT Head & Neck Region (두경부(Head & Neck) CT 검사 시 장기의 유효선량 측정)

  • Yun, Jae-Hyeok;Lee, Kwang-Weon;Cho, Young-Ki;Choi, Ji-Won;Lee, Joon-Il
    • Journal of radiological science and technology
    • /
    • v.34 no.2
    • /
    • pp.105-116
    • /
    • 2011
  • In this study, we present measurements of the effective dose from CT of the head and neck region. A series of dose measurements in an anthropomorphic Rando phantom was conducted using radiophotoluminescent glass rod dosimeters to evaluate the effective doses to organs of the head and neck region. The experiments considered four anatomic regions: the optic nerve, pons, cerebellum, and thyroid gland. The head and neck CT protocol was used for single scans (Brain, 3D Facial, Temporal, Brain Angiography and 3D Cervical Spine) and multiple scans (Brain+Brain Angiography, Brain+3D Facial, Brain+Temporal, Brain+3D Cervical Spine, Brain+3D Facial+Temporal, Brain+3D Cervical Spine+Brain Angiography). In single scans, the largest effective dose was measured at the optic nerve in Brain CT and Brain Angiography, at the thyroid gland in 3D Facial CT and 3D Cervical Spine, and at the pons in Temporal CT. In multiple scans, the higher effective dose was measured at the thyroid gland in Brain+3D Facial, Brain+3D Cervical Spine, Brain+3D Facial+Temporal and Brain+3D Cervical Spine+Brain Angiography; the largest effective dose was delivered to the cerebellum in Brain+Brain Angiography, and a higher effective dose to the pons in Brain+Temporal. For the multiple scan Brain+3D Cervical Spine+Brain Angiography, the effective dose was 2.52 mSv, significantly higher than the annual effective dose limit of 1 mSv. The effective dose to the optic nerve was 0.31 mSv in Brain CT, which could surpass the 1 mSv limit with further examinations. Therefore, special efforts should be made in clinical practice to reduce the dose to patients.

Normalized Region Extraction of Facial Features by Using Hue-Based Attention Operator (색상기반 주목연산자를 이용한 정규화된 얼굴요소영역 추출)

  • 정의정;김종화;전준형;최흥문
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.6C
    • /
    • pp.815-823
    • /
    • 2004
  • A hue-based attention operator and a combinational integral projection function (CIPF) are proposed to extract normalized regions of the face and facial features robustly against illumination variation. Face candidate regions are efficiently detected using a skin color filter, and the eyes are located accurately and robustly against illumination variation by applying the proposed hue- and symmetry-based attention operator to the face candidate regions. The faces are then confirmed by verifying the eyes with a color-based eye variance filter. The proposed CIPF, which combines weighted hue and intensity, is applied to detect accurate vertical locations of the eyebrows and the mouth under illumination variations and in the presence of a mustache. The global face and its local feature regions are located and normalized based on this geometrical information. Experimental results on the AR face database[8] show that the proposed eye detection method yields a detection rate about 39.3% higher than the conventional gray-level GST-based method. As a result, normalized facial features can be extracted robustly and consistently from the exact eye locations under illumination variations. A minimal projection-based sketch follows this entry.
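
A minimal sketch of an integral projection over a combined hue/intensity map, in the spirit of the CIPF described above. The weighting scheme and the peak selection are illustrative assumptions rather than the function's exact definition.

```python
# Row-wise integral projection of a weighted hue/intensity map (illustrative sketch).
import numpy as np

def combined_projection(hue, intensity, alpha=0.5):
    """Row-wise (vertical) projection of a weighted hue/intensity map.

    hue is assumed in degrees [0, 360), intensity normalized to [0, 1];
    darker, more hue-saturated rows (eyebrows, mouth) score higher.
    """
    combined = alpha * hue / 360.0 + (1.0 - alpha) * (1.0 - intensity)
    return combined.sum(axis=1)          # one value per image row

def feature_rows(projection, num_peaks=3):
    """Rows with the strongest response, e.g. candidate eyebrow/eye/mouth rows."""
    return np.argsort(projection)[-num_peaks:]
```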

Quantitative evaluation of alveolar cortical bone density in adults with different vertical facial types using cone-beam computed tomography

  • Ozdemir, Fulya;Tozlu, Murat;Cakan, Derya Germec
    • The korean journal of orthodontics
    • /
    • v.44 no.1
    • /
    • pp.36-43
    • /
    • 2014
  • Objective: The purpose of this study was to quantitatively evaluate the cortical bone densities of the maxillary and mandibular alveolar processes in adults with different vertical facial types using cone-beam computed tomography (CBCT) images. Methods: CBCT images (n = 142) of adult patients (20-45 years) were classified into hypodivergent, normodivergent, and hyperdivergent groups on the basis of linear and angular S-N/Go-Me measurements. The cortical bone densities (in Hounsfield units) at maxillary and mandibular interdental sites from the distal aspect of the canine to the mesial aspect of the second molar were measured on the images. Results: On the maxillary buccal side, female subjects in the hyperdivergent group showed significantly decreased bone density, while in the posterior region, male subjects in the hyperdivergent group displayed significantly decreased bone density when compared with corresponding subjects in the other groups (p<0.001). Furthermore, the subjects in the hyperdivergent group had significantly lower bone densities on the mandibular buccal side than hypodivergent subjects. The maxillary palatal bone density did not differ significantly among groups, but female subjects showed significantly denser palatal cortical bone. No significant difference in bone density was found between the palatal and buccal sides in the maxillary premolar region. Overall, the palatal cortical bone was denser anteriorly and buccal cortical bone was denser posteriorly. Conclusion: Adults with the hyperdivergent facial type tend to have less-dense buccal cortical bone in the maxillary and mandibular alveolar processes. Clinicians should be aware of the variability of cortical bone densities at mini-implant placement sites.

Sex-, growth pattern-, and growth status-related variability in maxillary and mandibular buccal cortical thickness and density

  • Schneider, Sydney;Gandhi, Vaibhav;Upadhyay, Madhur;Allareddy, Veerasathpurush;Tadinada, Aditya;Yadav, Sumit
    • The korean journal of orthodontics
    • /
    • v.50 no.2
    • /
    • pp.108-119
    • /
    • 2020
  • Objective: The primary objective of this study was to quantitatively analyze the bone parameters (thickness and density) at four different interdental areas from the distal region of the canine to the mesial region of the second molar in the maxilla and the mandible. The secondary aim was to compare and contrast the bone parameters at these specific locations in terms of sex, growth status, and facial type. Methods: This retrospective cone-beam computed tomography (CBCT) study reviewed 290 CBCT images of patients seeking orthodontic treatment. Cortical bone thickness in millimeters (mm) and density in pixel intensity value were measured for the regions (1) between the canine and first premolar, (2) between the first and second premolars, (3) between the second premolar and first molar, and (4) between the first and second molars. At each location, the bone thickness and density were measured at distances of 2, 6, and 10 mm from the alveolar crest. Results: The sex comparison (male vs. female) in cortical bone thickness showed no significant difference (p > 0.001). The bone density in growing subjects was significantly (p < 0.001) lower than that in non-growing subjects for most locations. There was no significant difference (p > 0.001) in bone parameters in relation to facial pattern in the maxilla and mandible for most sites. Conclusions: There was no significant sex-related difference in cortical bone thickness. The buccal cortical bone density was higher in females than in males. Bone parameters were similar for subjects with hyperdivergent, hypodivergent, and normodivergent facial patterns.