• Title/Summary/Keyword: facial size

Search Result 322, Processing Time 0.035 seconds

A Study on Vector-based Automatic Caricature Generation (벡터기반의 캐리커처 자동생성에 관한 연구)

  • Park, Yeon-Chool;Oh, Hae-Seok
    • The KIPS Transactions:PartB
    • /
    • v.10B no.6
    • /
    • pp.647-656
    • /
    • 2003
  • This paper proposes a system that automatically generates a caricature (a character's face) resembling a human face from extracted facial features. Since the system is vector-based, the generated character's face has no size limit or constraint, so its shape can be transformed freely and various facial expressions can be applied to the 2D face. Moreover, owing to the advantages of the vector format, its small file size makes it usable in mobile environments.

Full face recognition using features extracted by shape analysis and the back-propagation algorithm (형태분석에 의한 특징 추출과 BP알고리즘을 이용한 정면 얼굴 인식)

  • 최동선;이주신
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.10
    • /
    • pp.63-71
    • /
    • 1996
  • This paper proposes a method that analyzes facial shape and extracts the positions of the eyes regardless of the tilt and size of the input image. With the feature parameters of the facial elements extracted by this method, full human faces are recognized by a neural network trained with the back-propagation (BP) algorithm. The input image is binarized and then labelled. The area, circumference, and circular degree of each labelled binary region are obtained using chain codes and defined as the feature parameters of the face image. The two eyes are first extracted from the similarity and distance of the feature parameters of each facial element, and the input face image is then corrected by standardizing it on the two extracted eyes. After a mask is generated, a line histogram is applied to find the feature points of the facial elements. The distances and angles between the feature points are used as parameters to recognize the full face. To show the validity of the learning algorithm, we confirmed that the proposed algorithm achieves a 100% recognition rate on both learned and non-learned data for 20 persons.
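
As a rough illustration of the contour-based features named in this abstract (area, circumference, and circular degree of labelled binary regions), the following Python sketch computes them with OpenCV. It is not the authors' code; the Otsu thresholding step and the circularity definition (4πA/P²) are assumptions made for illustration.

```python
# Illustrative sketch (not the authors' code): the shape features named in the
# abstract -- area, circumference, and circular degree (circularity) of each
# labelled binary region -- computed from contours with OpenCV.
import cv2
import numpy as np

def shape_features(gray_face):
    """Binarize a grayscale face image and return per-region shape features."""
    _, binary = cv2.threshold(gray_face, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)  # chain-coded boundaries
    features = []
    for contour in contours:
        area = cv2.contourArea(contour)
        perimeter = cv2.arcLength(contour, True)
        if perimeter == 0:
            continue
        circularity = 4.0 * np.pi * area / perimeter ** 2  # 1.0 for a perfect circle
        features.append({"area": area,
                         "circumference": perimeter,
                         "circular_degree": circularity})
    return features
```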


Analogical Face Generation based on Feature Points

  • Yoon, Andy Kyung-yong;Park, Ki-cheul;Oh, Duck-kyo;Cho, Hye-young;Jang, Jung-hyuk
    • Journal of Multimedia Information System
    • /
    • v.6 no.1
    • /
    • pp.15-22
    • /
    • 2019
  • There are many ways to perform face recognition. The first step of face recognition is face detection; if no face is found in this step, recognition fails. Face detection research faces many difficulties because appearance varies with face size, left-right and up-down rotation, side versus frontal views, facial expression, and lighting conditions. In this study, facial features are extracted and the extracted features are geometrically reconstructed in order to improve the face recognition rate within the extracted face region. The study also aims to adjust the face angle using the reconstructed facial feature vector and to improve the recognition rate for each face angle. In recognition attempts using the geometrically reconstructed result, recognition performance improved for both the up-down and the left-right facial angles.
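
The abstract does not give the reconstruction procedure itself, so the sketch below only illustrates the general idea of geometrically normalizing detected feature points: a least-squares similarity transform onto an assumed frontal template. The function names and the use of OpenCV's estimateAffinePartial2D are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch, not the paper's reconstruction: geometrically normalize
# detected facial feature points by a least-squares similarity transform onto
# an assumed frontal template. `landmarks` and `template` are Nx2 point arrays.
import cv2
import numpy as np

def normalize_landmarks(landmarks, template):
    """Map detected landmark points onto a frontal template (both Nx2 arrays)."""
    src = np.asarray(landmarks, dtype=np.float32)
    dst = np.asarray(template, dtype=np.float32)
    # Rotation + uniform scale + translation, estimated by least squares.
    matrix, _ = cv2.estimateAffinePartial2D(src, dst)
    ones = np.ones((len(src), 1), dtype=np.float32)
    return np.hstack([src, ones]) @ matrix.T  # pose-normalized feature points
```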

The correlation among the oral & facial states and the gummy smile in female college students (일부 여대생의 구강 및 안모상태와 치은노출(Gummy smile)과의 상관성)

  • So, Mi-Hyun
    • Journal of Korean society of Dental Hygiene
    • /
    • v.12 no.2
    • /
    • pp.345-353
    • /
    • 2012
  • Objectives: The author studied the correlation between gingival exposure upon smiling and the oral and facial status that reduces facial aesthetics. Methods: The subjects were 91 female volunteers in Suwon, aged 21.4 ± 1.89 years. Subjects had to have normal oral and facial status, with no prosthodontic or orthodontic appliance and no congenitally missing teeth, and had to agree to an oral examination and impression taking. 1. The length of gingival exposure upon smiling was measured. 2. The size of the central incisor was measured. 3. The face was measured. SPSS (SPSS 10.0 for Windows, SPSS Inc., Chicago, USA) was used to calculate correlation coefficients between gingival exposure upon smiling and facial status, and regression analysis was performed to estimate the R square for gingival exposure upon smiling. Results: 1. The correlation between gingival exposure and the length of the maxillary central incisor was inverse (r = -.302, p < 0.01), as was the correlation between gingival exposure and the ratio of central incisor length to central incisor width (r = -.250, p < 0.05) upon smiling. 2. Gingival exposure upon smiling was correlated with facial height (r = .351, p < 0.01), lower facial height (r = .454, p < 0.01), and upper lip height (r = .274, p < 0.01). 3. Gingival exposure upon smiling was correlated with the ratio of facial height to facial width (r = .358, p < 0.05), the ratio of upper facial height to facial width (r = .214, p < 0.05), and the ratio of lower facial height to facial height (r = .383, p < 0.01). 4. The regression equation for gingival exposure upon smiling was estimated as: gingival exposure upon smiling = -5.139 + 0.279 × lower facial height - 0.615 × maxillary central incisal length - 0.05 × nasolabial angle. Conclusions: Considering these results, it is recommended that treatment planning be designed in consideration of factors such as the length of the maxillary central incisor, facial height, upper lip height, and lower facial height, in order to address the aesthetic problems of the smiling face.
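
The regression equation reported in Result 4 can be evaluated directly; the sketch below is a plain transcription of that equation into Python. The units of the predictors (millimetres for the lengths, degrees for the nasolabial angle) are assumptions, since they are not stated in the abstract.

```python
# Plain transcription of the regression equation reported in Result 4.
# Units (mm for the lengths, degrees for the nasolabial angle) are assumptions.
def predicted_gingival_exposure(lower_facial_height,
                                maxillary_central_incisal_length,
                                nasolabial_angle):
    """Predicted gingival exposure upon smiling, from the reported regression."""
    return (-5.139
            + 0.279 * lower_facial_height
            - 0.615 * maxillary_central_incisal_length
            - 0.05 * nasolabial_angle)
```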

Orbital Wall Reconstruction by Copying a Template (defect model) from the Facial CT in Blow-out Fracture (얼굴뼈 CT 계측 모형을 이용한 안와벽골절의 재건)

  • Kim, Jae Keun;You, Sun Hye;Hwang, Kun;Hwang, Jin Hee
    • Archives of Craniofacial Surgery
    • /
    • v.10 no.2
    • /
    • pp.71-75
    • /
    • 2009
  • Purpose: Orbital wall fractures have recently become common facial injuries. Facial CT is essential for accurate diagnosis and for appropriate treatment to reconstruct the orbital wall. The objective of this study was to report a method for accurately measuring the area and shape of the bony defect in blow-out fractures using facial CT prior to surgery. Methods: The authors treated 46 cases of orbital wall fracture between August 2007 and May 2008 and examined the patients for diplopia, sensory disturbance in the distribution of the infraorbital nerve, and enophthalmos preoperatively and at follow-up 1 month after surgery. The bony defect was predicted by measuring the continuous defect size on facial CT taken at 3 mm intervals. Copying from the defect model (template), we reconstructed the orbital wall with a resorbable sheet (Inion CPS®, Inion Oy, Tampere, Finland). Results: One month after surgery using this method, 26 (100%) of the 26 patients improved in diplopia and sensory disturbance in the distribution of the infraorbital nerve. In addition, 8 (72.7%) of the 11 patients with enophthalmos took a favorable turn. Conclusion: This accurate and time-saving method is practicable for determining the location, shape, and size of the bony defect. Using this method, the orbital wall fracture can be reconstructed quickly and precisely.
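
The slice-based measurement described here lends itself to a simple rectangular approximation: summing per-slice defect widths multiplied by the 3 mm slice interval. The sketch below illustrates that arithmetic only; the width values are hypothetical, and the authors' actual template-copying workflow is not reproduced.

```python
# Illustrative arithmetic only: approximate the bony-defect area by summing
# per-slice defect widths times the 3 mm CT slice interval. The width values
# in the example are hypothetical, not measurements from the study.
SLICE_INTERVAL_MM = 3.0

def estimate_defect_area(defect_widths_mm, slice_interval_mm=SLICE_INTERVAL_MM):
    """Rectangular approximation of defect area (mm^2) from per-slice widths."""
    return sum(width * slice_interval_mm for width in defect_widths_mm)

# Hypothetical widths traced on five consecutive 3 mm slices:
print(estimate_defect_area([8.0, 11.5, 13.0, 10.5, 6.0]), "mm^2")
```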

Fully Automatic Facial Recognition Algorithm By Using Gabor Feature Based Face Graph (가버 피쳐기반 얼굴 그래프를 이용한 완전 자동 안면 인식 알고리즘)

  • Kim, Jin-Ho
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.2
    • /
    • pp.31-39
    • /
    • 2011
  • Facial recognition algorithms using Gabor wavelet based face graphs perform very well, but they have weaknesses such as a large amount of computation and results that vary with the initial location. We propose a fully automatic facial recognition algorithm using Gabor feature based, geometrically deformable face graph matching. For speed-up, the initial location and size of the face graph are selected from Adaboost detection results. Geometric transformation parameters are defined to find the face graph that best matches the face model graph by updating the size and location of the graph, and the optimal parameters are derived using an optimization technique. Simulation results show that the proposed algorithm achieves a recognition rate of 96.7% and a recognition speed of 0.26 s on the FERET database.
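
To make the "Gabor feature based face graph" idea concrete, the sketch below builds a small Gabor filter bank and samples magnitude responses (jets) at assumed face-graph node positions. The bank parameters, node coordinates, and magnitude-only jets are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch, not the paper's implementation: a small Gabor filter
# bank and magnitude "jets" sampled at assumed face-graph node positions.
import cv2
import numpy as np

def gabor_bank(n_orientations=8, n_scales=5):
    """Return a list of Gabor kernels over several orientations and scales."""
    kernels = []
    for scale in range(n_scales):
        for orientation in range(n_orientations):
            theta = np.pi * orientation / n_orientations
            lambd = 4.0 * 2 ** (scale / 2.0)  # wavelength grows with scale
            # Arguments: ksize, sigma, theta, lambda, gamma, psi
            kernels.append(cv2.getGaborKernel((21, 21), lambd / 2.0,
                                              theta, lambd, 0.5, 0.0))
    return kernels

def face_graph_jets(gray_face, nodes):
    """One Gabor-magnitude feature vector per (x, y) integer node position."""
    image = gray_face.astype(np.float32)
    responses = [np.abs(cv2.filter2D(image, -1, kernel)) for kernel in gabor_bank()]
    return np.array([[response[y, x] for response in responses]
                     for x, y in nodes])
```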

Treatment of Lymphangioma combined with Facial Bone Deformity (안면골 변형을 동반한 림프관종의 치험례)

  • Cha Sang-Myun;Choi Hee-Youn
    • Korean Journal of Head & Neck Oncology
    • /
    • v.7 no.1
    • /
    • pp.24-34
    • /
    • 1991
  • Lymphangioma is a benign growth of lymphatic tissue that is present at birth or develops in early childhood and may cause serious alterations in growth and development. The problems with facial lymphangioma are usually related directly to its size and to the area of the face involved. The lesions themselves may range from small, localized blemishes to huge facial masses involving both soft tissue and the underlying bone, causing great distortion and asymmetry. The facial bones are seldom involved, but the natural evolution of an individual lesion often cannot be accurately predicted when the child is first seen. Any change in the underlying facial bone could be due either to direct growth of the lesion into the bone or to pressure from the lesion growing outside the bone itself. A case of cystic lymphangioma extending from the neck to the tongue is reported. A six-year-old female was admitted because of swelling of the tongue. At that time, the tongue reportedly reached an extraoral size of 7 × 5 × 2.5 cm, and a soft, diffuse swelling of the left anterior neck was noted. Removal of the cystic mass, including left neck dissection and partial glossectomy, was undertaken. Another case of lymphangioma was located in the mandibular cheek area. A twenty-nine-year-old male was admitted because of a palpable mass of the left mandibular area and a fissure of the palate. Radical excision of the mass with mandibulectomy of the body was undertaken. We report these rare cases and review lymphangioma.


Clustering of Facial Color Types and Their Favorable Colors on Korean Adult Males (한국 남성의 얼굴 피부색 분류와 유형에 어울리는 색채 연구)

  • Kim, Ku-Ja
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.30 no.2 s.150
    • /
    • pp.316-325
    • /
    • 2006
  • The colors of apparel are becoming more important for giving differentiated character to fibers and fabrics. This study was conducted to extract the favorable colors that suit each facial color type. Research was carried out to classify facial colors into several similar facial color groups. With a JX-777, two points of the face, the forehead and the cheek, were measured and classified into 3 facial color types. The sample consisted of 418 Korean adult males plus 15 additional new male subjects. Three newly chosen subjects, who had the classified facial color types, wore a silver gown and a black hat to minimize the interaction of clothing color and hair. The 40 standardized color samples were used to extract the favorable colors, and 187 respondents rated the degree of becomingness of the color samples on the 3 facial color types. Data were analyzed by K-means cluster analysis, ANOVA, and Duncan multiple range tests using SPSS Win 12. Findings were as follows: 1. The 418 subjects, who had YR colors, were classified into 3 facial color groups: Type 1 was 4.59YR 5.89/5.12, Type 2 was 5.61YR 5.41/4.79, and Type 3 was 4.38YR 6.49/4.89. 2. Favorable colors for Type 1 were 2 colors belonging to group 'a' among the colors divided into groups a, b, and c, and 18 colors belonging to group 'a' among the colors divided into groups a and b, by the Duncan post hoc test. 3. Type 2 had many unfavorable colors: 16 colors belonging to group 'c' by the Duncan test. 4. Favorable colors for Type 3 were 14 colors belonging to group 'a' among the colors divided into groups a, b, and c, and 16 colors belonging to group 'a' among the colors divided into groups a and b, by the Duncan test.
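
The clustering step (K-means with three facial color types) can be illustrated with a short sketch. The colorimetric readings below are synthetic placeholders, not the study's measurements, and the feature layout (forehead and cheek value/chroma) is an assumption.

```python
# Illustrative sketch of the clustering step: K-means with k = 3 on per-subject
# skin-color readings. The data below are synthetic placeholders; the feature
# layout (forehead and cheek value/chroma) is an assumption, not the study's data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical readings per subject: forehead value, forehead chroma,
# cheek value, cheek chroma.
measurements = rng.normal(loc=[5.9, 5.1, 5.4, 4.8], scale=0.3, size=(418, 4))

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(measurements)
for label in range(3):
    size = int(np.sum(kmeans.labels_ == label))
    centroid = np.round(kmeans.cluster_centers_[label], 2)
    print(f"Type {label + 1}: n={size}, centroid={centroid}")
```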

Basic Study on the Image Instrument of the Facial-form by the 3D-facial Scanner (얼굴스캐너를 활용한 안면형상 영상진단기의 기초 연구)

  • Kim, Gyeong-Cheol;Lee, Jeong-Won;Kim, Hoon;Shin, Soon-Shik;Lee, Hai-Woong;Lee, Yong-Tae;Chi, Gyoo-Yong;Kim, Jong-Won
    • Journal of Physiology & Pathology in Korean Medicine
    • /
    • v.22 no.2
    • /
    • pp.497-501
    • /
    • 2008
  • The 3D facial scanner used for accurate analysis precisely measures straight-line distances, distances along curved lines, angles, and surface areas from 3D data. 3D data can be acquired easily, with a scan time of 0.8 s per scan, simple handling, straightforward merging into a whole-face model, and a harmless, fast process. In HyungSang medicine, inspection of the facial shape includes the Dam (gall bladder) - Bang Kwang (urinary bladder) body, the Jung·Gi·Shin·Hyul, the six meridian types, and so on. We will collect evidence-based data verified in HyungSang clinical medicine. By analyzing the whole facial form and the size, length, and angle of each facial part, we aim to put the standardization of the facial form on a solid foundation.
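
The basic 3D measurements mentioned here (straight-line distance and angle between scanned landmarks) reduce to simple vector arithmetic; the sketch below shows that computation on hypothetical landmark coordinates, not on data from the scanner described in the paper.

```python
# Illustrative vector arithmetic on hypothetical 3D landmarks (in mm), showing
# two of the measurements mentioned above: straight-line distance and angle.
import numpy as np

def distance(p, q):
    """Straight-line (Euclidean) distance between two 3D points."""
    return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

def angle_at(vertex, a, b):
    """Angle in degrees at `vertex`, formed by rays toward points a and b."""
    u = np.asarray(a) - np.asarray(vertex)
    v = np.asarray(b) - np.asarray(vertex)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical landmarks: nasion and the left/right zygomatic points.
nasion, zy_left, zy_right = [0.0, 0.0, 0.0], [-65.0, -20.0, -30.0], [66.0, -21.0, -29.0]
print(distance(zy_left, zy_right), angle_at(nasion, zy_left, zy_right))
```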

Gaze Detection by Computing Facial Rotation and Translation (얼굴의 회전 및 이동 분석에 의한 응시 위치 파악)

  • Lee, Jeong-Jun;Park, Kang-Ryoung;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.5
    • /
    • pp.535-543
    • /
    • 2002
  • In this paper, we propose a new gaze detection method using 2-D facial images captured by a camera on top of the monitor. We consider only facial rotation and translation, not eye movements. The proposed method computes the gaze point caused by facial rotation and the amount of facial translation separately; by combining the two, the final gaze point on the monitor screen is obtained. The gaze point caused by facial rotation is detected by a neural network (a multi-layer perceptron) whose inputs are the 2-D geometric changes of the facial feature points, and the amount of facial translation is estimated by image processing algorithms in real time. Experimental results show that the RMS error between the computed gaze positions and the real ones is about 2.11 inches when the distance between the user and a 19-inch monitor is about 50-70 cm. The processing time is about 0.7 seconds on a Pentium PC (233 MHz) with 320×240-pixel images.
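
In the spirit of the multi-layer perceptron described above, which maps 2-D geometric changes of facial feature points to a gaze point, the sketch below trains a small MLP regressor on synthetic placeholder data. The feature layout (four feature-point displacements) and the synthetic mapping are assumptions; this is not the authors' network or calibration data.

```python
# Illustrative sketch, not the authors' network: a small multi-layer perceptron
# mapping 2-D geometric changes of facial feature points to a gaze point on the
# screen. Training data are synthetic placeholders, not the study's calibration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical inputs: (dx, dy) displacements of four feature points relative
# to a frontal reference pose; targets: gaze (x, y) on the monitor.
X = rng.normal(size=(500, 8))
true_map = rng.normal(size=(8, 2))
y = X @ true_map + rng.normal(scale=0.05, size=(500, 2))

mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(X, y)
print("Predicted gaze point for the first sample:", mlp.predict(X[:1])[0])
```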