• Title/Summary/Keyword: Facial analysis


Clinical Analysis of Pediatric Facial Bone Fracture; 10-years Experiences in 201 Cases (소아 안면골 골절의 임상 분석; 10년 동안 201례의 경험)

  • Oh, Min;Kim, Young Soo;Youn, Hyo Hun;Choe, Joon
    • Archives of Plastic Surgery
    • /
    • v.32 no.1
    • /
    • pp.55-59
    • /
    • 2005
  • Proper management of pediatric facial bone fractures is critical for facial bone development. This study characterizes a surgically treated patient population suffering from facial bone fractures, using data from a large series of 201 cases. The data were gathered through a retrospective chart review of patients surgically treated for facial bone fractures at the Department of Plastic and Reconstructive Surgery, Sanggye Paik Hospital, Inje University Medical Center, over a 10-year period from January 1993 to December 2002. Data regarding patient demographics (age, sex), seasonal distribution, location of fractures, and causes of injury, along with admission periods, were collected. In total, there were 201 cases of pediatric facial bone fractures. Male patients outnumbered female patients by a 5.48:1 ratio and were found to engage in a wider range of behaviors that resulted in facial bone fractures. Physical violence was the leading cause of pediatric facial bone fractures (27.9%), followed by sports-related mechanisms (22.9%) and falls (17.9%). The most prevalent age group was 11-15 years old (71.1%), and prevalence peaked at 14.3% in March. Among fracture locations, the nasal bone was the most prevalent, accounting for 82.3% of injuries, followed by the orbit (9.95%) and the mandible (7.5%). Most patients (59.7%) were treated within 6-9 days after trauma, and the mean hospitalization period was 8-11 days. The surgically treated patients should be followed up and further evaluated for postoperative sequelae and effects on facial bone development. These findings demonstrate differences in demographics and clinical presentation that, if applied to patients, will enable more accurate diagnosis and proper management.

Development of a Deep Learning-Based Automated Analysis System for Facial Vitiligo Treatment Evaluation (안면 백반증 치료 평가를 위한 딥러닝 기반 자동화 분석 시스템 개발)

  • Sena Lee;Yeon-Woo Heo;Solam Lee;Sung Bin Park
    • Journal of Biomedical Engineering Research
    • /
    • v.45 no.2
    • /
    • pp.95-100
    • /
    • 2024
  • Vitiligo is a condition characterized by the destruction or dysfunction of melanin-producing cells in the skin, resulting in a loss of skin pigmentation. Facial vitiligo, specifically affecting the face, significantly impacts patients' appearance, thereby diminishing their quality of life. Evaluating the efficacy of facial vitiligo treatment typically relies on subjective assessments, such as the Facial Vitiligo Area Scoring Index (F-VASI), which can be time-consuming and subjective due to its reliance on clinical observations like lesion shape and distribution. Various machine learning and deep learning methods have been proposed for segmenting vitiligo areas in facial images, showing promising results. However, these methods often struggle to accurately segment vitiligo lesions irregularly distributed across the face. Therefore, our study introduces a framework aimed at improving the segmentation of vitiligo lesions on the face and providing an evaluation of vitiligo lesions. Our framework for facial vitiligo segmentation and lesion evaluation consists of three main steps. Firstly, we perform face detection to minimize background areas and identify the face area of interest using high-quality ultraviolet photographs. Secondly, we extract facial area masks and vitiligo lesion masks using a semantic segmentation network-based approach with the generated dataset. Thirdly, we automatically calculate the vitiligo area relative to the facial area. We evaluated the performance of facial and vitiligo lesion segmentation using an independent test dataset that was not included in the training and validation, showing excellent results. The framework proposed in this study can serve as a useful tool for evaluating the diagnosis and treatment efficacy of vitiligo.
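The third step of the framework above, computing the vitiligo area relative to the facial area from the two segmentation masks, can be sketched as follows. This is a minimal illustration assuming boolean pixel masks as the network's output; the paper does not publish its exact interface.

```python
def vitiligo_area_ratio(face_mask, lesion_mask):
    """Fraction of the detected facial area covered by vitiligo lesions.

    Both masks are 2D lists of booleans of the same shape, standing in
    for the facial-area mask and lesion mask produced by a semantic
    segmentation network (hypothetical interface for illustration).
    """
    face_pixels = 0
    lesion_pixels = 0
    for face_row, lesion_row in zip(face_mask, lesion_mask):
        for in_face, in_lesion in zip(face_row, lesion_row):
            if in_face:
                face_pixels += 1
                if in_lesion:  # count lesion pixels only inside the face
                    lesion_pixels += 1
    if face_pixels == 0:
        return 0.0
    return lesion_pixels / face_pixels
```

For example, a face mask of 4 pixels with 1 lesion pixel inside it yields a ratio of 0.25.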

Face classification and analysis based on geometrical feature of face (얼굴의 기하학적 특징정보 기반의 얼굴 특징자 분류 및 해석 시스템)

  • Jeong, Kwang-Min;Kim, Jung-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.7
    • /
    • pp.1495-1504
    • /
    • 2012
  • This paper proposes an algorithm to classify and analyze facial features such as the eyebrows, eyes, mouth, and chin based on the geometric features of the face. As a preprocessing step for classification and analysis, the algorithm extracts the facial features (eyebrows, eyes, nose, mouth, and chin). From the extracted features, it detects shape and form information and the ratios of distances between features, and formulates these into evaluation functions that classify 12 eyebrow types, 3 eye types, 9 mouth types, and 4 chin types. Using these facial features, it analyzes a face. The face analysis algorithm draws on the pixel distribution and gradient of each feature; in other words, it analyzes a face by comparing such information across the features.
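The kind of distance-ratio evaluation function the abstract describes can be sketched as below. The thresholds and the specific ratio are illustrative assumptions; the paper's own evaluation functions for its 12/3/9/4 type taxonomy are not reproduced here.

```python
import math

def distance(p, q):
    # Euclidean distance between two 2D feature points.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def mouth_width_height_ratio(left_corner, right_corner, top_lip, bottom_lip):
    # One geometric feature of the kind fed into an evaluation function:
    # mouth width divided by mouth height.
    return distance(left_corner, right_corner) / distance(top_lip, bottom_lip)

def classify_mouth(ratio):
    # Hypothetical thresholds for illustration only.
    if ratio > 3.5:
        return "wide-thin"
    if ratio > 2.0:
        return "average"
    return "full"
```

A mouth 4 units wide and 1 unit high gives a ratio of 4.0 and falls into the "wide-thin" bucket under these illustrative thresholds.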

Comparison Analysis of Four Face Swapping Models for Interactive Media Platform COX (인터랙티브 미디어 플랫폼 콕스에 제공될 4가지 얼굴 변형 기술의 비교분석)

  • Jeon, Ho-Beom;Ko, Hyun-kwan;Lee, Seon-Gyeong;Song, Bok-Deuk;Kim, Chae-Kyu;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.5
    • /
    • pp.535-546
    • /
    • 2019
  • Recently, there has been much research on whole-face replacement systems, but it is not easy to obtain stable results due to the diversity of poses, angles, and faces. To produce a natural synthesis result when replacing the face shown in a video image, technologies such as face area detection, feature extraction, face alignment, face area segmentation, 3D pose adjustment, and facial transposition must all operate at a precise level, and each technology must be able to be combined interdependently. Our analysis shows that, among face replacement technologies, implementation difficulty and contribution to the system are highest for facial feature point extraction and facial alignment. On the other hand, facial transposition and 3D pose adjustment were of low difficulty but still showed a need for development. In this paper, we propose four face replacement models suitable for the COX platform: 2D FaceSwap, OpenPose, Deepfake, and CycleGAN. These models include one suited to frontal-pose image conversion, one for face images with active body movement, one for faces turned up to 15 degrees to the left or right, and a Generative Adversarial Network-based approach.

Design of the emotion expression in multimodal conversation interaction of companion robot (컴패니언 로봇의 멀티 모달 대화 인터랙션에서의 감정 표현 디자인 연구)

  • Lee, Seul Bi;Yoo, Seung Hun
    • Design Convergence Study
    • /
    • v.16 no.6
    • /
    • pp.137-152
    • /
    • 2017
  • This research aims to develop a companion robot experience design for the elderly in Korea based on a needs-function deploy matrix of the robot and research on robot emotion expression in multimodal interaction. First, elderly users' main needs were categorized into 4 groups based on ethnographic research. Second, the functional elements and physical actuators of the robot were mapped to user needs in a function-needs deploy matrix. The final UX design prototype was implemented as a robot with a verbal, non-touch, multimodal interface and emotional facial expressions based on Ekman's Facial Action Coding System (FACS). The proposed robot prototype was validated through a user test session analyzing the influence of robot interaction on users' cognition and emotion, using the Story Recall Test and face emotion analysis software (Emotion API), under two conditions: when the robot's facial expression changed to match the emotion of the information it delivered, and when the robot initiated the interaction cycle voluntarily. The group with the emotional robot showed a relatively high recall rate in the delayed recall test, and in the facial expression analysis, the robot's facial expression and interaction initiation affected the emotion and preference of the elderly participants.

A Three-Dimensional Facial Modeling and Prediction System (3차원 얼굴 모델링과 예측 시스템)

  • Gu, Bon-Gwan;Jeong, Cheol-Hui;Cho, Sun-Young;Lee, Myeong-Won
    • Journal of the Korea Computer Graphics Society
    • /
    • v.17 no.1
    • /
    • pp.9-16
    • /
    • 2011
  • In this paper, we describe the development of a system for generating a 3-dimensional human face and predicting its appearance as it ages over subsequent years, using 3D scanned facial data and photo images. It is composed of 3-dimensional texture mapping functions, a facial definition parameter input tool, and 3-dimensional facial prediction algorithms. With the texture mapping functions, we can generate a new model of a given face at a specified age using a scanned facial model and photo images. The texture mapping uses three photo images: a front image and two side images of a face. The facial definition parameter input tool is a user interface necessary for texture mapping; it is used to match facial feature points between the photo images and a 3D scanned facial model in order to obtain material values in high resolution. We have calculated material values for future facial models and predicted future facial models in high resolution through a statistical analysis of 100 scanned facial models.

A Software Error Examination of 3D Automatic Face Recognition Apparatus(3D-AFRA) : Measurement of Facial Figure Data (3차원 안면자동인식기(3D-AFRA)의 Software 정밀도 검사 : 형상측정프로그램 오차분석)

  • Seok, Jae-Hwa;Song, Jung-Hoon;Kim, Hyun-Jin;Yoo, Jung-Hee;Kwak, Chang-Kyu;Lee, Jun-Hee;Kho, Byung-Hee;Kim, Jong-Won;Lee, Eui-Ju
    • Journal of Sasang Constitutional Medicine
    • /
    • v.19 no.3
    • /
    • pp.51-61
    • /
    • 2007
  • 1. Objectives: The face is an important standard for the classification of Sasang constitutions. We are developing a 3D Automatic Face Recognition Apparatus (3D-AFRA) to analyze facial characteristics. This apparatus provides a 3D image and data of a person's face and measures facial figure data, so we should examine the measurement error of its facial figure data through a software error analysis. 2. Methods: We scanned faces using 3D-AFRA and measured lengths between Facial Definition Parameters of the facial figure data with the Facial Measurement program. 2.1 Repeatability test: We measured lengths between Facial Definition Parameters of facial figure data restored by 3D-AFRA with the Facial Measurement program 10 times, then compared the 10 results with each other. 2.2 Measurement error test: We measured lengths between Facial Definition Parameters with two different measurement programs, the Facial Measurement program and Rapidform2006, using two measurement methods: straight-line measurement and curved-line measurement. We then compared the results from the Facial Measurement program with those from Rapidform2006. 3. Results and Conclusions: In the repeatability test, the standard deviation of the results was 0.084-0.450 mm. In the straight-line measurement error test, the average error was 0.0582 mm and the maximum error was 0.28 mm; in the curved-line measurement error test, the average error was 0.413 mm and the maximum error was 1.53 mm. In conclusion, we assessed the accuracy and repeatability of the Facial Measurement program as considerably good. We will continue to improve the accuracy of 3D-AFRA in both hardware and software.
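The two tests above reduce to simple summary statistics: the sample standard deviation of repeated measurements for repeatability, and the mean and maximum absolute difference between the two programs for measurement error. A minimal sketch, with hypothetical input data:

```python
import statistics

def repeatability_and_error(repeated, paired):
    """Summarize a repeatability test and a measurement-error test.

    `repeated`: repeated measurements of the same length by one program.
    `paired`: (program_a, program_b) value pairs for the error test.
    Illustrative helper only, not the study's software.
    """
    sd = statistics.stdev(repeated)                # repeatability
    errors = [abs(a - b) for a, b in paired]       # per-pair absolute error
    return sd, statistics.mean(errors), max(errors)
```

Feeding in, say, three repeated readings and two program pairs returns the standard deviation, average error, and maximum error in the same units as the inputs (millimeters in the study).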


Facial Nerve Decompression for Facial Nerve Palsy with Temporal Bone Fracture: Analysis of 25 Cases (측두골 골절후 발생한 안면마비 환자의 안면신경감압술: 25명 환자들의 증례분석)

  • Nam, Han Ga Wi;Hwang, Hyung Sik;Moon, Seung-Myung;Shin, Il Young;Sheen, Seung Hun;Jeong, Je Hoon
    • Journal of Trauma and Injury
    • /
    • v.26 no.3
    • /
    • pp.131-138
    • /
    • 2013
  • Purpose: The aim of this study is to present a retrospective review of patients who had a sudden onset of facial palsy after trauma and who underwent facial nerve decompression. Methods: The cases of 25 patients who had traumatic facial palsy were reviewed. Facial nerve function was graded according to the House-Brackmann (H-B) grading scale. According to facial nerve decompression, patients were categorized into a surgical (decompression) group, with 7 patients in the early decompression subgroup and 2 patients in the late decompression subgroup, and a conservative group (16 patients). Results: The facial nerve decompression group included 8 males and 1 female, aged 2 to 86 years, with a mean age of 40.8. In the early decompression subgroup, facial palsy was H-B grade I to III in 6 cases (66.7%), and H-B grade IV was observed in 1 case (11.1%). In the late decompression subgroup, 1 patient (11.1%) showed no improvement, and the other (11.1%) improved from H-B grade V to H-B grade III. A comparison of patients who underwent surgery within 2 weeks with those who underwent surgery after 2 weeks did not show any significant difference in improvement of H-B grades (p>0.05). The conservative management group included 15 males and 1 female, aged 6 to 66 years, with a mean age of 36. At the last follow-up, 15 patients showed H-B grades of I to III (93.7%), and only 1 patient had an H-B grade of IV (6.3%). Conclusion: We assume that early facial nerve decompression can lead to some recovery from traumatic facial palsy, but a prospective controlled study should be conducted to compare conservative treatment with late decompression.

ORTHODONTIC TREATMENT RELATED TO FACIAL PATTERNS (안모유형에 따른 교정치료)

  • Hwang, Chung-Ju
    • The korean journal of orthodontics
    • /
    • v.18 no.2
    • /
    • pp.475-488
    • /
    • 1988
  • Certain malocclusions are associated with specific facial types, and it is important for the clinician to classify the common facial characteristics of each patient, because the reaction to treatment mechanics and the stability of the denture depend upon analysis of the facial pattern. Basically, there are 3 distinct facial types or patterns under which almost all malocclusions can be classified. 1. Mesofacial represents the most average growth. 2. Brachyfacial, a horizontal growth pattern, has weak muscles, with a deep bite. 3. Dolichofacial, a vertical growth pattern, has strong muscles, a narrow dental arch, and an open bite. Brachyfacial patterns show resistance to mandibular rotation during treatment, can accept a more protrusive denture, and are predominantly treated without extraction, whereas dolichofacial patterns tend to open during treatment and require a more retracted denture in order to assure post-treatment stability. Brachyfacial patterns are better treated with extrusive force systems, whereas dolichofacial patterns are treated with intrusive force systems using headgear and intermaxillary elastics.


Facial Feature Tracking from a General USB PC Camera (범용 USB PC 카메라를 이용한 얼굴 특징점의 추적)

  • 양정석;이칠우
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2001.10b
    • /
    • pp.412-414
    • /
    • 2001
  • In this paper, we describe a real-time facial feature tracker. We used only a general USB PC camera, without a frame grabber. The system achieves a rate of 8+ frames/second without any low-level library support. It tracks the pupils, nostrils, and corners of the lips. The signal from the USB camera is in YUV 4:2:0 format. We convert the signal into the RGB color model to display the image, and we interpolate the V channel of the signal for use in extracting the facial region. We then analyze 2D blob features in the Y channel, the luminance of the image, with geometric restrictions to locate each facial feature within the detected facial region. Our method is so simple and intuitive that the system can work in real time.
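The YUV-to-RGB conversion step described above can be sketched per pixel as follows. The abstract does not state which coefficients the authors used, so the standard ITU-R BT.601 full-range matrix is assumed here.

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range YUV pixel to RGB (assumed BT.601 coefficients).

    y, u, v are integers in 0..255; u and v are offset by 128 as is
    conventional for stored chroma samples.
    """
    d = u - 128  # centered blue-difference chroma
    e = v - 128  # centered red-difference chroma
    r = y + 1.402 * e
    g = y - 0.344136 * d - 0.714136 * e
    b = y + 1.772 * d

    def clamp(x):
        # Round and clip to the valid 8-bit range.
        return max(0, min(255, int(round(x))))

    return clamp(r), clamp(g), clamp(b)
```

With neutral chroma (u = v = 128) the output is a gray level equal to Y, which is a quick sanity check for the conversion.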
