• Title/Summary/Keyword: Facial movement

A Case Report of Miller-Fisher Syndrome with Ophthalmoplegia and Facial Palsy (양안의 완전 외안근마비와 편측 안면마비를 동반한 밀러-피셔 증후군 환자 치험 1례)

  • Ji-Min Choi;In-Jeong Jo;Seok-Hun Hong
    • The Journal of Korean Medicine Ophthalmology and Otolaryngology and Dermatology
    • /
    • v.37 no.3
    • /
    • pp.84-98
    • /
    • 2024
  • Objective : The purpose of this study is to report the effect of Korean medical treatments on Miller-Fisher syndrome with ophthalmoplegia and facial palsy. Methods : We treated a 69-year-old female diagnosed with Miller-Fisher syndrome with ophthalmoplegia, right facial palsy and other symptoms. She received Korean medical treatments such as herbal medicine (Gamiboik-tang), cupping therapy and acupuncture (including pharmacopuncture). The severity of ophthalmoplegia was evaluated by the range of eyeball movement and the Scott and Kraft score. The severity of facial palsy was evaluated by the Yanagihara score, and the severity of other symptoms such as diplopia, dizziness, gait disturbance and neck and shoulder pain was evaluated by VAS. Results : Each neurological symptom improved after Korean medical treatments. In the case of ophthalmoplegia, the Scott and Kraft score increased from -4 to 0, and there were no remaining restrictions on eye movements. In the case of facial palsy, the Yanagihara score increased from 10 to 40. Other symptoms such as diplopia, dizziness, gait disturbance and neck and shoulder pain also improved. Conclusions : This case report suggests that Korean medical treatments can be effective for Miller-Fisher syndrome patients with ophthalmoplegia and facial palsy.

Face Tracking System Using Updated Skin Color (업데이트된 피부색을 이용한 얼굴 추적 시스템)

  • Ahn, Kyung-Hee;Kim, Jong-Ho
    • Journal of Korea Multimedia Society
    • /
    • v.18 no.5
    • /
    • pp.610-619
    • /
    • 2015
  • In this paper, we propose a real-time face tracking system using an adaptive face detector and a tracking algorithm. An image is divided into background and face-candidate regions by a skin color model that is updated in real time, in order to accurately detect facial features. The facial characteristics are extracted using five types of simple Haar-like features. The extracted features are reinterpreted by Principal Component Analysis (PCA), and the resulting principal components are classified into facial and non-facial areas by a Support Vector Machine (SVM). The movement of the face is traced by a Kalman filter and Mean shift, which use the static information of the detected faces and the differences between previous and current frames. The proposed system identifies the initial skin color and updates it through a real-time color detecting system; updating the skin color allows similar background colors to be removed. Performance improves by up to 20% when the background region is reduced, compared to extracting features from the entire image. The increased detection rate and speed are obtained through the use of the Kalman filter and Mean shift.
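A minimal sketch of the skin-color-plus-Mean-shift idea, in Python with OpenCV. A chroma histogram of an assumed initial face box stands in for the paper's adaptive skin model; the Haar/PCA/SVM verification stage, the Kalman filter, and the real-time color update are omitted, and all window sizes are illustrative, not the authors' values.

```python
import cv2

# Sketch: track a face by back-projecting a skin-chroma histogram
# and following the peak with Mean shift.
cap = cv2.VideoCapture(0)           # any video source
ok, frame = cap.read()
track_win = (200, 150, 120, 120)    # assumed initial face box (x, y, w, h)

# Chroma (Cr, Cb) histogram of the initial face region
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
x, y, w, h = track_win
roi = ycrcb[y:y+h, x:x+w]
hist = cv2.calcHist([roi], [1, 2], None, [32, 32], [0, 256, 0, 256])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    # Skin-likelihood map from the chroma histogram
    back = cv2.calcBackProject([ycrcb], [1, 2], hist, [0, 256, 0, 256], 1)
    _, track_win = cv2.meanShift(back, track_win, term)
    x, y, w, h = track_win
    cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
    cv2.imshow("track", frame)
    if cv2.waitKey(1) == 27:        # Esc to quit
        break
```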

Automatic Synchronization of Separately-Captured Facial Expression and Motion Data (표정과 동작 데이터의 자동 동기화 기술)

  • Jeong, Tae-Wan;Park, Sang-Il
    • Journal of the Korea Computer Graphics Society
    • /
    • v.18 no.1
    • /
    • pp.23-28
    • /
    • 2012
  • In this paper, we present a new method for automatically synchronizing captured facial expression data with its corresponding motion data. In a usual optical motion capture set-up, a detailed facial expression cannot be captured simultaneously in the motion capture session because its resolution requirement is higher than that of motion capture. The two are therefore captured in separate sessions and must be synchronized in post-processing to be used for generating a convincing character animation. Based on the patterns of the actor's neck movement extracted from the two data sets, we present a non-linear time warping method for automatic synchronization. We demonstrate the method on actual examples to show its viability.
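The synchronization step is a non-linear time warping of two neck-movement signals. Below is a minimal dynamic-time-warping sketch in NumPy as a stand-in; the paper's exact warping formulation is not given here, and the example signals are synthetic.

```python
import numpy as np

def dtw_path(a, b):
    """Non-linear time warping between two 1-D signals.

    Returns a monotonic alignment path; a stand-in for synchronizing
    neck-movement patterns from the two capture sessions.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from (n, m) to recover the warping path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Example: the same neck motion sampled at different timings
face_session = np.sin(np.linspace(0, 6, 80))
body_session = np.sin(np.linspace(0, 6, 120))
alignment = dtw_path(face_session, body_session)
```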

Complete denture rehabilitation of a fully edentulous patient with unilateral facial nerve palsy: A case report (편측성 안면 신경마비 환자에서의 총의치 수복 증례)

  • Choi, Eunyoung;Lee, Ji-Hyoun;Choi, Sunyoung
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.55 no.4
    • /
    • pp.451-457
    • /
    • 2017
  • Bell's palsy is an acute-onset unilateral peripheral facial neuropathy. For patients with sequelae of facial paresis, successful rehabilitation of fully edentulous arches is challenging. This case report describes the treatment procedures and clinical considerations in fabricating complete dentures for a patient who showed unilateral displacement of the mandible, a unilateral chewing pattern and parafunctional jaw movement due to sequelae of Bell's palsy. Gothic arch tracing was used to record a reproducible centric relation, and lingualized occlusion was employed to provide freedom of movement between centric relation and the patient's habitual functional area, resulting in dentures that were satisfactory in terms of function and esthetics.

Möbius Syndrome Demonstrated by the High-Resolution MR Imaging: a Case Report and Review of Literature

  • Hwang, Minhee;Baek, Hye Jin;Ryu, Kyeong Hwa;Choi, Bo Hwa;Ha, Ji Young;Do, Hyun Jung
    • Investigative Magnetic Resonance Imaging
    • /
    • v.23 no.2
    • /
    • pp.167-171
    • /
    • 2019
  • Möbius syndrome is a rare congenital condition characterized by abducens and facial nerve palsy, resulting in limitation of lateral gaze movement and facial diplegia. However, to our knowledge, there have been few studies evaluating the cranial nerves on MR imaging in Möbius syndrome. Herein, we describe a rare case of Möbius syndrome presenting with limitation of lateral gaze and weakness of facial expression since the neonatal period. In this case, high-resolution MR imaging played a key role in diagnosing Möbius syndrome by directly visualizing the corresponding cranial nerve abnormalities.

Comparative Analysis of Facial Animation Production by Digital Actors - Keyframe Animation and Mobile Capture Animation

  • Choi, Chul Young
    • International journal of advanced smart convergence
    • /
    • v.13 no.3
    • /
    • pp.176-182
    • /
    • 2024
  • In the recent game market, classic games released in the past are being re-released with high-quality visuals, and users are generally satisfied. Realistic digital actors, which were not feasible in the past, are now becoming a reality. Epic Games launched the MetaHuman Creator website in September 2021, allowing anyone to easily create realistic human characters. Since then, the number of animations created using MetaHumans has been increasing. As the characters become more realistic, the movement and expression animations expected by the audience must also be convincingly realized. Until recently, traditional methods were the primary approach for producing realistic character animations. For facial animation, Epic Games introduced an improved method in the Live Link app in 2023, which provides the highest quality among mobile-based techniques. In this context, this paper compares the results of animation produced with keyframe animation and with mobile-based facial capture. After creating an emotional expression animation from four sentences, the results were compared in Unreal Engine. While the facial capture method is more natural and easier to use, the precise and exaggerated expressions possible with the keyframe method cannot be overlooked, suggesting that a hybrid approach using both methods will likely continue for the foreseeable future.

A CEPHALOMETRIC STUDY ON THE HARD AND SOFT TISSUE CHANGES BY THE RAPID PALATAL EXPANSION IN ANGLE'S CLASS III MALOCCLUSION (상악골 급속확장에 의한 Angle씨 제 III급 부정교합 환자의 안모형태 변화에 관한 두부방사선 계측학적 연구)

  • Tahk, Seon Gun;Ryu, Young Kyu
    • The korean journal of orthodontics
    • /
    • v.14 no.1
    • /
    • pp.161-172
    • /
    • 1984
  • This study was undertaken to evaluate the cephalometric changes of the soft-tissue and skeletal profile subsequent to rapid palatal expansion in 25 Angle's Class III cases, ranging in age from six to fifteen years, with cross-bite of the anterior teeth, underdevelopment of the maxilla and facial disharmony. The following results were obtained: 1. ANS moved downward; Point A moved forward & downward, increasing SNA, and Point B moved backward & downward, decreasing SNB. 2. The mandible was rotated backward & downward and the maxilla moved forward & downward, with bite opening and improvement of the anterior cross-bite. 3. The soft tissue over the mandible rotated backward & downward following the hard-tissue changes, causing a decrease of the facial convexity angle and backward & downward rotation of Point B' and Pog'. 4. The response of the upper lip was more significant in the downward than the forward direction, and was correlated with the upper central incisor and the mandibular rotation. 5. The response of the lower lip was more significant in the downward than the backward direction, and was correlated with the mandibular rotation. 6. There was a rather high degree of correlation between the skeletal profile and the soft-tissue profile: 1:LS, $\bar{1}$:Pog', Pog:LS, Pog:LI and Pog:Pog' in horizontal measurements, and $\bar{1}$:Pog', Pog:LI and Pog:Pog' in vertical measurements.

Face Detection using Color Information and AdaBoost Algorithm (색상정보와 AdaBoost 알고리즘을 이용한 얼굴검출)

  • Na, Jong-Won;Kang, Dae-Wook;Bae, Jong-Sung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.5
    • /
    • pp.843-848
    • /
    • 2008
  • Most face detection techniques use motion information from the face. The traditional face detection method uses the difference-picture method to detect movement. However, most approaches do not consider real-time operation, and real-time implementation of such algorithms is complicated and not easy. In this paper, the RGB video input is first converted to YCbCr to detect the facial region in real time. Next, the difference between two adjacent video frames is computed and Glassfire labeling is conducted. Labeled regions whose values exceed a threshold are recognized as motion areas and extracted from the video. Face detection is then conducted on the extracted motion areas, and the AdaBoost algorithm is used to extract the facial characteristics required for detection.
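A hedged sketch of this pipeline in Python with OpenCV: adjacent-frame differencing, connected-component labeling standing in for Glassfire labeling, and a pretrained Haar cascade (trained with AdaBoost) run on the extracted motion areas. The thresholds and cascade parameters are illustrative, not the paper's.

```python
import cv2

# Frame differencing -> component labeling -> cascade face detection.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)            # any video source
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)          # adjacent-frame difference
    _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    # Label connected motion regions (stand-in for Glassfire labeling)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(motion)
    for i in range(1, n):                        # label 0 is background
        x, y, w, h, area = stats[i]
        if area < 500:                           # ignore small motion blobs
            continue
        # Run the AdaBoost-trained cascade only on the motion area
        roi = gray[y:y+h, x:x+w]
        for (fx, fy, fw, fh) in cascade.detectMultiScale(roi, 1.1, 4):
            cv2.rectangle(frame, (x+fx, y+fy),
                          (x+fx+fw, y+fy+fh), (0, 255, 0), 2)
    prev_gray = gray
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) == 27:                     # Esc to quit
        break
```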

Gaze Detection System by Wide and Narrow View Camera (광각 및 협각 카메라를 이용한 시선 위치 추적 시스템)

  • Park, Kang-Ryoung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.12C
    • /
    • pp.1239-1249
    • /
    • 2003
  • Gaze detection is to locate, by computer vision, the position on a monitor screen where a user is looking. Previous gaze detection systems use a wide view camera, which can capture the user's whole face. However, the image resolution of such a camera is too low, and the fine movements of the user's eyes cannot be detected exactly. Therefore, we implement the gaze detection system with both a wide view camera and a narrow view camera. In order to track the position of the user's eye as it changes with facial movements, the narrow view camera performs auto focusing and auto pan/tilt based on the detected 3D facial feature positions. As experimental results, we obtain the facial and eye gaze position on a monitor; the accuracy between the computed gaze positions and the real ones is about 3.1 cm RMS error when facial movements are permitted, and 3.57 cm when both facial and eye movements are permitted. The processing time is short enough for a real-time system (below 30 msec on a Pentium IV 1.8 GHz).
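For reference, the reported accuracy figures are RMS errors between computed and true on-screen gaze points. A minimal sketch of that computation follows; the coordinates are made up for illustration.

```python
import numpy as np

# RMS gaze error: root-mean-square of the Euclidean distances between
# computed and ground-truth gaze points on the screen (in cm).
computed = np.array([[10.2, 5.1], [22.8, 14.0], [31.5, 20.3]])  # estimated
truth    = np.array([[ 9.0, 4.0], [24.0, 15.0], [30.0, 18.0]])  # actual

rms = np.sqrt(np.mean(np.sum((computed - truth) ** 2, axis=1)))
print(f"RMS gaze error: {rms:.2f} cm")
```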

Multimodal Biometrics Recognition from Facial Video with Missing Modalities Using Deep Learning

  • Maity, Sayan;Abdel-Mottaleb, Mohamed;Asfour, Shihab S.
    • Journal of Information Processing Systems
    • /
    • v.16 no.1
    • /
    • pp.6-29
    • /
    • 2020
  • Biometrics identification using multiple modalities has attracted the attention of many researchers, as it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Utilizing the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers to perform the multimodal recognition. Moreover, the proposed technique has proven robust when some of the above modalities were missing during testing. The proposed system has three main components: detection, which consists of modality-specific detectors that automatically detect images of the different modalities present in the facial video clips; feature selection, which uses a supervised denoising sparse auto-encoder network to capture discriminative representations that are robust to illumination and pose variations; and classification, which consists of a set of modality-specific sparse representation classifiers for unimodal recognition, followed by score-level fusion of the recognition results of the available modalities. Experiments conducted on the constrained facial video dataset (WVU) and the unconstrained facial video dataset (HONDA/UCSD) resulted in 99.17% and 97.14% Rank-1 recognition rates, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, non-planar movement, and pose variations present in the video clips, even in the situation of missing modalities.
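The final step combines unimodal classifier scores over whichever modalities are actually present, which is how missing modalities are tolerated. Below is a minimal score-level fusion sketch; the min-max normalization and simple averaging are assumptions for illustration, not necessarily the paper's fusion rule.

```python
import numpy as np

def fuse(scores_by_modality):
    """Average min-max-normalized match scores over available modalities.

    Missing modalities (value None) are simply skipped, so the fused
    score degrades gracefully instead of failing.
    """
    fused, count = None, 0
    for scores in scores_by_modality.values():
        if scores is None:              # modality missing in this probe video
            continue
        s = np.asarray(scores, dtype=float)
        s = (s - s.min()) / (s.max() - s.min() + 1e-9)   # min-max normalize
        fused = s if fused is None else fused + s
        count += 1
    return fused / count

# Hypothetical per-identity match scores from each unimodal classifier
scores = {
    "frontal_face": [0.9, 0.2, 0.4],
    "left_profile": [0.7, 0.1, 0.5],
    "left_ear":     None,               # ear not visible in this clip
}
print(fuse(scores))                     # fused score per enrolled identity
```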