• Title/Summary/Keyword: Facial Information

A Realtime Expression Control for Realistic 3D Facial Animation (현실감 있는 3차원 얼굴 애니메이션을 위한 실시간 표정 제어)

  • Kim Jung-Gi;Min Kyong-Pil;Chun Jun-Chul;Choi Yong-Gil
    • Journal of Internet Computing and Services / v.7 no.2 / pp.23-35 / 2006
  • This work presents a novel method that automatically extracts the facial region and facial features from motion pictures and controls the expression of a 3D face in real time. To extract the facial region and facial feature points from each color frame, a new nonparametric skin color model is proposed in place of a parametric one. Conventional parametric skin color models, which represent the facial color distribution as a Gaussian, lack robustness under varying lighting conditions and therefore require additional work to extract the exact facial region from face images. To overcome this limitation, we exploit the Hue-Tint chrominance components and represent the skin chrominance distribution as a linear function, which reduces the error in detecting the facial region (see the sketch below). Moreover, the minimal facial feature positions detected by the proposed skin model are adjusted using edge information of the detected facial region along with the proportions of the face. To produce realistic facial expressions, we adopt Waters' linear muscle model and apply an extended version of Waters' muscles to the variation of the facial features of the 3D face. Experiments show that the proposed approach efficiently detects facial feature points and naturally controls the facial expression of the 3D face model.

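A minimal sketch of skin-pixel classification with a linear chrominance model, in the spirit of the abstract above. The paper's exact Hue-Tint components are not given here, so hue from HSV and a normalized-red channel stand in for them, and the slope, intercept, and tolerance values are illustrative assumptions.

    import numpy as np
    import cv2

    def skin_mask(bgr, slope=0.9, intercept=0.05, tol=0.08):
        # Hue in OpenCV is stored in [0, 180); normalize to [0, 1).
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
        hue = hsv[..., 0] / 180.0
        # Normalized red as a stand-in for the paper's Tint component.
        rgb = bgr[..., ::-1].astype(np.float32) + 1e-6
        tint = rgb[..., 0] / rgb.sum(axis=-1)
        # Linear skin model: keep pixels near the line tint = slope * hue + intercept.
        residual = np.abs(tint - (slope * hue + intercept))
        return residual < tol

    # mask = skin_mask(cv2.imread("frame.png"))

Unlike a Gaussian model, this linear rule has no lighting-sensitive covariance to estimate, which is the robustness argument the abstract makes.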

Discrimination of Emotional States In Voice and Facial Expression

  • Kim, Sung-Ill;Yasunari Yoshitomi;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea / v.21 no.2E / pp.98-104 / 2002
  • The present study describes a combination method for recognizing human affective states such as anger, happiness, sadness, and surprise. We extracted emotional features from voice signals and facial expressions, and then trained models on them to recognize emotional states, using a hidden Markov model (HMM) and a neural network (NN). For voice, we used prosodic parameters such as pitch, energy, and their derivatives, which were trained with an HMM for recognition. For facial expressions, on the other hand, we used feature parameters extracted from thermal and visible images, which were trained with an NN for recognition. The recognition rates for the combined voice and facial-expression parameters were better than for either set of parameters alone (a fusion sketch follows below). The simulation results were also compared with human questionnaire results.
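
The abstract describes combining a voice recognizer and a facial-expression recognizer; a minimal late-fusion sketch follows. The class list, the conversion of HMM log-likelihoods to a posterior-like score, and the fusion weight are illustrative assumptions, not details from the paper.

    import numpy as np

    EMOTIONS = ["anger", "happiness", "sadness", "surprise"]

    def fuse(voice_loglik, face_posterior, w=0.5):
        # Turn HMM log-likelihoods into a normalized, posterior-like score.
        v = np.exp(voice_loglik - voice_loglik.max())
        v /= v.sum()
        f = face_posterior / face_posterior.sum()
        # Weighted late fusion of the two modality scores.
        combined = w * v + (1.0 - w) * f
        return EMOTIONS[int(np.argmax(combined))]

    # fuse(np.array([-120.3, -118.9, -125.0, -122.7]),
    #      np.array([0.10, 0.55, 0.20, 0.15]))  ->  "happiness"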

Accurate Registration Method of 3D Facial Scan Data and CBCT Data using Distance Map (거리맵을 이용한 3차원 얼굴 스캔 데이터와 CBCT 데이터의 정확한 정합 기법)

  • Lee, Jeongjin
    • Journal of Korea Multimedia Society / v.18 no.10 / pp.1157-1163 / 2015
  • In this paper, we propose a registration method for 3D facial scan data and CBCT data using voxelization and a distance map. First, the two data sets are initially aligned by voxelizing the 3D facial scan data and matching the centers of mass. Second, a skin surface is extracted from the 3D CBCT data by segmenting the air and skin regions. Third, the positional and rotational differences between the two images are accurately aligned by rigid registration that minimizes the distance between the two skin surfaces (a minimal sketch follows below). Experimental results showed that the proposed registration method correctly aligned the 3D facial scan data and CBCT data for ten patients. Our registration method may provide useful clinical information for oral surgery planning and for diagnosing treatment effects after oral surgery.
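
A minimal sketch of the distance-map idea in the abstract above: precompute, for every voxel, the distance to the CBCT skin surface, then score a candidate rigid transform by averaging the map over the transformed scan points. The optimizer, voxel spacing, and array names are assumptions; only the distance-map scoring follows the abstract.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def build_distance_map(skin_voxels):
        # skin_voxels: boolean 3D array, True on the CBCT skin surface.
        # Each voxel of the result holds the distance to the nearest skin voxel.
        return distance_transform_edt(~skin_voxels)

    def mean_surface_distance(points, dist_map, R, t):
        # points: (N, 3) scan vertices already voxelized into CBCT index space;
        # the initial t can come from the difference of the two centers of mass.
        moved = points @ R.T + t
        idx = np.clip(np.round(moved).astype(int), 0, np.array(dist_map.shape) - 1)
        return dist_map[idx[:, 0], idx[:, 1], idx[:, 2]].mean()

Minimizing mean_surface_distance over R and t (e.g., with a generic optimizer) realizes the rigid registration step.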

Facial Feature Tracking from a General USB PC Camera (범용 USB PC 카메라를 이용한 얼굴 특징점의 추적)

  • 양정석;이칠우
    • Proceedings of the Korean Information Science Society Conference / 2001.10b / pp.412-414 / 2001
  • In this paper, we describe a real-time facial feature tracker. We used only a general USB PC camera, without a frame grabber, and the system achieved a rate of 8+ frames/second without any low-level library support. It tracks the pupils, the nostrils, and the corners of the lips. The signal from the USB camera is in YUV 4:2:0 format. We converted the signal into the RGB color model to display the image (see the conversion sketch below), interpolated the V channel of the signal for extracting the facial region, and analyzed 2D blob features in the Y (luminance) channel, with geometric restrictions, to locate each facial feature within the detected facial region. Our method is simple and intuitive enough that the system works in real time.

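A minimal sketch of the YUV 4:2:0 to RGB conversion mentioned above. The I420 plane layout, nearest-neighbor chroma upsampling, and full-range BT.601 coefficients are common conventions assumed here, not details from the paper.

    import numpy as np

    def yuv420_to_rgb(y, u, v):
        # y: (H, W); u, v: (H/2, W/2). Upsample chroma by pixel doubling.
        uf = u.repeat(2, axis=0).repeat(2, axis=1).astype(np.float32) - 128
        vf = v.repeat(2, axis=0).repeat(2, axis=1).astype(np.float32) - 128
        yf = y.astype(np.float32)
        # Full-range BT.601 conversion equations.
        r = yf + 1.402 * vf
        g = yf - 0.344136 * uf - 0.714136 * vf
        b = yf + 1.772 * uf
        return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)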

Text-driven Speech Animation with Emotion Control

  • Chae, Wonseok;Kim, Yejin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.8 / pp.3473-3487 / 2020
  • In this paper, we present a new approach to creating speech animation with emotional expressions using a small set of example models. To generate realistic facial animation, two kinds of example models, called key visemes and key expressions, are used for lip-synchronization and facial expressions, respectively. The key visemes represent the lip shapes of phonemes such as vowels and consonants, while the key expressions represent the basic emotions of a face. Our approach utilizes a text-to-speech (TTS) system to create a phonetic transcript for the speech animation. Based on the phonetic transcript, a sequence of speech animation is synthesized by interpolating the corresponding sequence of key visemes (a minimal interpolation sketch follows below). Using an input parameter vector, the key expressions are blended by a method of scattered data interpolation. During synthesis, an importance-based scheme is introduced to combine both lip-synchronization and facial expressions into one animation sequence in real time (over 120 Hz). The proposed approach can be applied to diverse types of digital content and applications that use facial animation with high accuracy (over 90%) in speech recognition.
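
A minimal sketch of synthesizing one animation frame by interpolating between the key visemes of adjacent phonemes, as described above. The mesh representation (per-phoneme vertex arrays), the timing format, and linear blending are illustrative assumptions; the paper's interpolation scheme may differ.

    import numpy as np

    def interp_visemes(key_visemes, timeline, t):
        # key_visemes: phoneme -> (N, 3) vertex array of the key viseme mesh.
        # timeline: sorted list of (start_time, phoneme) from the TTS transcript.
        for (t0, p0), (t1, p1) in zip(timeline, timeline[1:]):
            if t0 <= t < t1:
                alpha = (t - t0) / (t1 - t0)      # progress between phonemes
                return (1 - alpha) * key_visemes[p0] + alpha * key_visemes[p1]
        return key_visemes[timeline[-1][1]]        # hold the last viseme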

A Gaze Tracking based on the Head Pose in Computer Monitor (얼굴 방향에 기반을 둔 컴퓨터 화면 응시점 추적)

  • 오승환;이희영
    • Proceedings of the IEEK Conference / 2002.06c / pp.227-230 / 2002
  • In this paper, we concentrate on the overall direction of the gaze based on the head pose, for human-computer interaction. To determine the user's gaze direction in an image, it is important to extract the facial features accurately. For this, we binarize the input image and search for the two eyes and the mouth using the similarity of each block (aspect ratio, size, and average gray value) and the geometric information of the face in the binarized image. To determine the head orientation, we create an imaginary plane on the lines made by the features of the real face and the pinhole of the camera; we call it the virtual facial plane. The position of the virtual facial plane is estimated from the facial features projected on the image plane. We then find the gaze direction using the surface normal vector of the virtual facial plane (see the sketch below). This study, which uses a popular PC camera, should contribute to the practical use of gaze-tracking technology.

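A minimal sketch of the final step above: once the three facial features are known in camera coordinates, the gaze direction is the surface normal of the plane they span. Recovering those 3D positions from their image projections is the part the paper actually solves; here they are assumed given.

    import numpy as np

    def facial_plane_normal(eye_l, eye_r, mouth):
        # Normal of the virtual facial plane spanned by the three features.
        n = np.cross(eye_r - eye_l, mouth - eye_l)
        n /= np.linalg.norm(n)
        # Orient the normal toward the camera (camera looks along -z here).
        return -n if n[2] > 0 else n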

Robust Three-step facial landmark localization under the complicated condition via ASM and POEM

  • Li, Weisheng;Peng, Lai;Zhou, Lifang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.9 / pp.3685-3700 / 2015
  • To avoid the influence of pose, illumination, and facial expression variations, we propose a robust three-step algorithm based on ASM and POEM for facial landmark localization. First, a Model Selection Factor is utilized to achieve a pose-free initialized shape. Second, we use the global shape model of ASM to describe the whole face and the POEM texture model to adjust the position of each landmark (see the sketch below). Third, a second localization is performed to discriminatively refine the subtle shape variations of certain organs and contours. Experiments conducted on four major face datasets demonstrate that the proposed method localizes facial landmarks accurately and outperforms other state-of-the-art methods.
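
A minimal sketch of the ASM-style global shape constraint referenced above: after the texture model proposes new landmark positions, the shape is projected into a PCA shape space and its coefficients are clipped, pulling implausible updates back toward the learned face-shape distribution. The variable names and the 3-sigma limit are conventional ASM choices, not specifics of this paper.

    import numpy as np

    def constrain_shape(x, mean_shape, P, eigvals, k=3.0):
        # x, mean_shape: (2N,) stacked landmark coordinates; P: (2N, m) PCA basis.
        b = P.T @ (x - mean_shape)        # coefficients in the shape space
        limit = k * np.sqrt(eigvals)      # clip at +/- k standard deviations
        b = np.clip(b, -limit, limit)
        return mean_shape + P @ b         # plausible shape closest to x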

A Study on Detecting Glasses in Facial Image

  • Jung, Sung-Gi;Paik, Doo-Won;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information / v.20 no.12 / pp.21-28 / 2015
  • In this paper, we propose a method for detecting glasses in facial images. The method combines, as a weighted sum, the results of facial-element detection and of a glasses-frame candidate region (see the sketch below). The facial-element method detects glasses by defining the detection probability of glasses according to whether facial components are detected; the candidate-region method detects glasses by defining features of the glasses frame within the candidate region. Finally, the results of the two methods are combined with weights. The proposed method is expected to improve a security system's recognition of facial accessories by raising the detection performance for glasses and sunglasses, for example at ATMs.
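
A minimal sketch of the weighted combination described above. The weight and decision threshold are illustrative; the paper derives its own values.

    def detect_glasses(p_facial_elements, p_frame_region, w=0.6, threshold=0.5):
        # p_facial_elements: score from facial-element detection.
        # p_frame_region: score from the glasses-frame candidate region.
        score = w * p_facial_elements + (1.0 - w) * p_frame_region
        return score >= threshold, score

    # detect_glasses(0.8, 0.4)  ->  (True, 0.64)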

Development of Character Input System using Facial Muscle Signal and Minimum List Keyboard (안면근 신호를 이용한 최소 자판 문자 입력 시스템의 개발)

  • Kim, Hong-Hyun;Park, Hyun-Seok;Kim, Eung-Soo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.10a / pp.289-292 / 2009
  • People communicate with one another through language, but a disabled person may be unable to convey ideas through writing or gestures. Therefore, in this paper, we implemented a communication system using facial muscle signals so that disabled persons can communicate. In particular, after feature extraction from the EEG signal, which includes facial muscle activity, the signal is converted into a control signal that is used to select characters on a minimum-list keyboard (a minimal selection sketch follows below).

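A minimal sketch of driving character selection from a muscle-derived trigger, in the spirit of the abstract above. The row/column scanning layout, the envelope thresholding, and all values are assumptions for illustration; the paper's keyboard and signal processing may differ.

    import numpy as np

    LAYOUT = [list("ABCDE"), list("FGHIJ"), list("KLMNO"), list("PQRST")]

    def trigger(window, threshold=0.5):
        # window: recent samples of the rectified facial-muscle signal.
        return np.mean(np.abs(window)) > threshold   # envelope crosses threshold

    def scan_select(row_triggers, col_triggers):
        # One boolean per scan step; the first True fixes the row, then the column.
        row = next((i for i, t in enumerate(row_triggers) if t), 0)
        col = next((j for j, t in enumerate(col_triggers) if t), 0)
        return LAYOUT[row][col]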

Improved STGAN for Facial Attribute Editing by Utilizing Mask Information

  • Yang, Hyeon Seok;Han, Jeong Hoon;Moon, Young Shik
    • Journal of the Korea Society of Computer and Information / v.25 no.5 / pp.1-9 / 2020
  • In this paper, we propose a model that performs more natural facial attribute editing by utilizing mask information for the hair and hat regions. STGAN, one of the state-of-the-art models for facial attribute editing, edits multiple facial attributes naturally, but editing hair-related attributes can produce unnatural results. The key idea of the proposed method is to additionally utilize information about the face regions that the existing model lacks. To do this, we apply three ideas. First, hair information is supplemented by adding a hair-ratio attribute computed through masks. Second, unnecessary changes in the image are suppressed by adding a cycle consistency loss (see the sketch below). Third, a hat segmentation network is added to prevent distortion of the hat region. The effectiveness of the proposed method is evaluated and analyzed through qualitative evaluation. In the experiments, the proposed method generated hair and face regions more naturally and successfully prevented distortion of the hat region.
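
A minimal sketch of the cycle consistency term mentioned above: editing an image toward target attributes and then back toward the source attributes should reconstruct the input, and the L1 penalty discourages unnecessary changes. The generator signature G(image, target_attributes) and the loss weight are assumptions, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def cycle_consistency_loss(G, x, attr_src, attr_tgt, weight=10.0):
        edited = G(x, attr_tgt)            # edit toward the target attributes
        recovered = G(edited, attr_src)    # edit back toward the source attributes
        return weight * F.l1_loss(recovered, x)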