• Title/Summary/Keyword: Facial images

Search Result 632

Lip Shape Synthesis of the Korean Syllable for Human Interface (휴먼인터페이스를 위한 한글음절의 입모양합성)

  • 이용동;최창석;최갑석
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.19 no.4
    • /
    • pp.614-623
    • /
    • 1994
  • Synthesizing speech and facial images is necessary for a human interface in which man and machine converse as naturally as humans do. The target of this paper is synthesizing the facial images. A three-dimensional (3-D) shape model of the face is used to realize variations in facial expression and lip shape. The various facial expressions and lip shapes harmonized with the syllables are synthesized by deforming the 3-D model on the basis of facial muscular actions. Combinations of the consonants and the vowels make 14,364 syllables. The vowels dominate most lip shapes, while the consonants determine a part of them. To determine the lip shapes, this paper investigates all the syllables and classifies the lip-shape patterns according to the vowels and the consonants. As a result, the lip shapes are classified into 8 patterns for the vowels and 2 patterns for the consonants, and synthesis rules are determined for the classified patterns. This method permits us to obtain natural facial images with various facial expressions and lip-shape patterns.
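The paper's own 8-pattern assignment is not reproduced in the abstract, but the syllable-to-vowel step it relies on can be sketched with the standard Hangul decomposition arithmetic; the 8-way vowel grouping below is a hypothetical placeholder, not the paper's classification:

```python
# Sketch: decompose a precomposed Hangul syllable into jamo indices, then
# look up a lip-shape pattern for its vowel. The VOWEL_PATTERN grouping is
# a hypothetical stand-in for the paper's 8 vowel patterns.
HANGUL_BASE = 0xAC00
NUM_JUNGSEONG, NUM_JONGSEONG = 21, 28

JUNGSEONG = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")

# Hypothetical grouping of the 21 medial vowels into 8 lip-shape patterns.
VOWEL_PATTERN = {v: i % 8 for i, v in enumerate(JUNGSEONG)}

def decompose(syllable: str):
    """Return (choseong, jungseong, jongseong) indices for one syllable."""
    code = ord(syllable) - HANGUL_BASE
    if not 0 <= code < 11172:
        raise ValueError("not a precomposed Hangul syllable")
    return (code // (NUM_JUNGSEONG * NUM_JONGSEONG),
            (code // NUM_JONGSEONG) % NUM_JUNGSEONG,
            code % NUM_JONGSEONG)

def lip_pattern(syllable: str) -> int:
    """Map a syllable to the lip-shape pattern of its medial vowel."""
    _, jung, _ = decompose(syllable)
    return VOWEL_PATTERN[JUNGSEONG[jung]]
```

The decomposition itself is the standard Unicode formula; only the pattern table would need to be replaced with the paper's actual rules.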


Can a spontaneous smile invalidate facial identification by photo-anthropometry?

  • Pinto, Paulo Henrique Viana;Rodrigues, Caio Henrique Pinke;Rozatto, Juliana Rodrigues;da Silva, Ana Maria Bettoni Rodrigues;Bruni, Aline Thais;da Silva, Marco Antonio Moreira Rodrigues;da Silva, Ricardo Henrique Alves
    • Imaging Science in Dentistry
    • /
    • v.51 no.3
    • /
    • pp.279-290
    • /
    • 2021
  • Purpose: Using images in the facial image comparison process poses a challenge for forensic experts due to limitations such as the presence of facial expressions. The aims of this study were to analyze how morphometric changes in the face during a spontaneous smile influence the facial image comparison process and to evaluate the reproducibility of measurements obtained by digital stereophotogrammetry in these situations. Materials and Methods: Three examiners used digital stereophotogrammetry to obtain 3-dimensional images of the faces of 10 female participants (aged between 23 and 45 years). Photographs of the participants' faces were captured at rest (group 1) and with a spontaneous smile (group 2), resulting in a total of 60 3-dimensional images. The digital stereophotogrammetry device captured the images in 3.5 ms, which prevented undesirable movements of the participants. Linear measurements between facial landmarks were made in millimeters, and the data were subjected to multivariate and univariate statistical analyses using Pirouette® version 4.5 (InfoMetrix Inc., Woodinville, WA, USA) and Microsoft Excel® (Microsoft Corp., Redmond, WA, USA), respectively. Results: The measurements that most strongly influenced the separation of the groups were related to the labial/buccal region. In general, the data showed low standard deviations, differing by less than 10% from the measured mean values, which demonstrates that the digital stereophotogrammetry technique was reproducible. Conclusion: The impact of spontaneous smiles on the facial image comparison process should be considered, and digital stereophotogrammetry provided good reproducibility.
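The reproducibility criterion the abstract describes (standard deviation under 10% of the mean for repeated inter-landmark distances) can be sketched as follows; the measurement values are made up for illustration, not the study's data:

```python
import numpy as np

def is_reproducible(measurements, tol=0.10):
    """measurements: (n_repetitions, n_distances) array of distances in mm.
    Returns, per distance, whether the sample SD is below tol * mean."""
    m = np.asarray(measurements, dtype=float)
    return m.std(axis=0, ddof=1) < tol * m.mean(axis=0)

# Three repeated trials of two illustrative inter-landmark distances (mm)
trials = [[52.1, 33.8], [51.7, 34.2], [52.4, 33.5]]
ok = is_reproducible(trials)
```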

Artificial Intelligence for Assistance of Facial Expression Practice Using Emotion Classification (감정 분류를 이용한 표정 연습 보조 인공지능)

  • Dong-Kyu, Kim;So Hwa, Lee;Jae Hwan, Bong
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.6
    • /
    • pp.1137-1144
    • /
    • 2022
  • In this study, an artificial intelligence (AI) was developed to help users practice the facial expressions that convey emotions. The developed AI feeds multimodal inputs, consisting of sentences and facial images, to deep neural networks (DNNs), which calculate the similarity between the emotion predicted from the sentence and the emotion predicted from the facial image. The user practices facial expressions for the situation given by a sentence, and the AI provides numerical feedback based on that similarity. A ResNet34 network was trained on the public FER2013 data to predict emotions from facial images. To predict emotions from sentences, a KoBERT model was trained by transfer learning on the public conversational speech dataset for emotion classification released by AIHub. The DNN that predicts emotions from facial images demonstrated 65% accuracy, comparable to human emotion classification ability, and the DNN that predicts emotions from sentences achieved 90% accuracy. The performance of the developed AI was evaluated through facial expression experiments in which an ordinary person participated.
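The abstract does not name the similarity metric used between the two emotion predictions; as one plausible sketch, cosine similarity between the two softmax distributions could serve, assuming the class ordering of both models matches:

```python
import numpy as np

def emotion_similarity(p_text, p_face):
    """Cosine similarity between two softmax emotion distributions.
    Assumes both vectors use the same class order (e.g. 7 FER2013 labels)."""
    a, b = np.asarray(p_text, float), np.asarray(p_face, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Identical distributions score 1.0 and disjoint one-hot predictions score 0.0, which maps naturally onto the numerical feedback described.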

The Improving Method of Facial Recognition Using the Genetic Algorithm (유전자 알고리즘에 의한 얼굴인식성능의 향상 방안)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.11 no.1
    • /
    • pp.95-105
    • /
    • 2005
  • In a security system based on facial recognition, recognition performance depends on environmental factors (e.g., facial expression, hair style, age, and make-up). To compensate for such easily changing conditions, it is common to set a threshold, replace registered face images with new images that exceed the threshold, and update the registered face images over time. However, this approach can produce inaccurate matching results and can easily be triggered by similar-looking faces. We therefore propose a genetic algorithm that accommodates both the degree of facial similarity and the variety of recognition targets, and that has strong learning capacity to avoid inaccurate registration. We experimented with variable and similar face images (30 images per person, 300 images in total), using eigenfaces based on principal component analysis as the face recognition technique. The proposed method not only improved recognition of a dominant gene but also decreased the response rate to a recessive gene.
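As a hedged illustration of the genetic-algorithm idea (the paper's actual genome encoding and fitness function are not given in the abstract), a minimal GA with elitist selection, single-point crossover, and bit-flip mutation might look like this, with a toy fitness standing in for the recognition objective:

```python
import random

# Minimal GA sketch: evolve a bit-mask (e.g. which registered templates to
# keep) that maximizes a fitness function. The fitness used below is a toy
# stand-in, not the paper's objective.
def evolve(fitness, n_bits, pop_size=20, gens=50, p_mut=0.05, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_bits)    # single-point crossover
            child = a[:cut] + b[cut:]
            # per-bit flip mutation with probability p_mut
            children.append([bit ^ (rng.random() < p_mut) for bit in child])
        pop = elite + children
    return max(pop, key=fitness)

best = evolve(lambda g: sum(g), n_bits=16)    # toy fitness: count of ones
```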


A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul;Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.2 no.2
    • /
    • pp.120-133
    • /
    • 2008
  • This paper presents a novel approach for facial motion tracking and facial expression cloning to create a realistic facial animation of a 3D avatar. The exact head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation. In this paper, we deal with these two problems. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation can robustly estimate a 3D head pose from input video images. Given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image. It is possible to recover the head pose regardless of light variations and self-occlusion by updating the template dynamically. In the phase of synthesizing the facial expression, the variations of the major facial feature points of the face images are tracked by using optical flow and the variations are retargeted to the 3D face model. At the same time, we exploit the RBF (Radial Basis Function) to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis is done by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. From the experiments, we can prove that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
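The RBF-based retargeting step can be sketched as follows, assuming Gaussian basis functions centered on the tracked feature points; the paper does not specify the kernel, and the σ value and data here are illustrative:

```python
import numpy as np

# Sketch of RBF-based deformation: displacements tracked at a few feature
# points are spread to surrounding mesh vertices with Gaussian radial
# basis functions (kernel choice and sigma are assumptions).
def rbf_deform(vertices, centers, displacements, sigma=1.0):
    """vertices: (n,3) mesh points; centers: (m,3) feature points;
    displacements: (m,3) tracked offsets at the feature points."""
    d = np.linalg.norm(vertices[:, None, :] - centers[None, :, :], axis=2)
    phi = np.exp(-(d / sigma) ** 2)                      # (n, m) basis values
    dc = np.linalg.norm(centers[:, None] - centers[None, :], axis=2)
    g = np.exp(-(dc / sigma) ** 2)                       # (m, m) Gram matrix
    w = np.linalg.solve(g, displacements)                # (m, 3) weights
    return vertices + phi @ w
```

By construction the deformation reproduces the tracked displacements exactly at the feature points and falls off smoothly with distance, which is what makes RBFs a natural fit for local deformation around major feature points.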

Measurement of facial soft tissues thickness using 3D computed tomographic images (3차원 전산화단층찰영 영상을 이용한 얼굴 연조직 두께 계측)

  • Jeong Ho-Gul;Kim Kee-Deog;Han Seung-Ho;Shin Dong-Won;Hu Kyung-Seok;Lee Jae-Bum;Park Hyok;Park Chang-Seo
    • Imaging Science in Dentistry
    • /
    • v.36 no.1
    • /
    • pp.49-54
    • /
    • 2006
  • Purpose: To evaluate the accuracy and reliability of a program that measures facial soft tissue thickness in 3D computed tomographic images, by comparison with direct measurement. Materials and Methods: One cadaver was scanned with a helical CT at 3 mm slice thickness and 3 mm/sec table speed. The acquired data were reconstructed at a 1.5 mm interval and the images were transferred to a personal computer. Facial soft tissue thicknesses were measured in the 3D images using a newly developed program. For direct measurement, the cadaver was cut with a bone cutter and a ruler was placed against the cut surface. Pictures of the facial soft tissues were then taken with a high-resolution digital camera, the measurements were made on the photographic images, and the procedure was repeated ten times. A repeated-measures analysis of variance was adopted to compare the measurements from the two methods, and comparison by area was analyzed with the Mann-Whitney test. Results: There were no statistically significant differences between the direct measurements and those made in the 3D images (p>0.05). Statistical differences were found at 17 points, but all points except 2 showed a mean difference of 0.5 mm or less. Conclusion: The developed software program for measuring facial soft tissue thickness in 3D images was accurate enough to make such measurements easier in forensic science and anthropology.
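The Mann-Whitney test used for the area-wise comparison reduces to the U statistic, which can be computed directly for small samples; the values in the test are illustrative, not the study's data:

```python
# Sketch of the Mann-Whitney U statistic: count how often a value from
# sample a exceeds a value from sample b, with ties counted as 1/2.
def mann_whitney_u(a, b):
    greater = sum(1 for x in a for y in b if x > y)
    ties = sum(1 for x in a for y in b if x == y)
    return greater + 0.5 * ties
```

In practice one would use a library routine (e.g. SciPy's `mannwhitneyu`) to also obtain the p-value; the direct count above just shows what the statistic measures.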


Facial Feature Localization from 3D Face Image using Adjacent Depth Differences (인접 부위의 깊이 차를 이용한 3차원 얼굴 영상의 특징 추출)

  • 김익동;심재창
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.5
    • /
    • pp.617-624
    • /
    • 2004
  • This paper describes a new facial feature localization method that uses Adjacent Depth Differences (ADD) on the 3D facial surface. In general, humans judge how deep or shallow a region is by comparing the depth of neighboring regions of an object: the larger the depth difference between regions, the easier each region is to recognize. Using this principle, facial feature extraction becomes easier, more reliable, and faster. 3D range images are used as input. ADD values are obtained by differencing two range values separated by a fixed coordinate distance, in both the horizontal and vertical directions. The ADD maps and the input image are analyzed to extract facial features, and the nose region, the most prominent feature of the 3D facial surface, is localized effectively and accurately.
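The ADD computation described above can be sketched directly, assuming a dense range image and a fixed pixel offset:

```python
import numpy as np

# Sketch of Adjacent Depth Differences (ADD): difference range values that
# lie a fixed offset apart, horizontally and vertically. Prominent
# features such as the nose tip produce large-magnitude differences.
def add_maps(range_image, offset=3):
    z = np.asarray(range_image, dtype=float)
    h = np.zeros_like(z)
    v = np.zeros_like(z)
    h[:, :-offset] = z[:, offset:] - z[:, :-offset]  # horizontal ADD
    v[:-offset, :] = z[offset:, :] - z[:-offset, :]  # vertical ADD
    return h, v
```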

METHODS OF EYEBROW REGION EXTRACTION AND MOUTH DETECTION FOR FACIAL CARICATURING SYSTEM PICASSO-2 EXHIBITED AT EXPO2005

  • Tokuda, Naoya;Fujiwara, Takayuki;Funahashi, Takuma;Koshimizu, Hiroyasu
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.425-428
    • /
    • 2009
  • We have researched and developed the caricature generation system PICASSO. PICASSO outputs a deformed facial caricature by comparing the input face with a prepared mean face. We specialized it as PICASSO-2 for a robot exhibited at Aichi EXPO2005. This robot, driven by PICASSO-2, drew facial caricatures on shrimp rice crackers with a laser pen. We have recently been exhibiting another revised robot characterized by brush drawing. The system takes a couple of facial images with a CCD camera, extracts facial features from the images, and generates the facial caricature in real time. We experimentally evaluated the caricatures using a large amount of data collected at Aichi EXPO2005. The results made clear that the system was not sufficiently accurate in eyebrow region extraction and mouth detection. In this paper, we propose improved methods for eyebrow region extraction and mouth detection.


Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems
    • /
    • v.17 no.2
    • /
    • pp.337-351
    • /
    • 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed and a double-layer cascade structure is used to detect the face in each video image. Two deep convolutional neural networks then extract the time-domain and spatial-domain facial features of the video: a spatial convolutional neural network extracts spatial information features from each frame of the static expression images, while a temporal convolutional neural network extracts dynamic information features from the optical flow computed over multiple frames of expression images. A multiplicative fusion is performed on the spatiotemporal features learned by the two networks. Finally, the fused features are fed to a support vector machine to perform the facial expression classification task. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the recognition rates obtained by the proposed method reach 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method obtains higher recognition accuracy than other recently reported methods.
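The multiplicative fusion step can be sketched as an element-wise product of the two feature vectors; here the CNN features are toy placeholders, and a nearest-centroid rule stands in for the SVM used in the paper:

```python
import numpy as np

# Sketch of the fusion step: element-wise multiplication of spatial and
# temporal feature vectors before classification. A nearest-centroid
# classifier substitutes for the paper's SVM in this illustration.
def fuse(spatial, temporal):
    return np.asarray(spatial) * np.asarray(temporal)

def nearest_centroid_predict(train_x, train_y, test_x):
    classes = np.unique(train_y)
    centroids = np.stack([train_x[train_y == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(test_x[:, None] - centroids[None], axis=2)
    return classes[d.argmin(axis=1)]
```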

Soft tissue evaluation using 3-dimensional face image after maxillary protraction therapy (3차원 얼굴 영상을 이용한 상악 전방견인 치료 후의 연조직 평가)

  • Choi, Dong-Soon;Lee, Kyoung-Hoon;Jang, Insan;Cha, Bong-Kuen
    • The Journal of the Korean dental association
    • /
    • v.54 no.3
    • /
    • pp.217-229
    • /
    • 2016
  • Purpose: The aim of this study was to evaluate the soft-tissue change after maxillary protraction therapy using three-dimensional (3D) facial images. Materials and Methods: This study used pretreatment (T1) and posttreatment (T2) 3D facial images from thirteen Class III malocclusion patients (6 boys and 7 girls; mean age 8.9 ± 2.2 years) who received maxillary protraction therapy. The facial images were taken with an optical scanner (Rexcan III 3D scanner), and the T1 and T2 images were superimposed using the forehead area as a reference. The soft-tissue changes after treatment (T2-T1) were calculated three-dimensionally using 15 soft-tissue landmarks and 3 reference planes. Results: Anterior movements of the soft tissue were observed at the pronasale, subnasale, nasal ala, soft-tissue zygoma, and upper lip; posterior movements were observed at the lower lip, soft-tissue B-point, and soft-tissue gnathion. Vertically, most soft-tissue landmarks moved downward at T2. In the transverse direction, bilateral landmarks (exocanthion, zygomatic point, nasal ala, and cheilion) moved more laterally at T2. Conclusion: The facial soft tissue of Class III malocclusion patients changed three-dimensionally after maxillary protraction therapy. In particular, the facial profile was improved by forward movement of the midface and downward and backward movement of the lower face.
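Once the two scans share a reference frame, the T2-T1 evaluation reduces to per-landmark coordinate differences; the axis assignments and sample landmark below are illustrative assumptions, not the study's conventions or data:

```python
import numpy as np

# Sketch: after superimposing T1 and T2 scans on the forehead, the change
# at each landmark is the coordinate-wise difference, read per axis
# (assumed here: x transverse, y anteroposterior, z vertical).
def soft_tissue_change(t1, t2):
    """t1, t2: (n_landmarks, 3) arrays in the same reference frame (mm)."""
    return np.asarray(t2, float) - np.asarray(t1, float)

t1 = np.array([[0.0, 80.0, 40.0]])   # illustrative upper-lip landmark at T1
t2 = np.array([[0.2, 82.5, 38.9]])   # same landmark at T2
delta = soft_tissue_change(t1, t2)   # positive y = forward movement
```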
