• Title/Summary/Keyword: Facial Capture

Search results: 58

Realtime Facial Expression Control of 3D Avatar by Isomap of Motion Data (모션 데이터에 Isomap을 사용한 3차원 아바타의 실시간 표정 제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association
    • /
    • v.7 no.3
    • /
    • pp.9-16
    • /
    • 2007
  • This paper describes a methodology that projects high-dimensional facial motion data onto a 2-dimensional plane using the Isomap algorithm, together with a user interface technique that lets the user control facial expressions by selecting them while navigating this space in real time. The Isomap algorithm proceeds in three steps. First, the adjacent expressions of each expression datum are defined; adjacency is determined by the smallest distances, using the Pearson correlation coefficient as the distance measure. Second, the manifold distance between each pair of expressions is computed and the expression space is constructed; the space is built from the shortest path (manifold distance) between any two expressions, computed with the Floyd algorithm. Third, the multi-dimensional expression space is embedded using Multidimensional Scaling and projected onto a 2-dimensional plane. Through the user interface, users can control the facial expressions of a 3-dimensional avatar in real time while navigating the two-dimensional space.
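
The three steps named in this abstract (correlation-based adjacency, Floyd shortest paths, MDS projection) map onto standard numerical routines. Below is a minimal sketch of that pipeline, not the authors' code; the synthetic frame data, neighborhood size k, and marker count are assumptions.

```python
# Sketch of the Isomap-style pipeline described above:
# Pearson-correlation adjacency -> Floyd-Warshall geodesics -> 2D MDS.
import numpy as np
from scipy.sparse.csgraph import floyd_warshall
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 60))      # 200 frames x 60 marker coordinates (assumed)

# 1) Distance from Pearson correlation; the most correlated frames become adjacent.
corr = np.corrcoef(frames)               # frame-to-frame Pearson correlation
dist = 1.0 - corr                        # small distance = highly correlated expressions

k = 8                                    # assumed neighborhood size
graph = np.full_like(dist, np.inf)       # inf marks a non-edge for csgraph routines
for i in range(len(dist)):
    nbrs = np.argsort(dist[i])[1:k + 1]  # skip self at index 0
    graph[i, nbrs] = dist[i, nbrs]

# 2) Manifold (geodesic) distance via Floyd-Warshall, as in the paper.
#    (Assumes the k-NN graph is connected; increase k otherwise.)
manifold = floyd_warshall(graph, directed=False)

# 3) Embed the geodesic distances into a 2D plane with MDS for navigation.
embed = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
xy = embed.fit_transform(manifold)       # one 2D point per captured expression
```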

Multimodal Biometrics Recognition from Facial Video with Missing Modalities Using Deep Learning

  • Maity, Sayan;Abdel-Mottaleb, Mohamed;Asfour, Shihab S.
    • Journal of Information Processing Systems
    • /
    • v.16 no.1
    • /
    • pp.6-29
    • /
    • 2020
  • Biometric identification using multiple modalities has attracted the attention of many researchers, as it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Utilizing the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers that perform the multimodal recognition. Moreover, the proposed technique proves robust when some of the above modalities are missing during testing. The proposed system has three main components: detection, which consists of modality-specific detectors that automatically detect images of the different modalities present in facial video clips; feature selection, which uses a supervised denoising sparse auto-encoder network to capture discriminative representations that are robust to illumination and pose variations; and classification, which consists of a set of modality-specific sparse representation classifiers for unimodal recognition, followed by score-level fusion of the recognition results of the available modalities. Experiments conducted on a constrained facial video dataset (WVU) and an unconstrained facial video dataset (HONDA/UCSD) yielded Rank-1 recognition rates of 99.17% and 97.14%, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, non-planar movement, and pose variations present in the video clips, even when modalities are missing.
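
As a rough illustration of the score-level fusion step under missing modalities, the sketch below averages per-modality classifier scores over whichever modalities were detected. The score values, modality names, and the simple mean rule are hypothetical, not the paper's exact fusion scheme.

```python
# Sketch: score-level fusion over available modalities (hypothetical values).
# Each modality-specific classifier returns one score per enrolled identity;
# modalities missing from the probe video (None) are excluded from the fusion.
import numpy as np

scores = {                                   # scores for 3 enrolled identities (assumed)
    "frontal_face":  np.array([0.91, 0.40, 0.25]),
    "left_profile":  np.array([0.85, 0.55, 0.30]),
    "right_profile": None,                   # modality missing in this probe video
    "left_ear":      np.array([0.70, 0.45, 0.20]),
    "right_ear":     None,
}

available = [s for s in scores.values() if s is not None]
fused = np.mean(available, axis=0)           # mean fusion over detected modalities only
print("fused scores:", fused, "-> identity", int(np.argmax(fused)))
```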

Can a spontaneous smile invalidate facial identification by photo-anthropometry?

  • Pinto, Paulo Henrique Viana;Rodrigues, Caio Henrique Pinke;Rozatto, Juliana Rodrigues;da Silva, Ana Maria Bettoni Rodrigues;Bruni, Aline Thais;da Silva, Marco Antonio Moreira Rodrigues;da Silva, Ricardo Henrique Alves
    • Imaging Science in Dentistry
    • /
    • v.51 no.3
    • /
    • pp.279-290
    • /
    • 2021
  • Purpose: Using images in the facial image comparison process poses a challenge for forensic experts due to limitations such as the presence of facial expressions. The aims of this study were to analyze how morphometric changes in the face during a spontaneous smile influence the facial image comparison process and to evaluate the reproducibility of measurements obtained by digital stereophotogrammetry in these situations. Materials and Methods: Three examiners used digital stereophotogrammetry to obtain 3-dimensional images of the faces of 10 female participants (aged between 23 and 45 years). The participants' faces were captured at rest (group 1) and with a spontaneous smile (group 2), resulting in a total of 60 3-dimensional images. The digital stereophotogrammetry device obtained the images with a 3.5-ms capture time, which prevented undesirable movements of the participants. Linear measurements between facial landmarks were made, in millimeters, and the data were subjected to multivariate and univariate statistical analyses using Pirouette® version 4.5 (InfoMetrix Inc., Woodinville, WA, USA) and Microsoft Excel® (Microsoft Corp., Redmond, WA, USA), respectively. Results: The measurements that most strongly influenced the separation of the groups were related to the labial/buccal region. In general, the data showed low standard deviations, which differed by less than 10% from the measured mean values, demonstrating that the digital stereophotogrammetry technique was reproducible. Conclusion: The impact of spontaneous smiles on the facial image comparison process should be considered, and digital stereophotogrammetry provided good reproducibility.
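
The reproducibility criterion described here (standard deviations within 10% of the means) amounts to a coefficient of variation below 0.10. A small sketch under assumed measurement values:

```python
# Sketch: repeated inter-landmark distance measurements (mm) and the relative
# standard deviation used as a reproducibility criterion. Values are made up.
import numpy as np

# Three repeated measurements of each distance (e.g., between lip landmarks), in mm.
measurements = np.array([
    [52.1, 51.8, 52.4],   # distance A, examiners 1-3
    [33.0, 32.6, 33.3],   # distance B, examiners 1-3
])

means = measurements.mean(axis=1)
stds = measurements.std(axis=1, ddof=1)     # sample standard deviation
cv = stds / means                           # coefficient of variation

for m, s, c in zip(means, stds, cv):
    verdict = "reproducible" if c < 0.10 else "not reproducible"
    print(f"mean={m:.1f} mm  sd={s:.2f} mm  cv={c:.1%} -> {verdict}")
```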

Facial Expression Control of 3D Avatar by Hierarchical Visualization of Motion Data (모션 데이터의 계층적 가시화에 의한 3차원 아바타의 표정 제어)

  • Kim, Sung-Ho;Jung, Moon-Ryul
    • The KIPS Transactions:PartA
    • /
    • v.11A no.4
    • /
    • pp.277-284
    • /
    • 2004
  • This paper presents a facial expression control method for a 3D avatar that enables the user to select a sequence of facial frames from the facial expression space, whose level of detail the user can select hierarchically. Our system creates the facial expression space from about 2,400 captured facial frames. Because there are too many facial expressions to select from, the user has difficulty navigating the space, so we visualize the space hierarchically. To partition the space into a hierarchy of subspaces, we use fuzzy clustering. In the beginning, the system creates about 11 clusters from the space of 2,400 facial expressions. The cluster centers are displayed on a 2D screen and used as candidate key frames for key frame animation. When the user zooms in (zoom is discrete), it means the user wants to see more detail, so the system creates more clusters for the new zoom level. Every time the zoom level increases, the system doubles the number of clusters. The user selects new key frames along the navigation path of the previous level. At the maximum zoom level, the user completes the facial expression control specification. The user can also go back to the previous level by zooming out and update the navigation path. We let users control the facial expressions of a 3D avatar with the system, and evaluate the system based on the results.
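
The zoom behavior described above (fuzzy clustering, with the cluster count doubling at each zoom-in level) can be sketched with a minimal fuzzy c-means. The synthetic data, fuzzifier m, and iteration count are assumptions, not the authors' settings.

```python
# Minimal fuzzy c-means sketch for the hierarchical zoom described above:
# each zoom-in level doubles the number of clusters over the expression frames.
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Return cluster centers (c x d) and membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)             # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))            # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(1)
frames2d = rng.normal(size=(2400, 2))             # 2,400 frames already projected to 2D (assumed)

for level, c in enumerate([11, 22, 44]):          # cluster count doubles per zoom level
    centers, U = fuzzy_cmeans(frames2d, c)
    print(f"zoom level {level}: {c} candidate key frames (cluster centers)")
```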

The Multi-marker Tracking for Facial Animation (Facial Animation을 위한 다중 마커의 추적)

  • 이문희;김철기;김경석
    • Proceedings of the Korea Multimedia Society Conference
    • /
    • 2001.06a
    • /
    • pp.553-557
    • /
    • 2001
  • Animating facial expressions is recognized as one of the most difficult areas of computer animation because of the complexity of facial structure and the subtle movements of the facial surface. Recently, motion capture systems have been used in 3D animation, film special effects, and game production to numerically measure actual human motion and facial expressions and apply them directly to animation, dramatically reducing work time, labor, and cost. However, existing motion capture systems are expensive because they use high-speed cameras, and they also have several problems in movement tracking. This paper proposes an economical and efficient facial movement tracking technique, applicable to motion capture systems for facial animation, that uses ordinary low-cost cameras, neural networks, and image processing techniques.
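
As a hedged illustration of the image-processing side of low-cost marker tracking (the paper's neural-network matching stage is omitted), bright markers can be located per frame by thresholding and connected-component analysis. The file name, threshold, and marker size range are placeholders.

```python
# Sketch: locate bright facial markers in one frame via thresholding and
# connected components. "frame.png" and all constants are placeholders.
import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)    # one captured video frame
_, binary = cv2.threshold(frame, 200, 255, cv2.THRESH_BINARY)

n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
markers = [
    tuple(centroids[i])                                   # (x, y) marker center
    for i in range(1, n)                                  # label 0 is the background
    if 3 <= stats[i, cv2.CC_STAT_AREA] <= 200             # assumed marker size range
]
print(f"found {len(markers)} candidate markers")
```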


Face-to-face Communication in Cyberspace using Analysis and Synthesis of Facial Expression

  • Shigeo Morishima
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1999.06a
    • /
    • pp.111-118
    • /
    • 1999
  • Recently, interactive virtual reality techniques have made it possible to walk through cyberspace on a computer. An avatar in cyberspace can provide a virtual face-to-face communication environment. In this paper, an avatar with a real face is realized in cyberspace, and a multiuser communication system is constructed using voice transmitted through the network. Voice from a microphone is transmitted and analyzed, and the avatar's mouth shape and facial expression are estimated and synthesized synchronously in real time. An entertainment application of the real-time voice-driven synthetic face, an example of an interactive movie, is also introduced. Finally, a face motion capture system using a physics-based face model is introduced.
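
A very rough sketch of the voice-driven idea: short-time energy of the microphone signal can drive a mouth-openness parameter in real time. The paper estimates full mouth shapes from voice analysis; this energy-only simplification and all constants below are assumptions.

```python
# Sketch: drive a mouth-openness parameter from short-time audio energy.
# A crude stand-in for the paper's voice-to-mouth-shape estimation.
import numpy as np

def mouth_openness(samples, frame_len=480, gain=4.0):
    """Map RMS energy of each 10 ms frame (48 kHz assumed) to [0, 1]."""
    n = len(samples) // frame_len
    frames = samples[: n * frame_len].reshape(n, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return np.clip(gain * rms, 0.0, 1.0)          # one openness value per frame

# Synthetic "voice": a 150 Hz tone with a 3 Hz amplitude envelope.
t = np.linspace(0, 1, 48000)
voice = 0.2 * np.sin(2 * np.pi * 150 * t) * (np.sin(2 * np.pi * 3 * t) ** 2)
print(mouth_openness(voice)[:10])                 # first ten openness values
```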

Hierarchical Visualization of the Space of Facial Expressions (얼굴 표정공간의 계층적 가시화)

  • Kim Sung-Ho;Jung Moon-Ryul
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.31 no.12
    • /
    • pp.726-734
    • /
    • 2004
  • This paper presents a facial animation method that enables the user to select a sequence of facial frames from the facial expression space, whose level of detail the user can select hierarchically. Our system creates the facial expression space from about 2,400 captured facial frames. To represent the state of each expression, we use a distance matrix that holds the distances between pairs of feature points on the face. The shortest trajectories are found by dynamic programming. The space of facial expressions is multidimensional; to navigate it, we visualize the space of expressions in 2D using multidimensional scaling (MDS). But because there are too many facial expressions to select from, the user has difficulty navigating the space, so we visualize the space hierarchically. To partition the space into a hierarchy of subspaces, we use fuzzy clustering. In the beginning, the system creates about 10 clusters from the space of 2,400 facial expressions. Every time the level increases, the system doubles the number of clusters. The cluster centers are displayed on a 2D screen and used as candidate key frames for key frame animation. The user selects new key frames along the navigation path of the previous level. At the maximum level, the user completes the key frame specification. We let animators use the system to create example animations, and evaluate the system based on the results.
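
The expression state described here, a matrix of pairwise distances between facial feature points embedded into 2D with MDS, can be sketched as follows. The feature-point coordinates are simulated; the real system uses captured frames.

```python
# Sketch: represent each expression by its pairwise feature-point distances,
# then embed the expressions in 2D with MDS, as the abstract describes.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
n_frames, n_points = 300, 20
frames = rng.normal(size=(n_frames, n_points, 2))    # assumed 2D feature points

# Each expression state = the vector of pairwise distances between feature points.
states = np.stack([pdist(f) for f in frames])        # shape (300, 190)

# Dissimilarity between two expressions = distance between their state vectors.
expr_dist = squareform(pdist(states))                # (300, 300) symmetric matrix

xy = MDS(n_components=2, dissimilarity="precomputed",
         random_state=0).fit_transform(expr_dist)    # 2D map for navigation
```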

Facial Expression Animation which Applies a Motion Data in the Vector based Caricature (벡터 기반 캐리커처에 모션 데이터를 적용한 얼굴 표정 애니메이션)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.5
    • /
    • pp.90-98
    • /
    • 2010
  • This paper describes a methodology that enables the user to generate facial expression animation for a vector-based caricature by applying facial motion data to it. The method is implemented as an Illustrator plug-in and provides its own user interface. For the experimental data, 28 small markers were attached to the major muscle regions of an actor's face, and a variety of expressions were captured with a Facial Tracker. The caricature was drawn as Bezier curves, each with control points corresponding to the locations of the important markers attached to the actor's face during motion capture, so that identical regions could be connected to the motion data. Because the facial motion data and the caricature differ in spatial scale, a motion calibration process was applied, and the user can adjust the scale at any time. To connect the caricature to the markers, the user selects the name of each facial region from a menu and then clicks the corresponding region of the caricature. In this way, the Illustrator-based user interface makes it possible to generate caricature facial expression animation by applying facial motion data to a vector-based caricature.
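
The calibration step, rescaling marker motion to the caricature's coordinate space before moving the matching Bezier control points, might look like the sketch below. The region name, scale factor, coordinates, and the uniform-scale assumption are ours, not the plug-in's API.

```python
# Sketch: apply a captured marker offset to the matching Bezier control point,
# with a uniform scale calibrating motion-capture space to caricature space.
# The region names, scale factor, and coordinates are illustrative assumptions.
import numpy as np

scale = 0.35                                   # caricature size / face size (assumed)
control_points = {"mouth_corner_l": np.array([120.0, 88.0])}   # caricature coords
rest_markers   = {"mouth_corner_l": np.array([310.0, 205.0])}  # capture coords

def apply_motion(region, marker_now):
    """Move a control point by the marker's scaled offset from its rest pose."""
    offset = (np.asarray(marker_now) - rest_markers[region]) * scale
    return control_points[region] + offset

print(apply_motion("mouth_corner_l", [318.0, 198.0]))  # marker moved up and right
```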

Where to spot: individual identification of leopard cats (Prionailurus bengalensis euptilurus) in South Korea

  • Park, Heebok;Lim, Anya;Choi, Tae-Young;Baek, Seung-Yoon;Song, Eui-Geun;Park, Yung Chul
    • Journal of Ecology and Environment
    • /
    • v.43 no.4
    • /
    • pp.385-389
    • /
    • 2019
  • Knowledge of abundance, or population size, is fundamental in wildlife conservation and management. Camera trapping, in combination with capture-recapture methods, has been extensively applied to estimate the abundance and density of individually identifiable animals, owing to its advantages of being non-invasive, effective for surveying wide-ranging, elusive, or nocturnal species, operable in inhospitable environments, and requiring little labor. We assessed the possibility of using coat patterns from images to identify individual leopard cats (Prionailurus bengalensis), a Class II endangered species in South Korea. We analyzed leopard cat images taken with a digital single-lens reflex camera (high resolution, 18 megapixels) and camera traps (low resolution, 3.1 megapixels) using HotSpotter, an image-matching algorithm. HotSpotter top-ranked an image of the same individual against the reference leopard cat image 100% of the time by matching facial and ventral parts. This confirms that the facial and ventral fur patterns of the Amur leopard cat are good matching points that can be used reliably to identify an individual. We anticipate that these results will be useful to researchers interested in studying the behavior or estimating the population parameters of Amur leopard cats based on capture-recapture models.
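
HotSpotter's matching is keypoint-based. A generic stand-in using OpenCV's ORB features and brute-force matching, explicitly not HotSpotter's own algorithm, is sketched below; the file names and the match-distance threshold are placeholders.

```python
# Sketch: rank a reference image against a query by counting strong local-feature
# matches, a generic stand-in for HotSpotter (not its actual algorithm).
import cv2

orb = cv2.ORB_create(nfeatures=1000)
img1 = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)       # unknown leopard cat
img2 = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)   # known individual

kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
score = len([m for m in matches if m.distance < 40])       # assumed threshold
print(f"{score} strong matches; higher = more likely the same individual")
```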

Object Segmentation for Image Transmission Services and Facial Characteristic Detection based on Knowledge (화상전송 서비스를 위한 객체 분할 및 지식 기반 얼굴 특징 검출)

  • Lim, Chun-Hwan;Yang, Hong-Young
    • Journal of the Korean Institute of Telematics and Electronics T
    • /
    • v.36T no.3
    • /
    • pp.26-31
    • /
    • 1999
  • In this paper, we propose a knowledge-based facial characteristic detection algorithm and an object segmentation method for image communication. Under conditions of identical illumination and a fixed distance from the video camera to the human face, we capture 256 × 256 input images with 256 gray levels and remove noise using a Gaussian filter. Two images are captured with the video camera: one contains the human face, and the other contains only the background region without a face. We then compute the differential image between the two. After removing noise from the differential image by erosion and dilation, the background is separated from the facial image. We then locate the eyes, ears, nose, and mouth by searching for edge components in the facial image. Simulation results verify the efficiency of the proposed algorithm.
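
The steps named in this abstract, Gaussian smoothing, differencing the face image against the background-only image, then erosion and dilation, map directly onto standard OpenCV calls. In the sketch below, the file names, kernel sizes, and threshold are assumptions.

```python
# Sketch of the abstract's pipeline: Gaussian denoising, differencing the face
# image against the background-only image, then erosion and dilation.
# File names, kernel sizes, and the threshold are illustrative assumptions.
import cv2
import numpy as np

face = cv2.imread("with_face.png", cv2.IMREAD_GRAYSCALE)     # person in frame
background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)

face = cv2.GaussianBlur(face, (5, 5), 0)                     # remove sensor noise
background = cv2.GaussianBlur(background, (5, 5), 0)

diff = cv2.absdiff(face, background)                         # differential image
_, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

kernel = np.ones((3, 3), np.uint8)
mask = cv2.erode(mask, kernel, iterations=2)                 # drop speckle noise
mask = cv2.dilate(mask, kernel, iterations=2)                # restore face region

segmented = cv2.bitwise_and(face, face, mask=mask)           # facial region only
```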
