• Title/Summary/Keyword: Facial capture

Hierarchical Visualization of the Space of Facial Expressions (얼굴 표정공간의 계층적 가시화)

  • Kim Sung-Ho;Jung Moon-Ryul
    • Journal of KIISE: Computer Systems and Theory / v.31 no.12 / pp.726-734 / 2004
  • This paper presents a facial animation method that lets the user select a sequence of facial frames from a facial expression space whose level of detail can be chosen hierarchically. Our system creates the facial expression space from about 2400 captured facial frames. The state of each expression is represented by a distance matrix that stores the distances between pairs of feature points on the face, and the shortest trajectories between expressions are found by dynamic programming. Because the space of facial expressions is multidimensional, we visualize it in 2D using multidimensional scaling (MDS). There are, however, too many facial expressions to select from, which makes the space hard to navigate, so we visualize it hierarchically. To partition the space into a hierarchy of subspaces, we use fuzzy clustering: the system initially creates about 10 clusters from the space of 2400 facial expressions, and each time the level increases it doubles the number of clusters. The cluster centers are displayed on the 2D screen and serve as candidate key frames for key-frame animation. The user selects new key frames along the navigation path of the previous level and completes the key-frame specification at the maximum level. We let animators use the system to create example animations and evaluate the system based on the results.
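
A minimal sketch of how such an expression space and its coarse-to-fine cluster hierarchy could be assembled, assuming NumPy/SciPy/scikit-learn and synthetic data; the frame count, feature-point count, and the use of k-means in place of the paper's fuzzy clustering are illustrative stand-ins, not the authors' implementation:

```python
# Sketch: build an "expression space" from captured frames, embed it in 2D with
# MDS, and form a coarse-to-fine hierarchy of cluster centers (candidate key frames).
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_frames, n_points = 200, 15                  # the paper uses ~2400 captured frames
frames = rng.random((n_frames, n_points, 3))  # synthetic 3D facial feature points

# State of each expression: distances between all pairs of feature points.
def distance_matrix(pts):
    return np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

states = np.array([distance_matrix(f).ravel() for f in frames])

# Dissimilarity between expressions, then a 2D layout via MDS for navigation.
dissim = cdist(states, states)
coords2d = MDS(n_components=2, dissimilarity="precomputed",
               random_state=0).fit_transform(dissim)

# Hierarchy of subspaces: ~10 clusters at the first level, doubling at each level
# (k-means stands in here for the paper's fuzzy clustering).
for level, k in enumerate([10, 20, 40]):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(states)
    print(f"level {level}: {k} cluster centers shown on the 2D map as key-frame candidates")
```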

Facial Expression Animation which Applies a Motion Data in the Vector based Caricature (벡터 기반 캐리커처에 모션 데이터를 적용한 얼굴 표정 애니메이션)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / v.10 no.5 / pp.90-98 / 2010
  • This paper describes a method that enables a user to generate a facial expression animation of a vector-based caricature by applying facial motion data to it. The method is implemented as an Illustrator plug-in with its own user interface. For the experimental data, 28 small markers were attached to the important muscular parts of an actor's face, and a variety of expressions were captured with Facial Tracker. The caricature is drawn as Bezier curves whose control points correspond to the positions of the markers on the actor's face, so that each region of the caricature can be connected to the matching motion data. Because the facial motion data and the caricature differ in spatial scale, the data passes through a motion calibration step whose scale the user can adjust at any time. To connect the caricature and the markers, the user selects the name of each face region from a menu and then clicks the corresponding region of the caricature. In this way, the Illustrator user interface is used to generate caricature facial expression animation that applies facial motion data to a vector-based caricature, as sketched below.
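
A rough sketch of the marker-to-control-point mapping and scale calibration described above, assuming NumPy and synthetic data; the paper's actual tool is an Illustrator plug-in, and the marker layout, scale factor, and function names here are hypothetical:

```python
# Sketch: drive caricature Bezier control points from captured marker motion,
# with a user-adjustable scale calibration between actor space and drawing space.
import numpy as np

rng = np.random.default_rng(1)
n_markers, n_frames = 28, 100                          # 28 markers, as in the paper
rest_markers = rng.random((n_markers, 2))              # neutral-face marker layout
motion = rest_markers + 0.01 * rng.standard_normal((n_frames, n_markers, 2))

# Bezier control points of the caricature; the user binds each one to a marker
# by picking a face-region name from a menu and clicking the matching region.
rest_controls = rng.random((n_markers, 2))

scale = 0.6   # motion-calibration factor, adjustable by the user at any time

def caricature_controls(frame_idx):
    """Control-point positions for one frame of the caricature animation."""
    displacement = motion[frame_idx] - rest_markers
    return rest_controls + scale * displacement

print(caricature_controls(50).shape)   # (28, 2)
```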

Creating and Utilization of Virtual Human via Facial Capturing based on Photogrammetry (포토그래메트리 기반 페이셜 캡처를 통한 버추얼 휴먼 제작 및 활용)

  • Ji Yun;Haitao Jiang;Zhou Jiani;Sunghoon Cho;Tae Soo Yun
    • Journal of the Institute of Convergence Signal Processing / v.25 no.2 / pp.113-118 / 2024
  • Recently, advancements in artificial intelligence and computer graphics technology have led to the emergence of various virtual humans across multiple media such as movies, advertisements, broadcasts, games, and social networking services (SNS). In particular, in the advertising marketing sector centered around virtual influencers, virtual humans have already proven to be an important promotional tool for businesses in terms of time and cost efficiency. In Korea, the virtual influencer market is in its nascent stage, and both large corporations and startups are preparing to launch new services related to virtual influencers without clear boundaries. However, because the development process is rarely disclosed publicly, these businesses often have to incur significant expenses. To address these requirements and challenges, this paper implements a photogrammetry-based facial capture system for creating realistic virtual humans and explores the use of these models and their application cases. The paper also examines an optimal workflow in terms of cost and quality through MetaHuman modeling based on Unreal Engine, which simplifies the complex CG work steps from facial capture to the actual animation process. Additionally, the paper introduces cases where virtual humans have been utilized in SNS marketing, such as on Instagram, and demonstrates the performance of the proposed Unreal Engine-based workflow by comparing it with traditional CG work.

Where to spot: individual identification of leopard cats (Prionailurus bengalensis euptilurus) in South Korea

  • Park, Heebok;Lim, Anya;Choi, Tae-Young;Baek, Seung-Yoon;Song, Eui-Geun;Park, Yung Chul
    • Journal of Ecology and Environment / v.43 no.4 / pp.385-389 / 2019
  • Knowledge of abundance, or population size, is fundamental to wildlife conservation and management. Camera-trapping, in combination with capture-recapture methods, has been extensively applied to estimate the abundance and density of individually identifiable animals: it is non-invasive, effective for surveying wide-ranging, elusive, or nocturnal species, works in inhospitable environments, and requires little labor. We assessed the possibility of using coat patterns from images to identify individual leopard cats (Prionailurus bengalensis), a Class II endangered species in South Korea. We analyzed leopard cat images taken with a digital single-lens reflex camera (high resolution, 18 Mpx) and with camera traps (low resolution, 3.1 Mpx) using HotSpotter, an image matching algorithm. HotSpotter top-ranked an image of the same individual against the reference leopard cat image 100% of the time by matching facial and ventral parts. This confirms that the facial and ventral fur patterns of the Amur leopard cat are good matching points that can be used reliably to identify an individual. We anticipate that these results will be useful to researchers studying behavior or estimating population parameters of Amur leopard cats with capture-recapture models.
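
For illustration only, a generic coat-pattern ranking sketch built on OpenCV ORB features; the study itself uses the HotSpotter matching algorithm rather than this pipeline, and the file names below are placeholders:

```python
# Sketch: rank candidate camera-trap images against a reference image by the
# number of good local-feature matches (ORB + ratio test).
import cv2

orb = cv2.ORB_create(nfeatures=2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = orb.detectAndCompute(img, None)
    return desc

reference = descriptors("reference_leopard_cat.jpg")
candidates = ["trap_001.jpg", "trap_002.jpg", "trap_003.jpg"]

scores = []
for path in candidates:
    pairs = matcher.knnMatch(descriptors(path), reference, k=2)
    good = [p for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    scores.append((len(good), path))

# The top-ranked candidate is the most likely image of the same individual.
print(sorted(scores, reverse=True))
```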

Object Segmentation for Image Transmission Services and Facial Characteristic Detection based on Knowledge (화상전송 서비스를 위한 객체 분할 및 지식 기반 얼굴 특징 검출)

  • Lim, Chun-Hwan;Yang, Hong-Young
    • Journal of the Korean Institute of Telematics and Electronics T / v.36T no.3 / pp.26-31 / 1999
  • In this paper, we propose a knowledge-based facial characteristic detection algorithm and an object segmentation method for image communication. Under constant illumination and at a fixed distance from a video camera to the human face, we capture 256 × 256 input images with 256 gray levels and remove noise using a Gaussian filter. Two images are captured with the video camera: one contains the human face, and the other contains only the background without a face. We then compute the differential image between the two. After removing noise from the differential image by erosion and dilation, the facial region is separated from the background. Eyes, ears, nose, and mouth are then located by searching for edge components in the facial image. Simulation results verify the efficiency of the proposed algorithm.
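
A minimal sketch of the background-difference segmentation step, assuming OpenCV; the kernel sizes, threshold value, and file names are illustrative choices, not taken from the paper:

```python
# Sketch: background-difference segmentation of the facial region.
import cv2
import numpy as np

face_img = cv2.imread("with_face.png", cv2.IMREAD_GRAYSCALE)     # scene with the person
background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)  # same scene, no person

# Remove noise with a Gaussian filter before differencing.
face_blur = cv2.GaussianBlur(face_img, (5, 5), 0)
back_blur = cv2.GaussianBlur(background, (5, 5), 0)

# The differential image separates the person from the static background.
diff = cv2.absdiff(face_blur, back_blur)
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

# Erosion followed by dilation removes speckle noise from the mask.
kernel = np.ones((3, 3), np.uint8)
mask = cv2.dilate(cv2.erode(mask, kernel, iterations=2), kernel, iterations=2)

face_region = cv2.bitwise_and(face_img, face_img, mask=mask)
# Eyes, ears, nose, and mouth would then be located from edge components inside face_region.
```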

Sign2Gloss2Text-based Sign Language Translation with Enhanced Spatial-temporal Information Centered on Sign Language Movement Keypoints (수어 동작 키포인트 중심의 시공간적 정보를 강화한 Sign2Gloss2Text 기반의 수어 번역)

  • Kim, Minchae;Kim, Jungeun;Kim, Ha Young
    • Journal of Korea Multimedia Society / v.25 no.10 / pp.1535-1545 / 2022
  • Sign language can have a completely different meaning depending on the direction of the hand or a change of facial expression, even for the same gesture, so it is crucial to capture the spatial-temporal structure of each movement. However, sign language translation studies based on Sign2Gloss2Text convey only comprehensive spatial-temporal information about the entire sign language movement; detailed information about each movement (facial expressions, gestures, etc.) that is important for translation is not emphasized. In this paper, we therefore propose Spatial-temporal Keypoints Centered Sign2Gloss2Text Translation, named STKC-Sign2Gloss2Text, to supplement the sequential and semantic information of keypoints, which are the core of recognizing and translating sign language. STKC-Sign2Gloss2Text consists of two steps: Spatial Keypoints Embedding, which extracts 121 major keypoints from each image, and Temporal Keypoints Embedding, which emphasizes sequential information by applying a Bi-GRU to the extracted sign language keypoints. The proposed model outperformed the Sign2Gloss2Text baseline on all Bilingual Evaluation Understudy (BLEU) scores on the Development (DEV) and Testing (TEST) sets; in particular, it achieved a TEST BLEU-4 of 23.19, an improvement of 1.87, demonstrating the effectiveness of the proposed methodology.
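
A small PyTorch sketch of the two embedding steps as described; the keypoint count follows the paper, while the hidden sizes, projection layer, and class name are assumptions of this sketch:

```python
# Sketch: spatial embedding of 121 keypoints per frame followed by a Bi-GRU
# temporal embedding over the frame sequence.
import torch
import torch.nn as nn

class STKCEmbedding(nn.Module):
    def __init__(self, n_keypoints=121, coord_dim=2, hidden=256):
        super().__init__()
        # Spatial Keypoints Embedding: project each frame's keypoints to a vector.
        self.spatial = nn.Linear(n_keypoints * coord_dim, hidden)
        # Temporal Keypoints Embedding: Bi-GRU emphasizes sequential information.
        self.temporal = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)

    def forward(self, keypoints):                  # (batch, frames, 121, 2)
        b, t, k, c = keypoints.shape
        x = self.spatial(keypoints.reshape(b, t, k * c))
        out, _ = self.temporal(x)                  # (batch, frames, 2 * hidden)
        return out                                 # features for the Sign2Gloss2Text stage

features = STKCEmbedding()(torch.randn(4, 60, 121, 2))
print(features.shape)   # torch.Size([4, 60, 512])
```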

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association / v.9 no.7 / pp.159-170 / 2009
  • It is very important to extract expression data and capture face images from video for online 3D face animation. Recently, there has been much research on vision-based approaches that capture the expressions of an actor in a video and apply them to a 3D face model. In this paper, we propose an automatic data extraction system that extracts and tracks a face and its expression data from real-time video input. The system consists of three steps: face detection, facial feature extraction, and feature tracking. For face detection, we detect skin pixels using a YCbCr skin color model and verify the face area using a Haar-based classifier. We use brightness and color information to extract eye and lip data related to facial expression, extracting 10 feature points from the eye and lip areas in accordance with the FAPs defined in MPEG-4. We then track the displacement of the extracted features across consecutive frames using a color probability distribution model. Experiments showed that our system can track the expression data at about 8 fps.
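
A minimal sketch of the detection stage, assuming OpenCV; the skin thresholds below are common textbook values rather than the paper's, and OpenCV orders the chroma channels as YCrCb:

```python
# Sketch: skin-pixel detection in YCbCr followed by Haar-cascade face verification.
import cv2
import numpy as np

frame = cv2.imread("frame.png")

# 1) Skin-pixel candidates from the chroma channels.
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
skin = cv2.inRange(ycrcb, np.array([0, 133, 77]), np.array([255, 173, 127]))

# 2) Verify the face area with a Haar-based classifier.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    face_skin = skin[y:y + h, x:x + w]
    # Inside the verified face, eye/lip feature points (10 in the paper) would be
    # extracted from brightness and color, then tracked across frames with a
    # color probability distribution model (e.g., CamShift).
    print("face at", (x, y, w, h), "skin pixels:", int(face_skin.sum() // 255))
```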

Facial Expression Control of 3D Avatar using Motion Data (모션 데이터를 이용한 3차원 아바타 얼굴 표정 제어)

  • Kim Sung-Ho;Jung Moon-Ryul
    • The KIPS Transactions: Part A / v.11A no.5 / pp.383-390 / 2004
  • This paper proposes a method, and a system implementing it, that controls the facial expression of a 3D avatar by having the user select a sequence of facial expressions in a space of facial expressions. The expression space is created from about 2400 frames of motion-captured facial expression data. The state of each expression is represented by a distance matrix that stores the distances between pairs of feature points on the face, and the set of distance matrices is used as the space of expressions. This space, however, is not one in which one state can reach another along a straight trajectory, so we derive approximate trajectories between states from the captured set of expressions. First, two states are regarded as adjacent if the distance between their distance matrices is below a given threshold, and any two states are considered connected if there is a sequence of adjacent states between them. One state is assumed to reach another via the shortest trajectory between them, and the shortest trajectories are found by dynamic programming. Because the space of facial expressions, as a set of distance matrices, is multidimensional, we visualize it in 2D using multidimensional scaling (MDS) to help the user navigate, and the facial expression of the 3D avatar is controlled in real time as the user navigates the space. To see how effective the system is, we had users control the facial expressions of a 3D avatar with it; they judged the system very useful for controlling facial expressions in real time.
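
A minimal sketch of the adjacency-and-shortest-trajectory construction, assuming NumPy/SciPy and synthetic data; the threshold, frame count, and use of SciPy's shortest_path in place of the paper's dynamic-programming routine are illustrative choices:

```python
# Sketch: threshold-based adjacency between expression states and shortest
# trajectories through the resulting graph.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(2)
n_frames = 200                                   # the paper uses ~2400 captured frames
states = rng.random((n_frames, 190))             # flattened distance matrices per frame

# Two states are adjacent if their distance matrices are closer than a threshold.
d = cdist(states, states)
threshold = np.percentile(d, 5)
graph = np.where(d <= threshold, d, 0)           # weighted adjacency; 0 means no edge

# Shortest trajectory between any two expressions through chains of adjacent states.
dist, pred = shortest_path(graph, directed=False, return_predecessors=True)

def trajectory(src, dst):
    """Sequence of in-between expression states from src to dst (empty if unreachable)."""
    if pred[src, dst] < 0 and src != dst:
        return []
    path = [dst]
    while path[-1] != src:
        path.append(pred[src, path[-1]])
    return path[::-1]

print(trajectory(0, 42))
```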

Analysis of facial expressions using three-dimensional motion capture (3차원동작측정에 의한 얼굴 표정의 분석)

  • 박재희;이경태;김봉옥;조강희
    • Proceedings of the ESK Conference / 1996.10a / pp.59-65 / 1996
  • The human face is where emotion is expressed most clearly, so there has traditionally been much effort to study facial expressions in relation to emotion. Recently, it has become possible to study facial expressions by measuring changes in facial temperature, by measuring facial muscle movement with electromyography (EMG), and by image or motion analysis. In this study, changes in human facial expressions were measured with three-dimensional motion analysis equipment. Two experiments were designed: in the first, the subjects were asked to make smiling, surprised, angry, and neutral expressions, which were then measured; in the second, the subjects were shown a comedy film and a horror film and the changes in their expressions were measured. Five adult males participated. Because appropriate emotion-eliciting stimuli could not always be presented, the experiments and analyses originally planned for all six basic expressions (happiness, sadness, disgust, fear, anger, and surprise) could not be completed, and more elaborate experimental preparation covering the remaining expressions is required in future work. Studies of this kind can be used in emotional (Kansei) engineering, consumer response measurement, computer animation, and information display.

Fast Zooming and Focusing Technique for Implementing a Real-time Surveillance Camera System (실시간 감시 카메라를 구현하기 위한 고속 영상확대 및 초점조절 기법)

  • 한헌수;최정렬
    • Journal of the Korean Society for Precision Engineering / v.21 no.3 / pp.74-82 / 2004
  • This paper proposes a fast zooming and focusing technique for implementing a real-time surveillance camera system that can capture a face image in less than one second. The positions of the zooming and focusing lenses are determined by a two-step algorithm. In the first step, the zooming and focusing lenses are moved simultaneously to the positions calculated from the lens equations for the predetermined magnification. In the second step, the focusing lens is adjusted to the position at which the focus measure is maximal. The camera system implemented for the experiments showed that the proposed algorithm takes about 0.56 seconds on average to obtain a focused image.
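
A minimal sketch of the two-step idea, with hypothetical constants and a Laplacian-variance sharpness measure standing in for the paper's focus measure; no real camera interface is assumed:

```python
# Sketch: coarse lens positions from the thin-lens equation, then fine focus by
# maximizing a sharpness measure (variance of the Laplacian).
import cv2

def focus_measure(gray):
    # Sharper images yield a larger variance of the Laplacian response.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def coarse_image_distance(f_mm, subject_mm):
    # Step 1: the thin-lens relation 1/f = 1/u + 1/v gives the image-side distance v,
    # so the zoom and focus lenses can be driven to predicted positions at once.
    return 1.0 / (1.0 / f_mm - 1.0 / subject_mm)

def fine_focus(frames_by_position):
    # Step 2: around the predicted position, pick the focus-lens position whose
    # captured frame maximizes the focus measure.
    return max(frames_by_position, key=lambda pos: focus_measure(frames_by_position[pos]))

# Usage idea: frames_by_position maps candidate focus-lens positions to grayscale
# frames captured at those positions; fine_focus returns the sharpest position.
```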