• Title/Summary/Keyword: 3D-face tracking


Clinical Convergence Study on Attention Processing of Individuals with Social Anxiety Tendency : Focusing on Positive Stimulation in Emotional Context (사회불안성향자의 주의 과정에 관한 임상 융합 연구 : 정서맥락에서 긍정 자극을 중심으로)

  • Park, Ji-Yoon;Yoon, Hyae-Young
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.3
    • /
    • pp.79-90
    • /
    • 2018
  • The purpose of this study was to investigate how individuals with a social anxiety tendency differ from normal controls in the attentional processing of positive facial stimuli, depending on whether an emotional context is present. To do this, we examined attentional processing of positive face stimuli in conditions with and without emotional context. The SADS and CES-D were administered to 800 undergraduate students in D city, and a social anxiety group (SA, n=24) and a normal control group (NC, n=24) were selected. To measure the two components of the attention process (attentional engagement and attentional disengagement), first gaze direction and first gaze duration were measured through eye-movement tracking. The results show that the SA group exhibited faster attentional disengagement from positive face stimuli than the NC group in the condition without context. However, when a positive context was presented together with the positive face stimuli, there was no difference between the SA and NC groups. This result suggests that a positive background affects the emotional processing of social anxiety disorder.

3D Visualization using Face Position and Direction Tracking (얼굴 위치와 방향 추적을 이용한 3차원 시각화)

  • Kim, Min-Ha;Kim, Ji-Hyun;Kim, Cheol-Ki;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2011.10a
    • /
    • pp.173-175
    • /
    • 2011
  • In this paper, we present a user interface that can show 3D objects from various angles using the tracked 3D head position and orientation. In the implemented interface, first, when the user's head moves left/right (X-axis) or up/down (Y-axis), the displayed objects are moved toward the user's eyes using the 3D head position. Second, when the user's head rotates about the X-axis (pitch) or the Y-axis (yaw), the displayed objects are rotated by the same amount as the user's head. Experimental results from a variety of user positions and orientations show good accuracy and responsiveness for 3D visualization.
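As a rough illustration of the mapping this abstract describes (not the authors' implementation), the tracked head pose can be mirrored onto the displayed object: translation follows the head's X/Y offset and rotation reuses the head's pitch/yaw. The function name, the 1:1 coupling, and the sign convention are assumptions for the sketch:

```python
import math

def view_transform(head_x, head_y, pitch_deg, yaw_deg, gain=1.0):
    """Map a tracked head pose to an object transform (illustrative sketch).

    The object is shifted opposite to the head offset so it stays oriented
    toward the viewer, and rotated by the same pitch/yaw as the head.
    `gain` scales how strongly head motion moves the object (assumed knob).
    """
    tx = -gain * head_x  # head moves right -> object shifts left on screen
    ty = -gain * head_y
    return {"translate": (tx, ty),
            "rotate": (math.radians(pitch_deg), math.radians(yaw_deg))}
```

A renderer would apply this transform per frame as new head-pose samples arrive.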


New Digital Esthetic Rehabilitation Technique with Three-dimensional Augmented Reality: A Case Report

  • Hang-Nga, Mai;Du-Hyeong, Lee
    • Journal of Korean Dental Science
    • /
    • v.15 no.2
    • /
    • pp.166-171
    • /
    • 2022
  • This case report describes a dynamic digital esthetic rehabilitation procedure that integrates a new three-dimensional augmented reality (3D-AR) technique to treat a patient with multiple missing anterior teeth. The prostheses were designed using computer-aided design (CAD) software and virtually trialed using static and dynamic visualization methods. In the static method, the prostheses were visualized by integrating the CAD model with a 3D face scan of the patient. For the dynamic method, the 3D-AR application was used for real-time tracking and projection of the CAD prostheses in the patient's mouth. Results of a quick survey on patient satisfaction with the two visualization methods showed that the patient felt more satisfied with the dynamic visualization method because it allowed him to observe the prostheses directly on his face and be more proactive in the treatment process.

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.7
    • /
    • pp.159-170
    • /
    • 2009
  • Extracting expression data and capturing a face image from video is very important for online 3D face animation. Recently, there has been much research on vision-based approaches that capture an actor's expression in video and apply it to a 3D face model. In this paper, we propose an automatic data-extraction system that extracts and tracks a face and expression data from real-time video input. Our system consists of three steps: face detection, facial feature extraction, and face tracking. In face detection, we detect skin pixels using a YCbCr skin color model and verify the face area using a Haar-based classifier. We use brightness and color information to extract the eye and lip data related to facial expression, extracting 10 feature points from the eye and lip areas following the FAPs defined in MPEG-4. Then, we track the displacement of the extracted features across consecutive frames using a color probabilistic distribution model. Experiments showed that our system could track expression data at about 8 fps.
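The first step this abstract names, classifying skin pixels in YCbCr space, can be sketched as follows. The RGB-to-YCbCr conversion is the standard BT.601 form; the Cb/Cr thresholds below are typical values from the skin-detection literature, not necessarily the ones the authors used:

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601 full-range RGB -> YCbCr conversion for 8-bit channels."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Skin test on chrominance only; the ranges are common defaults
    from the literature, assumed here for illustration."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]
```

A detector would run this per pixel to form a skin mask, then verify candidate regions with the Haar classifier the abstract mentions.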

Robust Viewpoint Estimation Algorithm for Moving Parallax Barrier Mobile 3D Display (이동형 패럴랙스 배리어 모바일 3D 디스플레이를 위한 강인한 시청자 시역 위치 추정 알고리즘)

  • Kim, Gi-Seok;Cho, Jae-Soo;Um, Gi-Mun
    • Journal of Broadcast Engineering
    • /
    • v.17 no.5
    • /
    • pp.817-826
    • /
    • 2012
  • This paper presents a viewpoint estimation algorithm for moving-parallax-barrier mobile 3D displays that is robust to sudden illumination changes. We analyze a previous viewpoint estimation algorithm that consists of the Viola-Jones face detector and optical-flow feature tracking. Sudden changes in illumination degrade the performance of the optical-flow feature tracker. To solve this problem, we define a novel performance measure for the optical-flow tracker. Overall performance is increased by selectively switching between the Viola-Jones detector and the optical-flow tracker depending on the performance measure. Various experimental results show the effectiveness of the proposed method.
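The selective-adoption idea can be sketched as a per-frame controller: while the optical-flow tracker's quality measure stays above a threshold, the cheap tracker is kept; when it drops (e.g. under a sudden illumination change), the Viola-Jones detector re-acquires the face. The quality score and threshold below are placeholders, not the paper's measure:

```python
def choose_tracker(flow_quality, threshold=0.5):
    """Pick the tracking mode for the next frame (illustrative sketch).

    `flow_quality` stands in for the paper's performance measure; a low
    value triggers a fallback to full detection.
    """
    return "viola_jones" if flow_quality < threshold else "optical_flow"

# Simulated per-frame quality scores: illumination drops at frame 2.
qualities = [0.9, 0.8, 0.2, 0.7]
modes = [choose_tracker(q) for q in qualities]
```

The point of the switch is cost: detection is run only on the frames where tracking has demonstrably failed.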

Design and Realization of Stereo Vision Module For 3D Facial Expression Tracking (3차원 얼굴 표정 추적을 위한 스테레오 시각 모듈 설계 및 구현)

  • Lee, Mun-Hee;Kim, Kyong-Sok
    • Journal of Broadcast Engineering
    • /
    • v.11 no.4 s.33
    • /
    • pp.533-540
    • /
    • 2006
  • In this study, we propose a facial motion capture technique that tracks facial motions and expressions effectively using a stereo vision module with two CMOS image sensors. The proposed tracking algorithm uses a center-point tracking technique and a correlation tracking technique based on neural networks. Experimental results show that the two tracking techniques using stereo vision motion capture can track general facial expressions at 95.6% and 99.6% success rates for 15 and 30 frames, respectively. However, when the lips trembled, the tracking success rates of the center-point tracking technique (82.7%, 99.1%) were far higher than those of the correlation tracking technique (78.7%, 92.7%).
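A minimal version of the correlation-tracking idea (matching a facial-feature template against candidate windows in the next frame) is normalized cross-correlation; the paper's neural-network component is omitted, and the 1D search strip is a simplification for the sketch:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equal-length patches."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def best_match(template, strip, width):
    """Slide the template over a 1D search strip; return the offset with
    the highest correlation score."""
    scores = [ncc(template, strip[i:i + width])
              for i in range(len(strip) - width + 1)]
    return max(range(len(scores)), key=scores.__getitem__)
```

In a 2D tracker the same score is maximized over a small search window around the feature's last known position.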

Target Object Image Extraction from 3D Space using Stereo Cameras

  • Yoo, Chae-Gon;Jung, Chang-Sung;Hwang, Chi-Jung
    • Proceedings of the IEEK Conference
    • /
    • 2002.07c
    • /
    • pp.1678-1680
    • /
    • 2002
  • The stereo matching technique is used in many practical fields, such as satellite image analysis and computer vision. In this paper, we suggest a method to extract a target object image from a complicated background; for example, a human face image can be extracted from a random background. This method can be applied to computer vision tasks such as security systems, dressing simulation using the extracted human face, and 3D modeling. Much research on stereo matching has been performed; conventional approaches can be categorized into area-based and feature-based methods. In this paper, we start from the area-based method and apply area tracking using a scanning window. Coarse depth information is used for the area-merging process with the area-search data. Finally, we produce the target object image.
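The area-based matching this abstract starts from can be illustrated with a sum-of-absolute-differences (SAD) scan over candidate disparities; the window size and toy image layout are assumptions for the sketch:

```python
def sad(left, right, x, y, d, w):
    """Sum of absolute differences between a (2w+1)x(2w+1) window in the
    left image and the same window shifted by disparity d in the right."""
    total = 0
    for dy in range(-w, w + 1):
        for dx in range(-w, w + 1):
            total += abs(left[y + dy][x + dx] - right[y + dy][x + dx - d])
    return total

def best_disparity(left, right, x, y, max_d, w=1):
    """Pick the disparity with the lowest SAD; the winning d gives coarse
    depth via Z = f * B / d for a rectified stereo rig."""
    return min(range(max_d + 1), key=lambda d: sad(left, right, x, y, d, w))

# Toy 5x7 images: a bright column at x=4 in the left view appears at
# x=3 in the right view, i.e. a true disparity of 1.
left = [[9 if x == 4 else 0 for x in range(7)] for _ in range(5)]
right = [[9 if x == 3 else 0 for x in range(7)] for _ in range(5)]
```

The coarse depth map produced this way is what drives the area-merging step the abstract describes.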


Realtime Facial Expression Representation Method For Virtual Online Meetings System

  • Zhu, Yinge;Yerkovich, Bruno Carvacho;Zhang, Xingjie;Park, Jong-il
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • fall
    • /
    • pp.212-214
    • /
    • 2021
  • In a society with COVID-19 as part of our daily lives, we have had to adapt to a new reality to keep our lifestyles as normal as possible; teleworking and online classes are examples. However, several issues have appeared as we started this new way of living. One of them is not knowing whether real people are in front of the camera, or whether someone is paying attention during a lecture. We address this issue by creating a 3D reconstruction tool that actively identifies human faces and expressions. We use a web camera and a lightweight 3D face model, and fit expression coefficients from 2D facial landmarks to drive the 3D model. With this model, it is possible to represent our faces with an avatar and fully control its bones with rotation and translation parameters. We propose the above method as a solution for reconstructing facial expressions during online meetings.
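Driving an avatar from fitted expression coefficients is, at its core, a linear blendshape evaluation: each vertex is the neutral shape plus a weighted sum of expression deltas. This sketch assumes that standard model form; the authors' lightweight face model may differ:

```python
def blend(neutral, deltas, weights):
    """Linear blendshape: vertex = neutral + sum_i w_i * delta_i.

    `neutral` is the flattened rest-pose vertex vector, `deltas` holds one
    flattened offset vector per expression basis, and `weights` are the
    coefficients fitted from the 2D landmarks (all names illustrative).
    """
    out = list(neutral)
    for w, d in zip(weights, deltas):
        for j in range(len(out)):
            out[j] += w * d[j]
    return out
```

Per frame, the landmark fitter updates `weights` and the avatar mesh is re-evaluated with `blend`.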


Implementation of Intelligent Moving Target Tracking and Surveillance System Using Pan/Tilt-embedded Stereo Camera System (팬/틸트 탑제형 스테레오 카메라를 이용한 지능형 이동표적 추적 및 감시 시스템의 구현)

  • Ko, Jung-Hwan;Lee, Jun-Ho;Kim, Eun-Soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.4C
    • /
    • pp.514-523
    • /
    • 2004
  • In this paper, a new intelligent moving-target tracking and surveillance system based on a pan/tilt-embedded stereo camera system is suggested and implemented. In the proposed system, the face area of a target is first detected from the input stereo image using a YCbCr color model; then, using this data together with the geometric information of the tracking system, the distance and 3D information of the target are effectively extracted in real time. Based on these extracted data, the pan/tilt-embedded stereo camera system is adaptively controlled, so the proposed system can track the target under various target conditions. In experiments using 80 frames of the test input stereo image, the standard deviation of the target's position displacement after tracking is kept very low, at 1.82 and 1.11 in the horizontal and vertical directions, and the error ratio between the measured and computed 3D coordinate values of the target is also kept very low, at 0.5% on average. These good experimental results suggest the possibility of implementing a new real-time intelligent stereo target tracking and surveillance system using the proposed scheme.
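The real-time distance/3D extraction from the stereo pair rests on standard triangulation for a rectified rig: depth Z = f·B/d from the disparity d, with X and Y by back-projection through the pinhole model. Parameter names are illustrative, and the paper's pan/tilt geometry correction is omitted here:

```python
def stereo_to_3d(u_left, v, disparity, f, baseline, cx, cy):
    """Triangulate a 3D point from a rectified stereo pair (pinhole model).

    f is the focal length in pixels, baseline the camera separation,
    (cx, cy) the principal point. A pan/tilt controller would steer
    toward the recovered (X, Y) to keep the face centered.
    """
    z = f * baseline / disparity          # depth from disparity
    x = (u_left - cx) * z / f             # back-project horizontal offset
    y = (v - cy) * z / f                  # back-project vertical offset
    return x, y, z
```

With the face region's centroid as (u_left, v) and its matched disparity, this yields the target's real-time 3D position for the servo loop.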

Stereo Camera-based Target Surveillance-Tracking System through an adaptive Pan/tilt Control (적응적인 스테레오 카메라 기반의 팬/틸트 제어를 통한 표적 감시-추적 시스템)

  • Cho, Do-Hyeoun;Ko, Jung-Hwan;Won, Young-Jin
    • Proceedings of the IEEK Conference
    • /
    • 2005.11a
    • /
    • pp.1269-1272
    • /
    • 2005
  • In this paper, a new intelligent moving-target tracking and surveillance system based on a pan/tilt-embedded stereo camera system is suggested and implemented. In the proposed system, the face area of a target is first detected from the input stereo image using a YCbCr color model; then, using this data together with the geometric information of the tracking system, the distance and 3D information of the target are effectively extracted in real time.
