• Title/Summary/Keyword: facial motion tracking

27 search results

Integrated Approach of Multiple Face Detection for Video Surveillance

  • Kim, Tae-Kyun;Lee, Sung-Uk;Lee, Jong-Ha;Kee, Seok-Cheol;Kim, Sang-Ryong
    • Proceedings of the IEEK Conference / 2003.07e / pp.1960-1963 / 2003
  • For applications such as video surveillance and human-computer interfaces, we propose an efficient, integrated method to detect and track faces. The algorithm combines several visual cues: motion, skin color, global appearance, and facial pattern detection. ICA (Independent Component Analysis)-SVM (Support Vector Machine)-based pattern detection is performed on the candidate region extracted from motion, color, and global appearance information. Simultaneous execution of detection and short-term tracking also increases the rate and accuracy of detection. Experimental results show a detection rate of 91% with very few false alarms, running at about 4 frames per second on 640 by 480 pixel images on a 1 GHz Pentium IV.

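The cascade described in the abstract — cheap cues proposing a candidate region that a more expensive classifier then verifies — can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the skin-color thresholds and the `verify()` stub (standing in for the ICA-SVM stage) are assumptions.

```python
# Cascaded detection sketch: a cheap skin-color rule proposes a candidate
# box, and only that box reaches the expensive verification stage.

def is_skin(r, g, b):
    """Crude normalized-rg skin rule (assumed thresholds)."""
    s = r + g + b
    if s == 0:
        return False
    rn, gn = r / s, g / s
    return 0.35 < rn < 0.55 and 0.25 < gn < 0.40

def candidate_box(image):
    """Bounding box of skin-colored pixels; image is a 2D grid of (r, g, b)."""
    coords = [(x, y) for y, row in enumerate(image)
              for x, px in enumerate(row) if is_skin(*px)]
    if not coords:
        return None
    xs, ys = zip(*coords)
    return (min(xs), min(ys), max(xs), max(ys))

def verify(image, box):
    """Placeholder for the ICA-SVM pattern check on the candidate region."""
    x0, y0, x1, y1 = box
    return (x1 - x0) >= 1 and (y1 - y0) >= 1  # toy acceptance rule

# Tiny synthetic frame: a skin-toned 2x2 patch on a dark background.
skin, dark = (200, 140, 90), (10, 10, 10)
frame = [[dark] * 4 for _ in range(4)]
for y in (1, 2):
    for x in (1, 2):
        frame[y][x] = skin

box = candidate_box(frame)
print(box, verify(frame, box))  # → (1, 1, 2, 2) True
```

The point of the cascade is economics: the per-pixel color test is nearly free, so the costly classifier runs on one small region per frame instead of the whole image.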

Robust AAM-based Face Tracking with Occlusion Using SIFT Features (SIFT 특징을 이용하여 중첩상황에 강인한 AAM 기반 얼굴 추적)

  • Eom, Sung-Eun;Jang, Jun-Su
    • The KIPS Transactions:PartB / v.17B no.5 / pp.355-362 / 2010
  • Face tracking is the estimation of the motion of a non-rigid face together with a rigid head in 3D, and it plays an important role in higher-level tasks such as face, facial expression, and emotion recognition. In this paper, we propose an AAM-based face tracking algorithm. AAM has been widely used to segment and track deformable objects, but many difficulties remain. In particular, it often diverges or converges to local minima when a target object is self-occluded, or partially or completely occluded. To address this problem, we utilize the scale-invariant feature transform (SIFT). SIFT is effective against self- and partial occlusion because it can find correspondences between feature points under partial loss, and its good global-matching performance enables an AAM to continue tracking through complete occlusions without re-initialization. We also register SIFT features extracted from multi-view face images and use them during tracking to track a face effectively across large pose changes. The proposed algorithm is validated by comparison with other algorithms under the three kinds of occlusion above.
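The occlusion-recovery idea above hinges on matching local descriptors between the registered model and the current frame. A minimal sketch of nearest-neighbor matching with Lowe's ratio test follows; the 4-D toy descriptors are an assumption for brevity (real SIFT descriptors are 128-D), and this is not the paper's code.

```python
# Nearest-neighbor descriptor matching with the ratio test: a model
# feature is matched only if its best frame match is clearly closer
# than its second-best, which rejects ambiguous clutter matches.
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(model_desc, frame_desc, ratio=0.8):
    """Return (model_idx, frame_idx) pairs passing the ratio test."""
    pairs = []
    for i, d in enumerate(model_desc):
        ranked = sorted(range(len(frame_desc)),
                        key=lambda j: dist(d, frame_desc[j]))
        best, second = ranked[0], ranked[1]
        if dist(d, frame_desc[best]) < ratio * dist(d, frame_desc[second]):
            pairs.append((i, best))
    return pairs

model = [(1.0, 0.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0)]
frame = [(0.0, 0.9, 0.1, 0.0),   # close to model[1]
         (1.1, 0.0, 0.0, 0.1),   # close to model[0]
         (5.0, 5.0, 5.0, 5.0)]   # clutter
print(match(model, frame))  # → [(0, 1), (1, 0)]
```

Because matching is done against a stored model rather than the previous frame, the tracker can re-localize the face after a complete occlusion, which is exactly what lets the AAM resume without re-initialization.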

3D Printed customized sports mouthguard (3D 프린터로 제작하는 마우스가드)

  • Ryu, Jae Jun;Lee, Soo Young
    • The Journal of the Korean dental association / v.58 no.11 / pp.700-712 / 2020
  • The conventional mouthguard fabrication process, which consists of elastomeric impression taking followed by gypsum model making, is now shifting to intraoral scanning and direct 3D printing of the mouthguard with an additive manufacturing process. In addition, dental professionals can collect various diagnostic data such as facial scans, cone-beam CT, jaw motion tracking, and intraoral scan data, and superimpose them to build virtual patient datasets. Dental CAD software allows dental professionals to design mouthguards with ease. This article shows, step by step, how to make a 3D printed mouthguard.


Intelligent Countenance Robot, Humanoid ICHR (지능형 표정로봇, 휴머노이드 ICHR)

  • Byun, Sang-Zoon
    • Proceedings of the KIEE Conference / 2006.10b / pp.175-180 / 2006
  • In this paper, we develop a humanoid robot that can express emotion in response to human actions. To interact with humans, the robot expresses its emotion through verbal communication based on voice and image recognition, motion tracking, and facial expressions driven by fourteen servo motors. The proposed humanoid robot system consists of a control board designed around an AVR90S8535 to control the servo motors, a frame equipped with the fourteen servo motors and two CCD cameras, and a personal computer to monitor its operation. The results of this research show that our intelligent emotional humanoid robot is intuitive and friendly, so humans can interact with it easily.


Eye Tracking Using Neural Network and Mean-shift (신경망과 Mean-shift를 이용한 눈 추적)

  • Kang, Sin-Kuk;Kim, Kyung-Tai;Shin, Yun-Hee;Kim, Na-Yeon;Kim, Eun-Yi
    • Journal of the Institute of Electronics Engineers of Korea CI / v.44 no.1 / pp.56-63 / 2007
  • In this paper, an eye tracking method is presented that uses a neural network (NN) and the mean-shift algorithm to accurately detect and track a user's eyes against a cluttered background. In the proposed method, to deal with rigid head motion, the facial region is first obtained using a skin-color model and connected-component analysis. Thereafter, the eye regions are localized using an NN-based texture classifier that discriminates the facial region into eye and non-eye classes, which enables our method to detect a user's eyes accurately even when glasses are worn. Once the eye regions are localized, they are continuously and correctly tracked by the mean-shift algorithm. To assess the validity of the proposed method, it is applied to an interface system driven by eye movement and tested with a group of 25 users playing an 'aligns' game. The results show that the system processes more than 30 frames/sec on a PC for 320×240 input images and supplies user-friendly, convenient access to a computer in real time.
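The mean-shift step described in the abstract repeatedly moves a search window to the weighted centroid of a likelihood map (here standing in for the NN's eye-texture scores). A minimal 2-D sketch, where the score map, start point, and window radius are illustrative assumptions:

```python
# Mean-shift on a 2-D score map: shift the window center to the
# weighted centroid of scores inside the window until it stabilizes.

def mean_shift(score, cx, cy, radius=2, iters=20):
    """Return the converged window center starting from (cx, cy)."""
    h, w = len(score), len(score[0])
    for _ in range(iters):
        sx = sy = total = 0.0
        for y in range(max(0, cy - radius), min(h, cy + radius + 1)):
            for x in range(max(0, cx - radius), min(w, cx + radius + 1)):
                wgt = score[y][x]
                sx += wgt * x
                sy += wgt * y
                total += wgt
        if total == 0:
            break  # no support in the window; give up
        nx, ny = round(sx / total), round(sy / total)
        if (nx, ny) == (cx, cy):
            break  # converged
        cx, cy = nx, ny
    return cx, cy

# Synthetic 8x8 likelihood map with a peak near (6, 5).
score = [[0.0] * 8 for _ in range(8)]
score[5][6] = 1.0
score[5][5] = score[4][6] = 0.5
print(mean_shift(score, 3, 3))  # → (6, 5)
```

Because each iteration only sums scores inside a small window, the tracker stays cheap enough for the 30 frames/sec figure quoted above, as long as the eye does not jump farther than the window radius between frames.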

Design and implement of the Educational Humanoid Robot D2 for Emotional Interaction System (감성 상호작용을 갖는 교육용 휴머노이드 로봇 D2 개발)

  • Kim, Do-Woo;Chung, Ki-Chull;Park, Won-Sung
    • Proceedings of the KIEE Conference / 2007.07a / pp.1777-1778 / 2007
  • In this paper, we design and implement a humanoid robot, for educational purposes, that can collaborate and communicate with humans. We present an affective human-robot communication system for a humanoid robot, D2, which we designed to communicate with a human through dialogue. D2 communicates with humans by understanding and expressing emotion using facial expressions, voice, gestures, and posture. Interaction between a human and the robot is made possible through our affective communication framework, which enables the robot to detect the emotional state of the user and respond appropriately. As a result, the robot can engage in natural dialogue with a human. To support interaction with a human through voice, gestures, and posture, the developed educational humanoid robot consists of an upper body, two arms, a wheeled mobile platform, and control hardware including vision and speech capabilities and various control boards, such as motion control boards and a signal processing board handling several types of sensors. Using the educational humanoid robot D2, we present successful demonstrations consisting of manipulation tasks with the two arms, object tracking using the vision system, and communication with humans through the emotional interface, synthesized speech, and recognition of speech commands.


Construction of Virtual Public Speaking Simulator for Treatment of Social Phobia (대인공포증의 치료를 위한 가상 연설 시뮬레이터의 실험적 제작)

  • 구정훈;장동표;신민보;조항준;안희범;조백환;김인영;김선일
    • Journal of Biomedical Engineering Research / v.21 no.6 / pp.615-621 / 2000
  • Social phobia is an anxiety disorder characterized by extreme fear and phobic avoidance of social and performance situations. Medication and cognitive-behavioral methods have mainly been used to treat it, but these methods have shortcomings: they can be inefficient and difficult to apply. Lately, virtual reality technology has been applied to anxiety disorders to compensate for these defects: a virtual environment presents the patient with stimuli that evoke the phobia, and exposure to the virtual phobic situation enables the patient to overcome it. In this study, we propose a public speaking simulator, based on a personal computer, for the treatment of social phobia. The simulator is composed of a position sensor, a head-mounted display (HMD), and an audio system. The virtual environment for treatment is a seminar room in which 8 avatars are sitting. It includes a tracking system that traces the participant's head movement using the HMD's position sensor, and 3D sound is added so that the environment feels realistic. We also make the avatars' motion and facial expressions change in reaction to the participant's speech. The goal of developing the public speaking simulator is to treat fear of public speaking efficiently and economically. In a future study, we should gather more information about immersion and treatment efficiency through clinical tests and apply it to this simulator.
