• Title/Summary/Keyword: Face Tracking


Development of a Cost-Effective Tele-Robot System Delivering Speaker's Affirmative and Negative Intentions (화자의 긍정·부정 의도를 전달하는 실용적 텔레프레즌스 로봇 시스템의 개발)

  • Jin, Yong-Kyu;You, Su-Jeong;Cho, Hye-Kyung
    • The Journal of Korea Robotics Society / v.10 no.3 / pp.171-177 / 2015
  • A telerobot offers a more engaging and enjoyable interaction with people at a distance by communicating via audio, video, expressive gestures, body pose, and proxemics. To provide these benefits at a reasonable cost, this paper presents a telepresence robot system for video communication which can deliver the speaker's head motion through its display stanchion. Head gestures such as nodding and head-shaking convey crucial information during conversation, and a speaker's eye gaze, known as one of the key non-verbal signals for interaction, can be inferred from his/her head pose. To develop an efficient head tracking method, a 3D cylinder-like head model is employed, and the Harris corner detector is combined with the Lucas-Kanade optical flow, which is known to be suitable for extracting the 3D motion information of the model. In particular, a skin color-based face detection algorithm is proposed to achieve robust performance under varying head orientations while maintaining reasonable computational cost. The performance of the proposed head tracking algorithm is verified through experiments on BU's standard data sets. The design of the robot platform is also described, along with supporting systems such as the video transmission and robot control interfaces.
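The skin color-based face detection mentioned above typically classifies pixels by their chrominance. A minimal sketch, assuming commonly cited YCbCr skin bounds rather than the paper's actual thresholds:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert 8-bit RGB to YCbCr (ITU-R BT.601, full range)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify a pixel as skin if its chrominance falls within the given ranges.
    The ranges are widely used YCbCr skin bounds, not values from the paper."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]
```

Because only Cb/Cr are thresholded, the rule is largely insensitive to brightness, which is what makes such detectors cheap and reasonably robust to pose changes.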

3D Facial Model Expression Creation with Head Motion (얼굴 움직임이 결합된 3차원 얼굴 모델의 표정 생성)

  • Kwon, Oh-Ryun;Chun, Jun-Chul;Min, Kyong-Pil
    • Proceedings of the HCI Society of Korea Conference / 2007.02a / pp.1012-1018 / 2007
  • This paper proposes a vision-based system for automatically generating facial expressions on a 3D face model. Previous research on 3D facial animation has focused on expression synthesis while excluding estimation of head motion, and head-motion estimation and expression control have been studied independently. The proposed expression-generation system consists of face detection, head-motion estimation, and expression control. Face detection comprises face-candidate-region detection followed by face-region detection: candidate regions are found using an HT color model, and the face region is extracted from them via PCA transformation and template matching. Head-motion estimation and expression control are then performed on the detected face region. Head motion is estimated by projecting a 3D cylinder model and applying the LK (Lucas-Kanade) algorithm, and the estimated result is applied to the 3D face model; image compensation makes the motion estimation robust. To generate the model's expressions, a feature-point-based method using 12 facial feature points is applied. Feature points around the eyebrows, eyes, and mouth are detected using the structural information of the face together with template matching, and are tracked with the LK algorithm. Because the tracked feature-point positions combine head-motion and expression information, a geometric transformation is used to obtain the animation parameters, i.e. the feature-point displacements that would be observed with a frontal face. The control points of the face model are moved according to these parameters, and the surrounding vertices are deformed by RBF interpolation. Facial expressions are generated from the deformed model, and applying the motion-estimation results to the model produces expressions on a 3D face model combined with head motion.

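The RBF deformation step above interpolates vertex displacements from the 12 control points. A minimal sketch of one displacement component with Gaussian radial basis functions (the kernel width and 2D setting are illustrative assumptions, not the paper's configuration):

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def gaussian_rbf(r, eps=1.0):
    return math.exp(-(eps * r) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting, for small systems."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda k: abs(M[k][i]))
        M[i], M[p] = M[p], M[i]
        for k in range(i + 1, n):
            f = M[k][i] / M[i][i]
            for j in range(i, n + 1):
                M[k][j] -= f * M[i][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rbf_deform(controls, displacements, vertex, eps=1.0):
    """Interpolate one displacement component at `vertex` from known
    control-point displacements (apply once per coordinate axis)."""
    n = len(controls)
    A = [[gaussian_rbf(dist(controls[i], controls[j]), eps) for j in range(n)]
         for i in range(n)]
    w = solve(A, displacements)          # RBF weights from the interpolation system
    return sum(w[i] * gaussian_rbf(dist(controls[i], vertex), eps) for i in range(n))
```

By construction the interpolant reproduces the control-point displacements exactly, and nearby vertices receive smoothly blended offsets, which is why RBF interpolation is a common choice for propagating control-point motion to a mesh.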

A Study on the Visual Precautions of Soju Advertising Posters Using Eye Tracking (아이트래킹을 활용한 소주광고 포스터의 시각적 주의에 관한 연구)

  • Hwang, Mi Kyung;Kwon, Mahn Woo;Park, Min Hee;Kim, Chee Yong
    • Journal of Korea Multimedia Society / v.23 no.2 / pp.368-375 / 2020
  • In this study, the areas of interest (AOIs) of Soju ad posters were tracked to analyze the time to first fixation, the average fixation duration, and the fixation count for each study index. The analysis showed that visual attention was higher on the ad model's face than on the body. This supports the claim that "when we look at printed ads, we see picture elements first, not language ones", though language elements cannot be overlooked either. The importance of the model's role could also be verified by measuring visual attention on the Soju ad posters. If further research on ad posters builds on these results and presents scientific, quantitative interpretation methods, the findings can serve as product marketing data to inform ad model selection and poster design.
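The three eye-tracking indices named above are straightforward to compute from a fixation log. A minimal sketch, assuming a simple rectangular AOI and a `(timestamp, x, y, duration)` fixation format (both assumptions, not the study's actual data layout):

```python
def aoi_metrics(fixations, aoi):
    """Compute time to first fixation, mean fixation duration, and fixation
    count for one area of interest (AOI).
    fixations: list of (timestamp_ms, x, y, duration_ms), in time order.
    aoi: rectangle (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = aoi
    hits = [(t, d) for t, x, y, d in fixations if x0 <= x <= x1 and y0 <= y <= y1]
    if not hits:
        return None, 0.0, 0                 # AOI was never fixated
    ttff = hits[0][0]                       # time to first fixation
    mean_dur = sum(d for _, d in hits) / len(hits)
    return ttff, mean_dur, len(hits)
```

Comparing these metrics between a "face" AOI and a "body" AOI is exactly the kind of contrast the study reports.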

Facial Behavior Recognition for Driver's Fatigue Detection (운전자 피로 감지를 위한 얼굴 동작 인식)

  • Park, Ho-Sik;Bae, Cheol-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.9C / pp.756-760 / 2010
  • This paper proposes a novel facial behavior recognition system for driver's fatigue detection. Facial behavior appears in various facial features such as facial expression, head pose, gaze, and wrinkles, but it is very difficult to discriminate a particular behavior from the obtained features alone, because human behavior is complicated and the face is vague in providing enough information. The proposed system first detects facial features through eye tracking, facial feature tracking, furrow detection, head orientation estimation, and head motion detection, and encodes the obtained features as Action Units (AUs) of FACS. On the basis of the obtained AUs, it infers the probability of each state occurring through a Bayesian network.
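The final inference step can be illustrated with a single-node version of such a network: a fatigue variable with AU observations as conditionally independent children (a naive-Bayes simplification of the paper's full Bayesian network; the AU names and probability values below are assumptions, not the paper's CPTs):

```python
def fatigue_posterior(evidence, prior=0.2, likelihoods=None):
    """Posterior P(fatigue | observed AUs) under a naive-Bayes assumption.
    evidence: dict mapping feature name -> bool (observed or not).
    likelihoods: feature -> (P(feature | fatigue), P(feature | alert))."""
    if likelihoods is None:
        likelihoods = {                       # illustrative values only
            "AU43_eye_closure": (0.7, 0.1),
            "AU26_jaw_drop":    (0.5, 0.1),   # e.g. yawning
            "head_nod":         (0.6, 0.2),
        }
    p_fatigue, p_alert = prior, 1.0 - prior
    for feature, observed in evidence.items():
        lf, la = likelihoods[feature]
        p_fatigue *= lf if observed else (1.0 - lf)
        p_alert *= la if observed else (1.0 - la)
    return p_fatigue / (p_fatigue + p_alert)  # normalize over both hypotheses
```

The appeal of the Bayesian formulation is exactly what the abstract notes: no single feature is decisive, but combining several weak AU cues shifts the posterior substantially.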

3D Visualization using Face Position and Direction Tracking (얼굴 위치와 방향 추적을 이용한 3차원 시각화)

  • Kim, Min-Ha;Kim, Ji-Hyun;Kim, Cheol-Ki;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2011.10a / pp.173-175 / 2011
  • In this paper, we present a user interface which can show 3D objects at various angles using the tracked 3D head position and orientation. In the implemented interface, first, when the user's head moves left/right (X-axis) or up/down (Y-axis), the displayed objects are translated toward the user's eyes using the 3D head position. Second, when the user's head rotates about the X-axis (pitch) or the Y-axis (yaw), the displayed objects are rotated by the same amount. Experiments from a variety of user positions and orientations show good accuracy and responsiveness for 3D visualization.

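The second behavior, rotating the scene by the head's pitch and yaw, amounts to applying two axis rotations to each object vertex. A minimal sketch (rotation order and sign conventions are assumptions; the paper does not specify them):

```python
import math

def rotate_by_head_pose(point, yaw_deg, pitch_deg):
    """Rotate a 3D point by the viewer's yaw (about the Y axis), then pitch
    (about the X axis), mirroring the idea of rotating displayed objects by
    the same angles as the tracked head."""
    x, y, z = point
    cy, sy = math.cos(math.radians(yaw_deg)), math.sin(math.radians(yaw_deg))
    x, z = cy * x + sy * z, -sy * x + cy * z        # yaw about Y
    cp, sp = math.cos(math.radians(pitch_deg)), math.sin(math.radians(pitch_deg))
    y, z = cp * y - sp * z, sp * y + cp * z         # pitch about X
    return x, y, z
```

Applying this per frame with the tracker's latest pose gives the "look around the object" effect the abstract describes.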

Understanding the Importance of Presenting Facial Expressions of an Avatar in Virtual Reality

  • Kim, Kyulee;Joh, Hwayeon;Kim, Yeojin;Park, Sohyeon;Oh, Uran
    • International journal of advanced smart convergence / v.11 no.4 / pp.120-128 / 2022
  • While online social interactions have become more prevalent with the increased popularity of Metaverse platforms, little has been studied about the effects of facial expressions in virtual reality (VR), which are known to play a key role in social contexts. To understand the importance of presenting the facial expressions of a virtual avatar under different contexts, we conducted a user study with 24 participants who were asked to have a conversation and play a charades game with an avatar with and without facial expressions. The results show that participants tend to gaze at the face region for the majority of the time when having a conversation, or when trying to guess emotion-related keywords while playing charades, regardless of the presence of facial expressions. Yet, we confirmed that participants prefer to see facial expressions in virtual reality as well as in real-world scenarios, as they help them better understand the context and have more immersive and focused experiences.

Tracking and Detection of Face Region in Long Distance Image (실시간 원거리 얼굴영역 검출 및 추적)

  • Park, Sung-Jin;Han, Sang-Il;Cha, Hyung-Tai
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2005.11a / pp.201-204 / 2005
  • Various techniques for recognizing faces in video, such as Eigenface-based and template-based methods, have been studied. While these techniques do detect face regions in video, they constrain the position and size that the face region occupies in the image: the input images are captured at a limited distance from the camera or obtained under experimental conditions, and face regions are detected from such images. To use face-region detection in a wide range of real applications, however, faces must be detectable not only in these constrained inputs but in images from any environment. This paper presents a method that can detect face regions in images acquired at long range as well as close range, and achieves improved detection through facial feature extraction and a prediction scheme. Motion information and skin-color information are used to build 8x8 blocks, and face-region candidates are detected by applying a set of rules to these blocks. Distinctive features of the candidate regions are then extracted, and the face region is confirmed through a Kalman-filter-based prediction scheme.

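The Kalman-filter prediction step above can be illustrated for one axis of the face center with a constant-velocity model. This is a generic textbook filter, not the paper's specific state model; the noise variances `q` and `r` are assumed values:

```python
def kalman_track(measurements, dt=1.0, q=0.01, r=4.0):
    """1-D constant-velocity Kalman filter over noisy face-center positions.
    q: process-noise variance, r: measurement-noise variance (illustrative).
    Returns the filtered position estimate after each measurement."""
    x, v = measurements[0], 0.0                  # state: position, velocity
    P = [[1.0, 0.0], [0.0, 1.0]]                 # state covariance
    out = []
    for z in measurements:
        # predict: x' = x + v*dt, P' = F P F^T + Q
        x, v = x + dt * v, v
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # update with position measurement z (H = [1, 0])
        S = P[0][0] + r
        k0, k1 = P[0][0] / S, P[1][0] / S        # Kalman gain
        innov = z - x
        x, v = x + k0 * innov, v + k1 * innov
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        out.append(x)
    return out
```

The prediction step is what lets a tracker keep a plausible face position through brief detection failures, the role the abstract assigns to it.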

Emotion Recognition and Expression System of Robot Based on 2D Facial Image (2D 얼굴 영상을 이용한 로봇의 감정인식 및 표현시스템)

  • Lee, Dong-Hoon;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.4 / pp.371-376 / 2007
  • This paper presents an emotion recognition and expression system for an intelligent robot such as a home or service robot. The robot recognizes emotion from facial images, using the motion and position of multiple facial features. A tracking algorithm is applied so that the mobile robot can follow a moving user, and a face-region detection algorithm eliminates the background and the skin color of the hands from the user image so that only the facial region remains. After normalization, which enlarges or reduces the image according to the distance to the detected face region and rotates it according to the angle of the face, the mobile robot obtains a facial image of fixed size. A multi-feature selection algorithm is implemented to enable the robot to recognize the user's emotion. A multilayer perceptron, a form of Artificial Neural Network (ANN), is used as the pattern recognizer, with the Back-Propagation (BP) algorithm for learning. The emotion the robot recognizes is expressed on a graphic LCD: two coordinates change according to the emotion output by the ANN, and the parameters of the facial elements (eyes, eyebrows, mouth) change with those coordinates. Through this system, the complex emotions of a human are expressed by the avatar on the LCD.
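The MLP-with-backpropagation recognizer described above can be sketched as a single training step of a tiny 2-2-1 sigmoid network. The network size, learning rate, and squared-error loss are illustrative assumptions; the paper's actual feature dimensionality and architecture are not given in the abstract:

```python
import math

def train_step(x, target, w1, w2, lr=0.1):
    """One forward/backward pass of a 2-2-1 MLP with sigmoid units and
    squared-error loss, trained by back-propagation (bias terms omitted
    for brevity). Returns updated weights and the pre-update loss."""
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    h = [sig(sum(w * xi for w, xi in zip(row, x))) for row in w1]  # hidden layer
    y = sig(sum(w * hi for w, hi in zip(w2, h)))                   # output
    loss = 0.5 * (y - target) ** 2
    # backward pass: chain rule through sigmoid output and hidden units
    dy = (y - target) * y * (1 - y)
    new_w2 = [w - lr * dy * hi for w, hi in zip(w2, h)]
    new_w1 = [[w - lr * dy * w2[i] * h[i] * (1 - h[i]) * xi
               for w, xi in zip(w1[i], x)]
              for i in range(len(w1))]
    return new_w1, new_w2, loss
```

Iterating this step over labeled facial-feature vectors is the BP training loop the abstract refers to; the trained output would then drive the avatar coordinates on the LCD.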

PERSONAL SPACE-BASED MODELING OF RELATIONSHIPS BETWEEN PEOPLE FOR NEW HUMAN-COMPUTER INTERACTION

  • Amaoka, Toshitaka;Laga, Hamid;Saito, Suguru;Nakajima, Masayuki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.746-750 / 2009
  • In this paper we focus on the Personal Space (PS) as a nonverbal communication concept on which to build a new human-computer interaction. Analyzing people's positions with respect to their PS gives an idea of the nature of their relationship. We propose to analyze and model the PS using Computer Vision (CV), and to visualize it using computer graphics. For this purpose, we define the PS based on four parameters: the distance between people, their face orientations, age, and gender. We automatically estimate the first two parameters from image sequences using CV technology, while the other two parameters are set manually. Finally, we calculate the two-dimensional relationships of multiple persons and visualize them as 3D contours in real time. Our method can sense and visualize invisible and unconscious PS distributions and convey the spatial relationships of users through an intuitive visual representation. The results of this paper can be applied to human-computer interaction in public spaces.

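A common way to realize the distance-and-orientation part of such a PS model is an asymmetric falloff that extends further in the facing direction. The sketch below follows that general idea only; the radii, the Gaussian shape, and the omission of the age and gender parameters are all assumptions, not the paper's model:

```python
import math

def personal_space(distance_m, facing_deg, front_radius=1.2, side_radius=0.8):
    """Intensity in [0, 1] of a person's PS at a point.
    facing_deg: angle between the face orientation and the direction to the
    point (0 = straight ahead). The PS reaches further toward the front,
    so the effective radius blends between front and side values."""
    t = abs(math.cos(math.radians(facing_deg)))
    radius = side_radius + (front_radius - side_radius) * t
    return math.exp(-(distance_m / radius) ** 2)
```

Summing such fields over all tracked people and drawing iso-intensity curves yields the kind of real-time 3D contour visualization the abstract describes.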

A New Height Estimation Scheme Using Geometric Information of a Stereo Camera Based on Pan/Tilt Control (팬/틸트 제어기반의 스테레오 카메라의 기하학적 정보를 이용한 새로운 높이 추정기법)

  • Ko, Jung-Hwan;Kim, Eun-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.2C / pp.156-165 / 2006
  • In this paper, a new intelligent moving-target tracking and surveillance system based on a pan/tilt-embedded stereo camera is suggested and implemented. In the proposed system, the face region of a target is first detected from the input stereo image using a YCbCr color model and a phase-type correlation scheme; then, using this data together with the geometric information of the tracking system, the distance and 3D information of the target are effectively extracted in real time. Based on these extracted data, the pan/tilt-embedded stereo camera is adaptively controlled, and as a result the proposed system can track the target under various circumstances. Experiments using 480 frames of test input stereo images show that the standard deviation of the estimated target height and the error ratio between the measured and computed 3D coordinates of the target remain very low, at 1.03 and 1.18% on average, respectively. These good experimental results suggest the feasibility of implementing a new real-time intelligent stereo target tracking and surveillance system using the proposed scheme.
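The core geometric idea, recovering a target's height from the camera geometry and the tilt angle, reduces to simple trigonometry. A minimal sketch of that relation only, assuming the camera must tilt up by `tilt_deg` to center the target's top; this is a simplification, not the paper's full stereo formulation:

```python
import math

def estimate_height(distance_m, tilt_deg, camera_height_m):
    """Estimate a target's height from the horizontal camera-to-target
    distance, the tilt angle that centers the target's top in the image,
    and the camera's mounting height above the ground."""
    return camera_height_m + distance_m * math.tan(math.radians(tilt_deg))
```

In the paper's setting the distance term would come from stereo disparity rather than being known a priori, which is why the pan/tilt angles and the stereo geometry are combined.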