• Title/Summary/Keyword: Facial Expression Space

Realtime Facial Expression Control of 3D Avatar by Isomap of Motion Data (모션 데이터에 Isomap을 사용한 3차원 아바타의 실시간 표정 제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / v.7 no.3 / pp.9-16 / 2007
  • This paper describes a methodology that distributes high-dimensional facial motion data on a two-dimensional plane using the Isomap algorithm, together with user interface techniques for controlling facial expressions by selecting them while navigating this space in real time. The Isomap algorithm proceeds in three steps. First, the adjacent expressions of each expression datum are defined; the smallest adjacency distance is determined using the Pearson correlation coefficient. Second, the manifold distance between expressions is calculated and the expression space is composed; this space is created by computing the shortest path (manifold distance) between every pair of expressions, for which the Floyd algorithm is used. Third, the multi-dimensional expression space is realized using Multidimensional Scaling and projected onto a two-dimensional plane. Users can control the facial expressions of a three-dimensional avatar through the user interface while navigating the two-dimensional space in real time.
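
The three-step pipeline described above maps directly onto a short script. The sketch below is a minimal illustration, not the paper's implementation: it assumes random stand-in motion data, builds a Pearson-correlation adjacency graph, computes manifold distances with Floyd-Warshall, and projects with classical MDS.

```python
import numpy as np

def isomap_2d(X, k=5):
    """Project high-dimensional motion frames X (n_frames x n_dims) to 2D.

    Follows the three steps in the abstract: (1) correlation-based
    adjacency, (2) Floyd-Warshall manifold distances, (3) classical MDS.
    Assumes the k-nearest-neighbor graph is connected.
    """
    n = X.shape[0]
    # Step 1: adjacency from Pearson correlation (high correlation = close).
    dist = 1.0 - np.corrcoef(X)
    graph = np.full((n, n), np.inf)
    np.fill_diagonal(graph, 0.0)
    for i in range(n):
        nearest = np.argsort(dist[i])[1:k + 1]   # k most correlated frames
        graph[i, nearest] = dist[i, nearest]
        graph[nearest, i] = dist[i, nearest]     # keep the graph symmetric
    # Step 2: Floyd-Warshall all-pairs shortest paths = manifold distances.
    for m in range(n):
        graph = np.minimum(graph, graph[:, m:m + 1] + graph[m:m + 1, :])
    # Step 3: classical MDS on squared geodesic distances; keep two axes.
    D2 = graph ** 2
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ D2 @ J
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:2]
    return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0))

# Toy usage: 50 random "expression frames" with 60 marker coordinates each.
coords_2d = isomap_2d(np.random.rand(50, 60), k=6)
print(coords_2d.shape)   # (50, 2) -- one point per expression to navigate
```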

The Emotional Boundary Decision in a Linear Affect-Expression Space for Effective Robot Behavior Generation (효과적인 로봇 행동 생성을 위한 선형의 정서-표정 공간 내 감정 경계의 결정 -비선형의 제스처 동기화를 위한 정서, 표정 공간의 영역 결정)

  • Jo, Su-Hun;Lee, Hui-Sung;Park, Jeong-Woo;Kim, Min-Gyu;Chung, Myung-Jin
    • Korean HCI Society: Conference Proceedings / 2008.02a / pp.540-546 / 2008
  • In the near future, robots should be able to understand humans' emotional states and exhibit appropriate behaviors accordingly. In human-human interaction, as much as 93% of communication consists of the speaker's nonverbal behavior, and bodily movements convey information about the intensity of emotion. Recent personal robots can interact with humans through multiple modalities such as facial expression, gesture, LED, sound, and sensors. However, a posture requires only a position and an orientation, whereas facial expressions and gestures involve movement, and verbal, vocal, musical, and color expressions require timing information. Because synchronization among modalities is a key problem, emotion expression needs a systematic approach. For example, at a low intensity of surprise the face can still be expressed, but the gesture cannot, because gesture expression is not linear. It is therefore necessary to decide emotional boundaries for effective robot behavior generation and for synchronization with the other expressive channels. If so, how can we define these emotional boundaries, and how can the modalities be synchronized with one another?
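
As a rough illustration of the boundary question the abstract poses, the sketch below assumes a single scalar intensity per emotion and one hypothetical threshold per modality; none of the names or values come from the paper.

```python
# Per-modality emotional boundaries: each expressive channel fires only
# when the emotion intensity crosses its own threshold. The threshold
# values below are hypothetical, not taken from the paper.
MODALITY_BOUNDARIES = {
    "facial_expression": 0.1,   # the face reacts even to faint emotions
    "led_color":         0.2,
    "sound":             0.4,
    "gesture":           0.6,   # body movement only for strong emotions
}

def active_modalities(emotion: str, intensity: float) -> list:
    """Return the channels that should express `emotion` at `intensity`
    (0.0-1.0), i.e. those whose boundary the intensity has crossed."""
    return [m for m, bound in MODALITY_BOUNDARIES.items() if intensity >= bound]

# At low-intensity surprise only the face responds, as the abstract notes.
print(active_modalities("surprise", 0.15))  # ['facial_expression']
print(active_modalities("surprise", 0.80))  # all four channels
```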

Nonverbal Expressions in New Media Art -Case Studies about Facial Expressions and Sound (뉴미디어 아트에 나타난 비언어적 표현 -표정과 소리의 사례연구를 중심으로)

  • Yoo, Mi;An, KyoungHee
    • The Journal of the Korea Contents Association / v.19 no.10 / pp.146-156 / 2019
  • New media art moves beyond the constraints of place and time, sublimates the benefits of technology into art, and presents a new way of communicating with the audience. This paper analyses tendencies in nonverbal communication by examining examples of facial expressions and sound used in new media art from its early period. The analysis shows that the digital paradigm in new media art entails nonlinear thinking, which brings about a perceptual interplay of immersion and dispersion. Facial expression in new media art not only overcomes the spatial and temporal limits of expression through visual distortion, enlargement, and virtualisation, but also enables new modes of communication in which facial parts are combined or separated in the digital environment. Sound in new media art does not remain merely auditory; it pursues the multi-sensory and the synesthetic in cooperation with the visual and the tactile, and evolves by revealing characteristics of spatial expansion, sensibility, and audience interaction.

Hybrid-Feature Extraction for the Facial Emotion Recognition

  • Byun, Kwang-Sub;Park, Chang-Hyun;Sim, Kwee-Bo;Jeong, In-Cheol;Ham, Ho-Sang
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2004.08a / pp.1281-1285 / 2004
  • There are numerous emotions in the human world, and humans express and recognize their emotions through various channels, for example the eyes, nose, and mouth. In particular, emotion recognition from facial expressions can be very flexible and robust precisely because it draws on these various channels. The hybrid-feature extraction algorithm is based on this human process: it uses geometrical feature extraction and a color-distribution histogram, and the input emotion is then classified through independent, parallel learning of neural networks. In addition, for natural classification of emotion, an advancing two-dimensional emotion space is introduced and used in this paper; it enables flexible and smooth classification of emotion.
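
A minimal sketch of the parallel two-channel idea, assuming scikit-learn is available; the feature dimensions, toy data, and fusion-by-averaging rule are illustrative stand-ins, not the paper's configuration.

```python
# Two feature channels (geometric features and a colour histogram) are
# classified by two independently trained networks whose outputs are fused.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 200
geom  = rng.random((n, 10))     # e.g. eye/mouth distances and angles (toy)
hist  = rng.random((n, 32))     # e.g. colour-distribution histogram bins
label = rng.integers(0, 4, n)   # four emotion classes (toy labels)

# Independent, parallel learning: one network per feature channel.
net_geom = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(geom, label)
net_hist = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(hist, label)

# Fuse the two channels by averaging their class probabilities.
proba = (net_geom.predict_proba(geom) + net_hist.predict_proba(hist)) / 2
pred = proba.argmax(axis=1)
print("fused training accuracy:", (pred == label).mean())
```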

A Generation Methodology of Facial Expressions for Avatar Communications (아바타 통신에서의 얼굴 표정의 생성 방법)

  • Kim Jin-Yong;Yoo Jae-Hwi
    • Journal of the Korea Society of Computer and Information / v.10 no.3 s.35 / pp.55-64 / 2005
  • The avatar can be used as an auxiliary means of text and image communication in cyberspace. An intelligent communication method can also be used to achieve real-time communication, in which intelligently coded data (joint angles for arm gestures and action units for facial emotions) are transmitted instead of real or compressed pictures. In this paper, to complement arm and leg gestures, a method of generating facial expressions that represent the sender's emotions is provided. A facial expression can be represented by Action Units (AUs); we suggest a methodology for finding appropriate AUs in avatar models of various shapes and structures. To maximize the efficiency of emotional expression, a comic-style facial model having only eyebrows, eyes, a nose, and a mouth is employed. The generation of facial emotion animation with these parameters is also investigated.
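
The sketch below illustrates the AU-coding idea for the comic-style model: only a small dictionary of AU activations is transmitted rather than a picture. The emotion-to-AU sets are commonly cited FACS associations and the intensity values are invented; the paper's own AU mappings may differ.

```python
from dataclasses import dataclass, field

# Action Units for a comic-style model (brows, eyes, nose, mouth only).
# Hypothetical emotion-to-AU mapping based on common FACS associations.
EMOTION_TO_AUS = {
    "happiness": {6: 0.8, 12: 1.0},         # cheek raiser, lip-corner puller
    "sadness":   {1: 0.7, 4: 0.5, 15: 0.8}, # inner brow raiser, brow lowerer,
                                            # lip-corner depressor
    "surprise":  {1: 1.0, 2: 1.0, 26: 0.9}, # brow raisers, jaw drop
}

@dataclass
class AvatarFace:
    """Minimal facial state: AU number -> activation in [0, 1]."""
    aus: dict = field(default_factory=dict)

    def apply_emotion(self, emotion: str, strength: float = 1.0):
        for au, level in EMOTION_TO_AUS[emotion].items():
            self.aus[au] = min(1.0, level * strength)

# Only the AU vector would be transmitted, not a picture: a few floats.
face = AvatarFace()
face.apply_emotion("surprise", strength=0.5)
print(face.aus)   # {1: 0.5, 2: 0.5, 26: 0.45}
```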

A Study on Improvement of Face Recognition Rate with Transformation of Various Facial Poses and Expressions (얼굴의 다양한 포즈 및 표정의 변환에 따른 얼굴 인식률 향상에 관한 연구)

  • Choi Jae-Young;Whangbo Taeg-Keun;Kim Nak-Bin
    • Journal of Internet Computing and Services / v.5 no.6 / pp.79-91 / 2004
  • Detecting and recognizing faces in various poses has been a difficult problem, because the distribution of varied poses in a feature space is more dispersed and more complicated than that of frontal faces. This thesis proposes a robust pose- and expression-invariant face recognition method to overcome the insufficiency of existing face recognition systems. First, we apply the TSL color model to detect the facial region and estimate the direction of the face using facial features; the estimated pose vector is decomposed into the X-Y-Z axes. Second, the input face is mapped by a deformable template using these vectors and the 3D CANDIDE face model. Finally, the mapped face is transformed by the estimated pose vector into a frontal face suitable for recognition. The experiments validate the face detection model and the pose estimation method, and show that the recognition rate is greatly boosted by normalizing poses and expressions.
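
A minimal sketch of the normalization step, assuming the pose vector has already been estimated and decomposed into rotations about the X, Y, and Z axes; the landmark coordinates are toy values, and the deformable-template mapping is omitted.

```python
import numpy as np

def rotation(yaw, pitch, roll):
    """Rotation matrix from pose angles (radians), R = Rz @ Rx @ Ry."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cx, sx = np.cos(pitch), np.sin(pitch)
    cz, sz = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Rx @ Ry

# Toy 3D landmarks of a frontal face (eyes, nose tip, mouth corners).
frontal = np.array([[-30., 20., 0.], [30., 20., 0.],
                    [0., 0., 20.], [-20., -25., 5.], [20., -25., 5.]])

pose = np.radians([25.0, 10.0, -5.0])          # estimated yaw/pitch/roll
observed = frontal @ rotation(*pose).T          # what the camera would see

# Normalization: apply the inverse rotation to recover the frontal layout.
recovered = observed @ rotation(*pose)          # R^-1 = R^T for rotations
print(np.allclose(recovered, frontal))          # True
```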

Active Facial Tracking for Fatigue Detection (피로 검출을 위한 능동적 얼굴 추적)

  • Kim, Tae-Woo;Kang, Yong-Seok
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.2 no.3 / pp.53-60 / 2009
  • Vision-based driver fatigue detection is one of the most promising commercial applications of facial expression recognition technology, and facial feature tracking is the primary technical issue it raises. Current facial tracking technology faces three challenges: (1) detection failure of some or all features due to varying lighting conditions and head motions; (2) multiple and non-rigid object tracking; and (3) feature occlusion when the head is at oblique angles. In this paper, we propose a new active approach. First, an active IR sensor is used to robustly detect pupils under variable lighting conditions. The detected pupils are then used to predict the head motion. Furthermore, face movement is assumed to be locally smooth so that each facial feature can be tracked with a Kalman filter. The simultaneous use of the pupil constraint and Kalman filtering greatly increases the prediction accuracy for each feature position. Feature detection is accomplished in the Gabor space in the vicinity of the predicted location. Local graphs consisting of identified features are extracted and used to capture the spatial relationships among detected features. Finally, a graph-based reliability propagation is proposed to tackle the occlusion problem and verify the tracking results. The experimental results show the validity of our active approach to real-life facial tracking under variable lighting conditions, head orientations, and facial expressions.
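
A compact sketch of per-feature tracking with a constant-velocity Kalman filter, in the spirit of the abstract's locally smooth motion assumption; the noise covariances and measurements are illustrative, and the pupil constraint is omitted.

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],     # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],      # we only measure position
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01             # process noise (smooth local motion)
R = np.eye(2) * 2.0              # measurement noise (detector jitter)

x = np.array([100., 80., 0., 0.])   # initial feature position and velocity
P = np.eye(4) * 10.0

def kalman_step(x, P, z):
    # Predict where the feature will be; the detector searches near it.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the detected position z (e.g. from a Gabor-space search).
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

for z in [np.array([101., 81.]), np.array([103., 82.]), np.array([105., 83.])]:
    x, P = kalman_step(x, P, z)
print(x[:2])   # smoothed feature position after three frames
```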

Face Image Synthesis using Nonlinear Manifold Learning (비선형 매니폴드 학습을 이용한 얼굴 이미지 합성)

  • Cho, Eun-Ok;Kim, Dae-Jin;Bang, Sung-Yang
    • Journal of KIISE: Software and Applications / v.31 no.2 / pp.182-188 / 2004
  • This paper proposes to synthesize facial images from a few parameters for the pose and expression of their constituent components. This parameterization makes the representation, storage, and transmission of face images efficient, but it is difficult to achieve because the variations of face images form a complicated nonlinear manifold in a high-dimensional data space. To tackle this problem, we use the LLE (Locally Linear Embedding) technique for a good representation of face images, in which the relationships among face images are well preserved and the manifold projected into the reduced feature space becomes smoother and more continuous. Next, we apply a snake model to estimate the feature value in the reduced feature space that corresponds to a specific pose and/or expression parameter. Finally, a synthetic face image is obtained by interpolating several neighboring face images in the vicinity of the estimated feature value. Experimental results show that the proposed method has a negligible overlapping effect and creates accurate and consistent synthetic face images with respect to changes of pose and/or expression parameters.
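
A hedged sketch of the embed-then-interpolate idea using scikit-learn's LocallyLinearEmbedding; the images are random stand-ins, and the snake-model step for locating the target feature value is replaced by an explicit target point.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(1)
images = rng.random((100, 32 * 32))          # 100 flattened face images (toy)

# Embed the images on a smooth low-dimensional manifold.
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
codes = lle.fit_transform(images)

def synthesize(target, k=4):
    """Interpolate the k training images closest to `target` in LLE space."""
    d = np.linalg.norm(codes - target, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-8)                # inverse-distance weights
    w /= w.sum()
    return (w[:, None] * images[idx]).sum(axis=0)

new_face = synthesize(codes.mean(axis=0))    # a face near the manifold centre
print(new_face.shape)                        # (1024,) flattened synthetic face
```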

Active Facial Tracking for Fatigue Detection (피로 검출을 위한 능동적 얼굴 추적)

  • Park, Ho-Sik;Jung, Yeon-Sook;Son, Dong-Ju;Na, Sang-Dong;Bae, Cheol-Soo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2004.05b / pp.603-607 / 2004
  • Vision-based driver fatigue detection is one of the most promising commercial applications of facial expression recognition technology, and facial feature tracking is the primary technical issue it raises. Current facial tracking technology faces three challenges: (1) detection failure of some or all features due to varying lighting conditions and head motions; (2) multiple and non-rigid object tracking; and (3) feature occlusion when the head is at oblique angles. In this paper, we propose a new active approach. First, an active IR sensor is used to robustly detect pupils under variable lighting conditions. The detected pupils are then used to predict the head motion. Furthermore, face movement is assumed to be locally smooth so that each facial feature can be tracked with a Kalman filter. The simultaneous use of the pupil constraint and Kalman filtering greatly increases the prediction accuracy for each feature position. Feature detection is accomplished in the Gabor space in the vicinity of the predicted location. Local graphs consisting of identified features are extracted and used to capture the spatial relationships among detected features. Finally, a graph-based reliability propagation is proposed to tackle the occlusion problem and verify the tracking results. The experimental results show the validity of our active approach to real-life facial tracking under variable lighting conditions, head orientations, and facial expressions.
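
Complementing the Kalman sketch shown after the journal version of this paper above, the sketch below illustrates the other half of the loop: picking the strongest response of a small Gabor-filter bank inside a window around the predicted location. It assumes OpenCV is available; the kernel parameters and the synthetic frame are illustrative.

```python
import cv2
import numpy as np

img = np.random.rand(240, 320).astype(np.float32)   # stand-in grey frame

# Bank of Gabor responses at several orientations.
kernels = [cv2.getGaborKernel((21, 21), sigma=4.0, theta=t,
                              lambd=10.0, gamma=0.5, psi=0)
           for t in np.linspace(0, np.pi, 4, endpoint=False)]
responses = [cv2.filter2D(img, cv2.CV_32F, k) for k in kernels]
energy = np.sum([r ** 2 for r in responses], axis=0)  # Gabor energy map

def detect_near(pred_xy, radius=15):
    """Pick the strongest Gabor response inside a window around the
    position predicted by the Kalman filter."""
    px, py = int(pred_xy[0]), int(pred_xy[1])
    x0, y0 = max(px - radius, 0), max(py - radius, 0)
    window = energy[y0:py + radius, x0:px + radius]
    dy, dx = np.unravel_index(np.argmax(window), window.shape)
    return (x0 + dx, y0 + dy)

print(detect_near((160, 120)))   # measured position fed back to the filter
```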

Development of an Emotion Recognition Robot using a Vision Method (비전 방식을 이용한 감정인식 로봇 개발)

  • Shin, Young-Geun;Park, Sang-Sung;Kim, Jung-Nyun;Seo, Kwang-Kyu;Jang, Dong-Sik
    • IE interfaces / v.19 no.3 / pp.174-180 / 2006
  • This paper deals with a robot system that recognizes a human's expression from a detected face and then displays the corresponding emotion. The face detection method is as follows: first, convert the RGB color space to the CIELab color space; second, extract skin-candidate regions; third, detect a face through the geometrical interrelations of facial features using a face filter. The positions of the eyebrows, eyes, and mouth are then used as the preliminary data for expression, and their changes are sent to the robot through serial communication. The robot drives its installed motors to reproduce the human's expression. Experimental results on 10 persons show 78.15% accuracy.
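
A rough sketch of the detection front-end the abstract describes, assuming OpenCV: convert to CIELab, threshold a skin-like chroma band, and screen the resulting regions with simple geometric checks. The threshold values and the geometric criteria are hypothetical, not the paper's.

```python
import cv2
import numpy as np

def skin_candidates(bgr):
    """Mask of skin-candidate pixels in OpenCV's 8-bit Lab encoding."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    L, a, b = cv2.split(lab)
    # Hypothetical skin chroma band; a and b are offset by 128 in 8-bit Lab.
    mask = ((a > 133) & (a < 173) & (b > 127) & (b < 185))
    return mask.astype(np.uint8) * 255

def face_like(contour, min_area=500):
    """Geometric face filter: plausible size and roughly oval aspect ratio."""
    x, y, w, h = cv2.boundingRect(contour)
    return cv2.contourArea(contour) > min_area and 0.6 < w / h < 1.4

frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)  # stand-in
mask = skin_candidates(frame)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
faces = [c for c in contours if face_like(c)]
print(len(faces), "candidate face region(s)")
```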