• Title/Summary/Keyword: Facial Expression

Search Results: 518 (Processing Time: 0.032 seconds)

Measurement Method on Aesthetic Experience of Game Player (게임플레이어의 미적경험 데이터 측정방법)

  • Choi, Gyu-Hyeok;Kim, Mi-Jin
    • Journal of Korea Entertainment Industry Association
    • /
    • v.14 no.5
    • /
    • pp.207-215
    • /
    • 2020
  • Studies on aesthetic experience in games have mostly taken either an engineering approach, centered on the structural analysis of specific targets within the game, or a humanistic and social approach in the form of artistic discourse on game-play experience. This paper establishes a theoretical guideline on the progress of aesthetic experience that allows analysis from the perspective of the player experience acquired during game play. Based on this guideline, the study classifies cognitive data on aesthetic experience (eye tracking, playing action, facial expression) and suggests methods for measuring such data. By identifying errors and measurement considerations through pilot tests, this study will contribute to the execution of empirical studies focused on the player perspective.

Face Recognition Using Local Statistics of Gradients and Correlations (그래디언트와 상관관계의 국부통계를 이용한 얼굴 인식)

  • Ju, Yingai;So, Hyun-Joo;Kim, Nam-Chul
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.3
    • /
    • pp.19-29
    • /
    • 2011
  • Many face recognition methods have been proposed to date; most use a 1-dimensional feature vector obtained by vectorizing the input image without a feature extraction process, or use the input image itself as a feature matrix. Methods that use the raw image are known to perform poorly on databases with severe illumination changes. In this paper, we propose a face recognition method using local statistics of gradients and correlations, which are robust to illumination changes. BDIP (block difference of inverse probabilities) is chosen as a local statistic of gradients, and two types of BVLC (block variation of local correlation coefficients) are chosen as local statistics of correlations. When an input image enters the system, the method extracts the BDIP, BVLC1, and BVLC2 feature images, fuses them, obtains a feature matrix by the $(2D)^2$ PCA transformation, and classifies it against the training feature matrices with a nearest-neighbor classifier. Experimental results on four face databases, FERET, Weizmann, Yale B, and Yale, show that the proposed method is more reliable than six other methods under lighting and facial expression variations.
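The (2D)² PCA projection and nearest-neighbor classification described in this abstract can be sketched as follows. This is an illustrative reimplementation, not the authors' code, and it omits the BDIP/BVLC feature extraction stage: the images below stand in for fused feature images.

```python
import numpy as np

def two_d_squared_pca(images, rows_keep, cols_keep):
    """(2D)^2 PCA: learn row- and column-direction eigenvector bases
    from a list of equally sized 2-D arrays (feature images)."""
    A = np.stack(images).astype(float)
    mean = A.mean(axis=0)
    D = A - mean
    Gc = sum(d @ d.T for d in D) / len(D)   # column-direction scatter (m x m)
    Gr = sum(d.T @ d for d in D) / len(D)   # row-direction scatter (n x n)
    _, Z = np.linalg.eigh(Gc)               # eigh returns ascending order
    _, X = np.linalg.eigh(Gr)
    Z = Z[:, ::-1][:, :rows_keep]           # keep leading eigenvectors
    X = X[:, ::-1][:, :cols_keep]
    return mean, Z, X

def project(img, mean, Z, X):
    """Project one image to a small rows_keep x cols_keep feature matrix."""
    return Z.T @ (img - mean) @ X

def nearest(feat, train_feats, labels):
    """Nearest-neighbor classifier on Frobenius distance between matrices."""
    dists = [np.linalg.norm(feat - f) for f in train_feats]
    return labels[int(np.argmin(dists))]
```

With random stand-in data, `project` compresses an 8×6 image to a 4×3 feature matrix, and `nearest` returns the label of the closest training feature matrix.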

The Intelligent Determination Model of Audience Emotion for Implementing Personalized Exhibition (개인화 전시 서비스 구현을 위한 지능형 관객 감정 판단 모형)

  • Jung, Min-Kyu;Kim, Jae-Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.39-57
    • /
    • 2012
  • Recently, due to the introduction of high-tech equipment in interactive exhibits, much attention has been concentrated on interactive exhibits that can double the exhibition effect through interaction with the audience. Such exhibitions also make it possible to measure a variety of audience reactions. Among these, this research uses changes in facial features that can be collected in an interactive exhibition space. This research develops an artificial neural network-based prediction model that predicts the audience's response by measuring the change in facial features when the audience is given a stimulus from a non-excited state. To represent the audience's emotional state, this research uses a Valence-Arousal model. The overall framework consists of six steps. The first step collects data for modeling; the data were collected from participants in the 2012 Seoul DMC Culture Open and used in the experiments. The second step extracts 64 facial features from the collected data and compensates the facial feature values. The third step generates the independent and dependent variables of the artificial neural network model. The fourth step selects, using statistical techniques, the independent variables that affect the dependent variable. The fifth step builds the artificial neural network model and performs learning with the train and test sets. The sixth step validates the prediction performance of the model using the validation data set. The proposed model was compared with a statistical predictive model; although the data set contained much noise, the proposed model showed better results than the multiple regression analysis model.
    If this prediction model were used in a real exhibition, it could provide countermeasures and services appropriate to the audience's reaction to the exhibits. Specifically, if the audience's arousal toward an exhibit is low, action to increase it can be taken, for instance recommending other preferred content or using light or sound to draw attention to the exhibit. In other words, when planning future exhibitions, it would be possible to satisfy diverse audience preferences and to foster a personalized environment that helps visitors concentrate on the exhibits. However, the proposed model still shows low prediction accuracy, for the following reasons. First, the data cover diverse visitors to real exhibitions, so it was difficult to control an optimized experimental environment; the collected data therefore contain much noise, which lowers accuracy. In further research, data collection will be conducted in a more controlled experimental environment, and work to increase the model's prediction accuracy will continue. Second, changes in facial expression alone are thought to be insufficient for extracting audience emotions; combining facial expression with other responses, such as sound or audience behavior, would yield better results.
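The six-step framework above (collect, extract features, form variables, select, train, validate) can be sketched as a minimal neural-network valence regression. Everything here is a stand-in: synthetic "facial feature" data, 8 features instead of the paper's 64, and a one-hidden-layer network rather than the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Steps 1-2 (stand-ins): synthetic facial-feature changes and valence targets
X = rng.standard_normal((200, 8))     # 8 features for brevity, not the real 64
true_w = rng.standard_normal(8)
y = np.tanh(X @ true_w)               # hypothetical valence in [-1, 1]

# Steps 3-5: split the data and train a one-hidden-layer tanh network
X_tr, X_va, y_tr, y_va = X[:150], X[150:], y[:150], y[150:]
W1 = rng.standard_normal((8, 16)) * 0.1
W2 = rng.standard_normal(16) * 0.1
lr = 0.1
for _ in range(2000):
    H = np.tanh(X_tr @ W1)
    err = H @ W2 - y_tr
    # full-batch gradient descent on mean squared error
    W2 -= lr * H.T @ err / len(y_tr)
    W1 -= lr * X_tr.T @ ((err[:, None] * W2) * (1 - H ** 2)) / len(y_tr)

# Step 6: validate on held-out data
val_mse = np.mean((np.tanh(X_va @ W1) @ W2 - y_va) ** 2)
```

On this toy problem the trained network's validation error falls below that of a trivial zero-valence predictor, mirroring the paper's comparison against a statistical baseline.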

A Generation Method of Comic Facial Expressions for Intelligent Avatar Communications (지적 아바타 통신을 위한 코믹한 얼굴 표정의 생성법)

  • ;;Yoshinao Aoki
    • Proceedings of the IEEK Conference
    • /
    • 2000.11d
    • /
    • pp.227-230
    • /
    • 2000
  • Sign language can be used as an auxiliary means of communication between avatars of different languages in cyberspace. In such cases, an intelligent communication method can be utilized to achieve real-time communication, where intelligently coded data (joint angles for arm gestures and action units for facial emotions) are transmitted instead of real pictures. In this paper, a method of generating facial gesture CG animation on different avatar models is provided. First, to edit emotional expressions efficiently, a comic-style facial model having only eyebrows, eyes, nose, and mouth is employed. The generation of facial emotion animation with these parameters is then investigated. Experimental results show that the method could be used for intelligent avatar communication between Korean and Japanese users.
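The core idea of transmitting intelligently coded data instead of pictures can be illustrated with a tiny data structure. The field and parameter names below are hypothetical, not taken from the paper; the point is that a frame of joint angles and action units serializes to a few dozen bytes, versus kilobytes for an image.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AvatarFrame:
    """One intelligently coded frame: parameters, not pixels (names hypothetical)."""
    joint_angles: dict   # e.g. {"l_shoulder": 42.0} for sign-language gestures
    action_units: dict   # e.g. {"AU12": 0.8} for facial emotion intensity

frame = AvatarFrame(
    joint_angles={"l_shoulder": 42.0, "r_elbow": 15.5},
    action_units={"AU1": 0.2, "AU12": 0.8},
)
payload = json.dumps(asdict(frame))  # a compact payload for real-time transmission
```

The receiving side decodes the payload and drives its own avatar model, which is what lets the same coded data animate different avatar models.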


Real-Time Emotional Change Recognition Technique using EEG signal (뇌전도 신호를 이용한 실시간 감정변화 인식 기법)

  • Choi, Dong Yoon;Lee, Sang Hyuk;Song, Byung Cheol
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2019.11a
    • /
    • pp.131-133
    • /
    • 2019
  • Emotion recognition is a key technology for emotional interaction between humans and artificial intelligence. Research on emotion recognition based on facial images has been the most widely conducted, but we propose an EEG-based emotion recognition technique to recognize inner emotions that are not revealed in facial expressions. First, features are extracted from 2-second EEG segments in the time, frequency, and time-frequency domains, and valence is estimated by a regressor composed of three fully connected layers. In experiments on the MAHNOB-HCI dataset, the proposed technique shows lower error than conventional techniques and recognizes emotional changes in real time.
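The feature extraction step over a 2-second EEG window can be sketched as below. The specific features (per-channel mean, standard deviation, line length, and theta/alpha/beta band powers) are illustrative choices, not the paper's exact feature set, and the regressor stage is omitted.

```python
import numpy as np

def eeg_features(window, fs=256):
    """Time- and frequency-domain features from one EEG window
    of shape (channels, samples). Feature choices are illustrative."""
    # time domain: mean, standard deviation, line length per channel
    t = np.stack([window.mean(1), window.std(1),
                  np.abs(np.diff(window, axis=1)).sum(1)], axis=1)
    # frequency domain: theta/alpha/beta band power via the FFT power spectrum
    spec = np.abs(np.fft.rfft(window, axis=1)) ** 2
    freqs = np.fft.rfftfreq(window.shape[1], 1 / fs)
    bands = [(4, 8), (8, 13), (13, 30)]
    f = np.stack([spec[:, (freqs >= lo) & (freqs < hi)].sum(1)
                  for lo, hi in bands], axis=1)
    return np.concatenate([t, f], axis=1).ravel()

# one 2-second window from 4 channels sampled at 256 Hz
rng = np.random.default_rng(0)
feat = eeg_features(rng.standard_normal((4, 512)))
```

The resulting flat vector (here 4 channels × 6 features = 24 values) is what would be fed to the three fully connected layers to regress valence.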


Emotional System Applied to Android Robot for Human-friendly Interaction (인간 친화적 상호작용을 위한 안드로이드 로봇의 감성 시스템)

  • Lee, Tae-Geun;Lee, Dong-Uk;So, Byeong-Rok;Lee, Ho-Gil
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2007.04a
    • /
    • pp.95-98
    • /
    • 2007
  • This paper presents the emotional system applied to the android robot platform (EveR series) developed at the Korea Institute of Industrial Technology. The EveR platform can perform facial expressions, gestures, and speech synthesis, and the emotional system is applied to it to facilitate human-friendly interaction. The emotional system consists of a Motivation Module that motivates the robot, an Emotion Module that holds various emotions, a Personality Module that influences the emotions, gestures, and voice, and a Memory Module that determines the weights of received stimuli and situations. The system takes voice, text, vision, touch, and context information as input, and outputs the selected emotion and its weight, behaviors, and gestures to induce naturalness in conversation with humans.
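The four-module structure described above might be organized along the following lines. All class and method names here are hypothetical, intended only to show how the Memory, Personality, Emotion, and Motivation roles could interact; the paper gives no implementation details.

```python
class EmotionSystem:
    """Sketch of the four-module emotional system (interfaces hypothetical)."""

    def __init__(self, personality):
        self.personality = personality   # Personality: (stimulus, emotion) -> gain
        self.memory = {}                 # Memory: stimulus -> learned weight
        self.emotions = {"joy": 0.0, "sadness": 0.0, "anger": 0.0}  # Emotion

    def perceive(self, stimulus, intensity):
        # Memory module weights the incoming stimulus by past experience
        w = self.memory.get(stimulus, 1.0)
        # Emotion module updates emotions, shaped by the Personality module
        for emo in self.emotions:
            gain = self.personality.get((stimulus, emo), 0.0)
            self.emotions[emo] += w * gain * intensity

    def act(self):
        # Motivation module: express the currently dominant emotion
        emo = max(self.emotions, key=self.emotions.get)
        return {"expression": emo, "weight": self.emotions[emo]}

robot = EmotionSystem(personality={("praise", "joy"): 1.0})
robot.perceive("praise", 0.5)
```

A call to `robot.act()` would then select the dominant emotion and its weight for the expression, gesture, and voice outputs.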


Automatic Human Emotion Recognition from Speech and Face Display - A New Approach (인간의 언어와 얼굴 표정에 통하여 자동적으로 감정 인식 시스템 새로운 접근법)

  • Luong, Dinh Dong;Lee, Young-Koo;Lee, Sung-Young
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2011.06b
    • /
    • pp.231-234
    • /
    • 2011
  • Audiovisual human emotion recognition can be considered a good approach for multimodal human-computer interaction. However, optimal multimodal information fusion remains a challenge. In order to overcome its limitations and bring robustness to the interface, we propose a framework for an automatic human emotion recognition system based on speech and face display. In this paper, we develop a new approach to model-level information fusion, based on the relationship between speech and facial expression, to detect temporal segments automatically and perform multimodal information fusion.

Life-like Facial Expression of Mascot-Type Robot Based on Emotional Boundaries (감정 경계를 이용한 로봇의 생동감 있는 얼굴 표정 구현)

  • Park, Jeong-Woo;Kim, Woo-Hyun;Lee, Won-Hyong;Chung, Myung-Jin
    • The Journal of Korea Robotics Society
    • /
    • v.4 no.4
    • /
    • pp.281-288
    • /
    • 2009
  • Nowadays, many robots have evolved to imitate human social skills so that sociable interaction with humans is possible. Socially interactive robots require abilities different from those of conventional robots. For instance, human-robot interactions are accompanied by emotion, similar to human-human interactions. Robot emotional expression is thus very important for humans. This is particularly true for facial expressions, which play an important role in communication among the non-verbal forms. In this paper, we introduce a method of creating lifelike facial expressions in robots using variations of the affect values that constitute the robot's emotions, based on emotional boundaries. The proposed method was examined by experiments on two facial robot simulators.


Detection of Face Expression Based on Deep Learning (딥러닝 기반의 얼굴영상에서 표정 검출에 관한 연구)

  • Won, Chulho;Lee, Bub-ki
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.8
    • /
    • pp.917-924
    • /
    • 2018
  • Recently, studies using LBP and SVM have been performed as image-based methods for facial emotion recognition. LBP, introduced by Ojala et al., is widely used in the field of image recognition due to its high discriminative power, robustness to illumination change, and simple computation. In addition, CS (Center-Symmetric)-LBP, a modified form of LBP, is widely used for face recognition. In this paper, we propose a method to detect four facial expressions, expressionless, happiness, surprise, and anger, using a deep neural network. The validity of the proposed method is verified using accuracy. Compared with the existing LBP feature parameters, the method using the deep neural network was confirmed to be superior to the method using the AdaBoost and SVM classifiers.
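The LBP and CS-LBP descriptors mentioned above can be computed for a single 3×3 patch as follows. This is a minimal sketch of the standard operators, not the paper's feature pipeline; full descriptors would histogram these codes over image regions.

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 LBP: threshold the 8 neighbours against the centre pixel,
    producing an 8-bit code."""
    c = patch[1, 1]
    # neighbours in clockwise order starting at the top-left corner
    n = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
         patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(v >= c) << i for i, v in enumerate(n))

def cs_lbp_code(patch, t=0.0):
    """CS-LBP: compare the 4 centre-symmetric neighbour pairs against a
    small threshold t, producing a 4-bit code (half the dimensionality)."""
    a = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2]]
    b = [patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int((x - y) > t) << i for i, (x, y) in enumerate(zip(a, b)))
```

On a uniform patch, every neighbour equals the centre, so `lbp_code` sets all 8 bits (255) while `cs_lbp_code` sets none (0), illustrating CS-LBP's robustness to flat regions.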

New Rectangle Feature Type Selection for Real-time Facial Expression Recognition (실시간 얼굴 표정 인식을 위한 새로운 사각 특징 형태 선택기법)

  • Kim Do Hyoung;An Kwang Ho;Chung Myung Jin;Jung Sung Uk
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.12 no.2
    • /
    • pp.130-137
    • /
    • 2006
  • In this paper, we propose a method of selecting new types of rectangle features that are suitable for facial expression recognition. The basic concept is similar to Viola's approach, which is used for face detection. Instead of the previous Haar-like features, we choose rectangle features for facial expression recognition among all possible rectangle types in a 3×3 matrix form using the AdaBoost algorithm. The facial expression recognition system built with the proposed rectangle features is also compared in capability to one built with the previous rectangle features. The simulation and experimental results show that the proposed approach performs better in facial expression recognition.
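A 3×3-cell rectangle feature of the kind described above can be evaluated in constant time with an integral image, as in Viola's framework. This sketch is illustrative: the sign patterns below are examples, whereas the paper selects the useful patterns automatically with AdaBoost.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row/left column so any
    rectangle sum is four lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Pixel sum of the h x w rectangle whose top-left corner is (r, c)."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def feature_3x3(ii, r, c, cell, signs):
    """A 3x3-cell rectangle feature: `signs` is a 3x3 grid of +1/-1/0
    weights over equal cells, generalising the two-cell Haar-like features."""
    total = 0.0
    for i in range(3):
        for j in range(3):
            total += signs[i][j] * rect_sum(ii, r + i * cell,
                                            c + j * cell, cell, cell)
    return total
```

AdaBoost would then treat each (position, cell size, sign pattern) triple as a weak classifier and keep the patterns that best separate the facial expression classes.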