• Title/Summary/Keyword: emotion engineering

Development of an Emotion Recognition Robot using a Vision Method (비전 방식을 이용한 감정인식 로봇 개발)

  • Shin, Young-Geun;Park, Sang-Sung;Kim, Jung-Nyun;Seo, Kwang-Kyu;Jang, Dong-Sik
    • IE interfaces / v.19 no.3 / pp.174-180 / 2006
  • This paper presents a robot system that recognizes a human's facial expression from a detected face and then reproduces the corresponding emotion. The face detection method is as follows. First, the RGB color space is converted to the CIELab color space. Second, skin-candidate regions are extracted. Third, a face is detected through the geometrical interrelations of facial features using a face filter. The positions of the eyes, nose, and mouth are then extracted as the preliminary data for expression recognition, and the changes in the eyebrows, eyes, and mouth are sent to a robot through serial communication. The robot drives its installed motors to display the human's expression. Experimental results on 10 persons show 78.15% accuracy.
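
As an illustration of the first two steps (RGB-to-CIELab conversion and skin-candidate extraction), here is a minimal Python sketch using OpenCV; the threshold ranges are illustrative assumptions, not the paper's calibrated values.

```python
# Hedged sketch of the first two steps described in the abstract: convert to
# CIELab and extract a skin-candidate mask. Thresholds are assumptions.
import cv2
import numpy as np

def skin_candidates(bgr_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of skin-candidate pixels in CIELab space."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2Lab)
    L, a, b = cv2.split(lab)
    # Skin tones tend to cluster at moderate lightness with positive a*
    # (toward red) and positive b* (toward yellow); in OpenCV's 8-bit Lab,
    # a and b are offset by 128. Exact bounds would need calibration.
    mask = (L > 40) & (a > 130) & (a < 170) & (b > 130) & (b < 180)
    return mask.astype(np.uint8) * 255

# Usage: mask = skin_candidates(cv2.imread("face.jpg"))
```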

VISCOSITY RESISTANCE CONTROL OF INTELLIGENT PROSTHETIC-LEGS

  • Hashimoto, Minoru;Ono, Kenji
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2000.04a / pp.328-329 / 2000
  • A viscosity resistance control method for intelligent prosthetic legs is studied using optimal control theory. The simulation results suggest that controlling the viscosity of the prosthetic knee joint within one period of walking is important for improving usability. In this paper we describe the modeling of the thigh prosthesis, the optimal control, and the simulation results.
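
As a rough illustration of phase-dependent knee viscosity, the sketch below simulates a single-link knee model whose damping coefficient varies over one gait period; the model, parameter values, and viscosity schedule are assumptions for demonstration, not the authors' optimal control formulation.

```python
# Minimal sketch: a prosthetic knee as a damped single-link joint,
# I*theta'' + b(t)*theta' + k*theta = 0, with b(t) a hypothetical
# viscosity schedule varied over one gait period.
import numpy as np

def simulate_knee(b_schedule, I=0.1, k=2.0, theta0=0.5, dt=1e-3, T=1.0):
    """Integrate the joint dynamics with explicit Euler; returns theta(t)."""
    n = int(T / dt)
    theta, omega = theta0, 0.0
    trace = np.empty(n)
    for i in range(n):
        b = b_schedule(i * dt)                  # phase-dependent viscosity
        alpha = -(b * omega + k * theta) / I    # angular acceleration
        omega += alpha * dt
        theta += omega * dt
        trace[i] = theta
    return trace

# Example: higher damping during an assumed swing phase (0.4-0.8 s of the cycle).
swing_trace = simulate_knee(lambda t: 1.5 if 0.4 <= t <= 0.8 else 0.3)
```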

COMPUTATIONAL MODELING OF KANSEI PROCESSES FOR HUMAN-CENTERED INFORMATION SYSTEMS

  • Kato, Toshikazu
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2002.05a / pp.3-8 / 2002
  • This paper introduces the basic concept of computational modeling of perception processes for multimedia data. Such processes are modeled as hierarchical inter- and intra-relationships among information in the physical, physiological, psychological, and cognitive layers of perception. Based on this framework, the paper gives algorithms for content-based retrieval in multimedia database systems.
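
A minimal sketch of retrieval over layered features, in the spirit of the paper's layered perception model; the layer names, weights, and cosine similarity are assumptions for demonstration, not the paper's algorithms.

```python
# Illustrative content-based retrieval: each item carries a feature vector
# per perception layer, and similarity is a weighted sum across layers.
import numpy as np

def layered_similarity(query, item, weights):
    """Weighted sum of per-layer cosine similarities."""
    score = 0.0
    for layer, w in weights.items():
        q, x = query[layer], item[layer]
        score += w * float(q @ x / (np.linalg.norm(q) * np.linalg.norm(x)))
    return score

weights = {"physical": 0.5, "psychological": 0.5}  # hypothetical layer weights
query = {k: np.random.rand(8) for k in weights}
items = [{k: np.random.rand(8) for k in weights} for _ in range(100)]
ranked = sorted(items, key=lambda it: layered_similarity(query, it, weights),
                reverse=True)
```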

A Study of Use of Body Motions and Body-weighted Values for Motion Display in Virtual Characters (신체 가중치를 이용한 동일 감정 표현의 몸동작 변형)

  • Lee, Chang-Sook;Jin, Da-Xing;Um, Ky-Hyun;Cho, Kyung-Eun
    • Journal of Korea Game Society / v.10 no.6 / pp.125-135 / 2010
  • Body motions based on body parts are commonly used to express emotions in virtual characters, and such characters are frequently employed in games. For this purpose, different animations must be created for each emotion a virtual character can show; consequently, a large number of animations is needed to cover the different gestures at each level of human emotion. In this paper, we propose a method for displaying gestures of varying intensity on the basis of the emotion level of a virtual character. In particular, the method can produce both passive and exaggerated expressions by adding weighted values to the frames that rotate the character's joints, so that the character shows different gestures depending on the level of emotion. To verify the effectiveness of the proposed method, we use the Emotional Animation Tool (EATool), with which body-weighted values can be applied to actual or virtual characters. After assigning different emotions to walking motions in the developed environment, we apply different body-weighted values depending on the level of each emotion. The results of a comparative test reveal that a given walking motion varies with the level of emotion.
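
A minimal sketch of the weighting idea: scaling each joint's deviation from a neutral pose by a per-part weight and an emotion level; the joint set, the neutral pose, and the linear scaling are illustrative assumptions, not EATool's actual implementation.

```python
# Hedged sketch: exaggerate or damp an animation per body part by scaling
# each joint's rotation away from a neutral pose.
import numpy as np

def weight_motion(frames, neutral, part_weights, emotion_level):
    """frames: (n_frames, n_joints) rotation angles; returns modified frames."""
    w = np.array(part_weights) * emotion_level   # per-joint scale factors
    return neutral + (frames - neutral) * w      # amplify or reduce motion

n_frames, n_joints = 60, 4                       # hypothetical: spine, arms, head
walk = np.random.rand(n_frames, n_joints)        # stand-in walking animation
neutral = walk.mean(axis=0)                      # assumed neutral pose
sad_walk = weight_motion(walk, neutral, [0.6, 0.4, 0.4, 0.8], emotion_level=0.5)
```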

A Study on the Performance of Music Retrieval Based on the Emotion Recognition (감정 인식을 통한 음악 검색 성능 분석)

  • Seo, Jin Soo
    • The Journal of the Acoustical Society of Korea / v.34 no.3 / pp.247-255 / 2015
  • This paper presents a study of the performance of music search based on automatically recognized music-emotion labels. As with other media such as speech, images, and video, a song can evoke certain emotions in its listeners, and when people look for songs to listen to, the emotions evoked by songs can be an important consideration. However, very little study has been done on the contribution of music-emotion labels to music search. In this paper, we utilize three axes of human music perception (valence, activity, tension) and five basic emotion labels (happiness, sadness, tenderness, anger, fear) in measuring music similarity for search. Experiments were conducted on both genre and singer datasets. The search accuracy of the proposed emotion-based music search reached up to 75% of that of the conventional feature-based search, and combining the emotion-based method with the feature-based method improved search accuracy by up to 14%.
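
A minimal sketch of the two similarity measures being compared: distance in the (valence, activity, tension) emotion space and a conventional acoustic-feature distance, blended with a mixing weight; the distance metrics and the blend weight are assumptions for demonstration.

```python
# Hedged sketch of emotion-based vs. combined music similarity.
import numpy as np

def emotion_distance(song_a, song_b):
    """Euclidean distance between 3-D (valence, activity, tension) vectors."""
    return float(np.linalg.norm(song_a["emotion"] - song_b["emotion"]))

def combined_distance(song_a, song_b, alpha=0.3):
    """Blend emotion distance with an acoustic-feature distance (alpha assumed)."""
    feat = float(np.linalg.norm(song_a["features"] - song_b["features"]))
    return alpha * emotion_distance(song_a, song_b) + (1 - alpha) * feat

a = {"emotion": np.array([0.8, 0.6, 0.2]), "features": np.random.rand(20)}
b = {"emotion": np.array([0.1, 0.9, 0.7]), "features": np.random.rand(20)}
print(combined_distance(a, b))
```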

Emotion Recognition Method Using FLD and Staged Classification Based on Profile Data (프로파일기반의 FLD와 단계적 분류를 이용한 감성 인식 기법)

  • Kim, Jae-Hyup;Oh, Na-Rae;Jun, Gab-Song;Moon, Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI / v.48 no.6 / pp.35-46 / 2011
  • In this paper, we propose an emotion recognition method using a staged classification model and Fisher's linear discriminant (FLD). By organizing the classification into stages, the proposed method improves the classification rate on the high-complexity Fisher feature space. The staged model is built by successively combining binary classifiers that are structurally simple yet perform well. At each stage, an FLD is formed for two groups of emotion classes, and a binary classifier is generated on the resulting Fisher space using AdaBoost. The whole learning process is repeated until all emotion classes have been separated. In experiments, the proposed method achieves a classification rate of about 72% on 8 emotion classes and about 93% on a specific subset of 3 emotion classes.
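
A minimal sketch of one stage of the approach: computing a Fisher linear discriminant that separates two groups of emotion classes (the AdaBoost step on the projected space is omitted); the data shapes and the class grouping are illustrative assumptions.

```python
# Hedged sketch of a single-stage FLD split between two class groups.
import numpy as np

def fisher_direction(X1, X2):
    """Return the FLD projection vector w = Sw^-1 (m1 - m2), normalized."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw, m1 - m2)
    return w / np.linalg.norm(w)

# Two hypothetical groups of emotion classes, e.g. {happy, surprise} vs {sad, fear}.
g1 = np.random.randn(100, 10) + 1.0
g2 = np.random.randn(100, 10) - 1.0
w = fisher_direction(g1, g2)
scores = g1 @ w          # 1-D projections a binary classifier would consume
```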

EEG Dimensional Reduction with Stack AutoEncoder for Emotional Recognition using LSTM/RNN (LSTM/RNN을 사용한 감정인식을 위한 스택 오토 인코더로 EEG 차원 감소)

  • Aliyu, Ibrahim;Lim, Chang-Gyoon
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.4 / pp.717-724 / 2020
  • Because emotion plays an important role in human interaction, affective computing seeks to understand and regulate emotion through human-aware artificial intelligence. By understanding emotion, mental disorders associated with it, such as depression, autism, attention deficit hyperactivity disorder, and game addiction, can be better managed, and various emotion recognition studies have been conducted toward this goal. When applying machine learning to emotion recognition, effort is required both to reduce the complexity of the algorithm and to improve its accuracy. In this paper, we investigate Electroencephalogram (EEG) feature reduction using a Stacked AutoEncoder (SAE) and classification using Long Short-Term Memory/Recurrent Neural Networks (LSTM/RNN). The proposed method reduced the complexity of the model and significantly enhanced the performance of the classifier.
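
A minimal sketch of the pipeline in PyTorch: a stacked autoencoder compresses per-window EEG features and an LSTM classifies the resulting sequence; layer sizes, the window count, and the two-class output are assumptions, not the paper's configuration.

```python
# Hedged sketch: SAE for dimensionality reduction, LSTM for classification.
import torch
import torch.nn as nn

class StackedAE(nn.Module):
    """Two-layer encoder/decoder; trained separately with reconstruction loss."""
    def __init__(self, in_dim=160, hidden=64, code=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, code), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(code, hidden), nn.ReLU(),
                                     nn.Linear(hidden, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class EmotionLSTM(nn.Module):
    def __init__(self, code=16, hidden=32, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(code, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, z_seq):                 # (batch, time, code)
        out, _ = self.lstm(z_seq)
        return self.head(out[:, -1])          # classify from the last time step

sae, clf = StackedAE(), EmotionLSTM()
eeg = torch.randn(8, 20, 160)                 # batch of 20-window EEG recordings
_, z = sae(eeg)                               # encoder applied per window
logits = clf(z)                               # (8, 2) class scores
```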

Development of Scent Display and Its Authoring Tool

  • Kim, Jeong Do;Choi, Ji Hoon;Lim, Seung Ju;Park, Sung Dae;Kim, Jung Ju;Ahn, Chung Hyun
    • ETRI Journal / v.37 no.1 / pp.88-96 / 2015
  • The purpose of this study is to design an authoring tool and a corresponding device for an olfactory display that can augment immersion and realism in broadcasting services. The developed authoring tool, together with the developed scent display device, allows an olfactory display to be properly synchronized with an existing video service by applying the standardized ISO/IEC 23005 (MPEG-V) format. To propose a proper data format for the olfactory display, we analyzed both the multimodal combination and the cross-modality related to olfactory display, and from the results derived a set of emotion-related olfactory parameters: synchronization, scent intensity, scent persistence, and hedonic tone. These parameters should be controlled so that the olfactory display harmonizes with the existing media and augments emotion. In addition, we developed a scent display device that can generate many kinds of scents and satisfies the design conditions for these olfactory parameters in broadcasting services.
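
A minimal sketch of a data structure carrying the four emotion-related parameters identified in the study; the field names and the XML-like serialization are assumptions loosely inspired by MPEG-V-style sensory-effect metadata, not the actual ISO/IEC 23005 schema.

```python
# Hedged sketch of a scent-effect record with the four derived parameters.
from dataclasses import dataclass

@dataclass
class ScentEffect:
    start_ms: int          # synchronization point within the video
    scent_id: str          # which scent/cartridge to release (hypothetical)
    intensity: float       # 0.0-1.0 release strength
    persistence_ms: int    # how long the scent should linger
    hedonic_tone: float    # -1.0 (unpleasant) to +1.0 (pleasant)

    def to_meta(self) -> str:
        """Serialize to an illustrative XML-like metadata line."""
        return (f'<ScentEffect id="{self.scent_id}" start="{self.start_ms}" '
                f'intensity="{self.intensity}" duration="{self.persistence_ms}"/>')

effect = ScentEffect(12000, "lavender", 0.6, 3000, 0.8)
print(effect.to_meta())
```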