• Title/Summary/Keyword: emotion engineering

Search Result 791

Emotional Robotics based on iT_Media

  • Yoon, Joong-Sun; Yoh, Myeung-Sook; Cho, Bong-Kug
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2004.08a / pp.804-809 / 2004
  • Intelligence is thought to be related to interaction rather than deep but passive thinking. Interactive tangible media, "iT_Media", is proposed to explore these issues, and personal robotics is a major area in which to investigate them. A new design methodology for personal and emotional robotics is proposed. The sciences of the artificial and of intelligence are surveyed: a short history of artificial intelligence is presented in terms of logic, heuristics, and mobility; a science of intelligence is presented in terms of imitation and understanding; and intelligence issues for robotics and intelligence measures are described. A design methodology for personal robots based on the science of emotion is investigated through three different aspects of design: visceral, behavioral, and reflective. We also discuss affect and emotion in robots, robots that sense emotion, robots that induce emotion in people, and the implications and ethical issues of emotional robots. Personal robotics for the elderly seems to be a major area in which to explore these ideas.

  • PDF

Emotion Recognition using Prosodic Feature Vector and Gaussian Mixture Model (운율 특성 벡터와 가우시안 혼합 모델을 이용한 감정인식)

  • Kwak, Hyun-Suk; Kim, Soo-Hyun; Kwak, Yoon-Keun
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2002.11b / pp.762-766 / 2002
  • This paper describes an emotion recognition algorithm using the HMM (Hidden Markov Model) method. The relationship between mechanical systems and humans has so far been one-way, which is why people are reluctant to grow familiar with today's multi-service robots. If the capability of emotion recognition is granted to a robot system, the conception of the machine will change considerably. Pitch and energy extracted from human speech are important factors for classifying each emotion (neutral, happy, sad, angry, etc.); these are called prosodic features. Among several methods, HMM is a powerful and effective theory for constructing a statistical model from a characteristic vector made up of a mixture of prosodic features.

  • PDF
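
The GMM step named in the title can be sketched as fitting one Gaussian mixture per emotion class to prosodic feature vectors (pitch, energy) and deciding by likelihood. The feature values and the scikit-learn usage below are illustrative assumptions of this sketch, not the paper's data or code:

```python
# Illustrative sketch: one GMM per emotion over synthetic prosodic
# features (pitch_mean_hz, energy_db); classify by log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic (pitch, energy) training vectors for two emotion classes.
train = {
    "neutral": rng.normal([120.0, 60.0], [10.0, 3.0], size=(200, 2)),
    "angry":   rng.normal([220.0, 75.0], [15.0, 4.0], size=(200, 2)),
}

# Fit a small Gaussian mixture model to each class.
models = {emo: GaussianMixture(n_components=2, random_state=0).fit(x)
          for emo, x in train.items()}

def classify(features):
    """Pick the emotion whose GMM gives the highest log-likelihood."""
    scores = {emo: m.score(np.atleast_2d(features)) for emo, m in models.items()}
    return max(scores, key=scores.get)

predicted = classify([125.0, 61.0])   # lands near the "neutral" cluster
```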

Noise Robust Emotion Recognition Feature: Frequency Range of Meaningful Signal (음성의 특정 주파수 범위를 이용한 잡음환경에서의 감정인식)

  • Kim, Eun-Ho; Hyun, Kyung-Hak; Kwak, Yoon-Keun
    • Journal of the Korean Society for Precision Engineering / v.23 no.5 s.182 / pp.68-76 / 2006
  • The ability to recognize human emotion is one of the hallmarks of human-robot interaction, and this paper describes a realization of emotion recognition. For emotion recognition from voice, we propose a new feature called the frequency range of meaningful signal. With this feature, we reached an average recognition rate of 76% in the speaker-dependent case, confirming the usefulness of the proposed feature. We also define a noise environment and conduct a noise-environment test; in contrast to other features, the proposed feature is robust in noisy conditions.
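
The core idea of an energy feature confined to a particular frequency range can be sketched as a band-energy ratio over an FFT spectrum. The band limits and test signal below are illustrative assumptions, not the paper's values:

```python
# Illustrative sketch: fraction of spectral energy inside a chosen band.
import numpy as np

def band_energy_ratio(signal, sr, f_lo, f_hi):
    """Fraction of total spectral energy inside [f_lo, f_hi] Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[band].sum() / spectrum.sum()

# A pure 300 Hz tone concentrates its energy in a band around 300 Hz.
sr = 8000
t = np.arange(sr) / sr                     # one second of samples
tone = np.sin(2 * np.pi * 300 * t)
ratio = band_energy_ratio(tone, sr, 250, 350)
```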

Speech Emotion Recognition by Speech Signals on a Simulated Intelligent Robot (모의 지능로봇에서 음성신호에 의한 감정인식)

  • Jang, Kwang-Dong; Kwon, Oh-Wook
    • Proceedings of the KSPS conference / 2005.11a / pp.163-166 / 2005
  • We propose a speech emotion recognition method for a natural human-robot interface. In the proposed method, emotion is classified into six classes: angry, bored, happy, neutral, sad, and surprised. Features for an input utterance are extracted from statistics of phonetic and prosodic information. Phonetic information includes log energy, shimmer, formant frequencies, and Teager energy; prosodic information includes pitch, jitter, duration, and rate of speech. Finally, a pattern classifier based on Gaussian-kernel support vector machines decides the emotion class of the utterance. We recorded speech commands and dialogs uttered 2 m away from microphones in 5 different directions. Experimental results show that the proposed method yields 59% classification accuracy while human classifiers achieve about 50% accuracy, which confirms that the proposed method performs comparably to a human.

  • PDF
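
The final classification step, a Gaussian-kernel SVM over utterance-level statistics, can be sketched as follows. The four-dimensional synthetic features stand in for the pitch/energy statistics the abstract lists; the scikit-learn API usage is an assumption of this sketch:

```python
# Illustrative sketch: RBF (Gaussian kernel) SVM on synthetic
# utterance-level statistics vectors for two emotion classes.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Two well-separated synthetic classes in a 4-D "statistics" space.
X_happy = rng.normal(1.0, 0.3, size=(100, 4))
X_sad = rng.normal(-1.0, 0.3, size=(100, 4))
X = np.vstack([X_happy, X_sad])
y = ["happy"] * 100 + ["sad"] * 100

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)   # Gaussian kernel SVM
pred = clf.predict([[1.1, 0.9, 1.0, 1.2]])[0]      # near the "happy" cluster
```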

Recognition of Emotion Interaction by Measuring Social Distance in Real Time (실시간 Social Distance 추적에 의한 감성 상호작용 인식 기술 개발)

  • Lee, Hyunwoo; Woo, Jincheol; Cho, Ayoung; Jo, Youngho; Whang, Mincheol
    • Science of Emotion and Sensibility / v.20 no.3 / pp.89-96 / 2017
  • This study developed a method that recognizes emotional interaction from the social distance measured by a beacon wearable device. The recognized interaction was evaluated by comparison with cardiovascular synchrony from photoplethysmogram (PPG) signals. An interaction was recognized when social distance was maintained for a certain period of time. Cardiovascular synchrony was estimated by correlation analysis between the beat-per-minute (BPM) series derived from PPG. The required maintenance time was determined by a Mann-Whitney U test between the cardiovascular synchrony of interaction and non-interaction groups. Fifteen groups (two persons per group) participated in the experiment and were asked to wear the beacon and PPG wearable devices in daily life. Experimental results showed that the interaction groups had higher cardiovascular synchrony than the non-interaction groups, and the significant interaction time was determined to be 11 seconds (p = .045). Consequently, real-time measurement and evaluation of social networks in real space is expected to improve.
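
The synchrony analysis described above, correlation between BPM series followed by a Mann-Whitney U test between groups, can be sketched on synthetic data (the BPM values and group sizes below are illustrative, not the study's recordings):

```python
# Illustrative sketch: cardiovascular synchrony as the Pearson
# correlation of two BPM series, then a Mann-Whitney U test comparing
# interaction vs. non-interaction synchrony scores.
import numpy as np
from scipy.stats import pearsonr, mannwhitneyu

rng = np.random.default_rng(2)

base = rng.normal(70, 5, size=60)            # one person's BPM samples
partner_sync = base + rng.normal(0, 1, 60)   # interacting partner tracks it
partner_rand = rng.normal(70, 5, size=60)    # unrelated person's BPM

r_sync = pearsonr(base, partner_sync)[0]
r_rand = pearsonr(base, partner_rand)[0]

# Compare synchrony scores across 15 simulated pairs per group.
sync_scores = [pearsonr(s, s + rng.normal(0, 1, 60))[0]
               for s in rng.normal(70, 5, size=(15, 60))]
rand_scores = [pearsonr(rng.normal(70, 5, 60), rng.normal(70, 5, 60))[0]
               for _ in range(15)]
u_stat, p_value = mannwhitneyu(sync_scores, rand_scores, alternative="greater")
```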

Emotion Recognition Based on Frequency Analysis of Speech Signal

  • Sim, Kwee-Bo; Park, Chang-Hyun; Lee, Dong-Wook; Joo, Young-Hoon
    • International Journal of Fuzzy Logic and Intelligent Systems / v.2 no.2 / pp.122-126 / 2002
  • In this study, as fundamental research on emotion recognition, we identify features of three emotions (happiness, anger, surprise). A speech signal carrying emotion has several elements: voice quality, pitch, formants, speech speed, and so on. Until now, most researchers have used changes in pitch, the short-time average power envelope, or Mel-based speech power coefficients. Pitch is a very efficient and informative feature, so we use it in this study. Because pitch is very sensitive to subtle emotion, it changes easily whenever a person is in a different emotional state; we can thus observe whether the pitch changes steeply, changes with a gentle slope, or does not change. This paper also extracts formant features from emotional speech. For each vowel, the formants lie in similar positions without large differences. Based on this fact, in the pleasure case we extract features of laughter, and we separate out laughing to simplify the task; we find corresponding features for anger and surprise.
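
Pitch extraction of the kind this analysis relies on can be sketched with a simple autocorrelation estimator; the frame length, search range, and test tone below are illustrative assumptions of this sketch:

```python
# Illustrative sketch: F0 (pitch) estimation for one frame by finding
# the strongest autocorrelation peak in a plausible lag range.
import numpy as np

def estimate_pitch(frame, sr, f_min=60, f_max=400):
    """Return an autocorrelation-based F0 estimate in Hz for one frame."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / f_max), int(sr / f_min)
    lag = lo + np.argmax(ac[lo:hi])    # strongest peak in the F0 range
    return sr / lag

sr = 16000
t = np.arange(int(0.05 * sr)) / sr     # a 50 ms analysis frame
frame = np.sin(2 * np.pi * 200 * t)    # a 200 Hz "voiced" test tone
f0 = estimate_pitch(frame, sr)
```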

Gesture-Based Emotion Recognition by 3D-CNN and LSTM with Keyframes Selection

  • Ly, Son Thai; Lee, Guee-Sang; Kim, Soo-Hyung; Yang, Hyung-Jeong
    • International Journal of Contents / v.15 no.4 / pp.59-64 / 2019
  • In recent years, emotion recognition has been an interesting and challenging topic. Compared to facial expressions and the speech modality, gesture-based emotion recognition has not received much attention, with only a few efforts using traditional hand-crafted methods. These approaches incur major computational costs and offer few opportunities for improvement now that most of the research community bases its work on deep learning techniques. In this paper, we propose an end-to-end deep learning approach for classifying emotions from bodily gestures. In particular, informative keyframes are first extracted from raw videos as input for a 3D-CNN deep network. The 3D-CNN exploits the short-term spatiotemporal information of gesture features from the selected keyframes, and convolutional LSTM networks learn long-term features from the 3D-CNN's outputs. The experimental results on the FABO dataset exceed those of most traditional methods and achieve state-of-the-art results among deep learning techniques for gesture-based emotion recognition.
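
A minimal keyframe-selection rule in the spirit of the pipeline above, keeping the frames with the largest inter-frame pixel difference before clips reach the 3D-CNN, might look like this (this specific rule is an assumption of the sketch, not the paper's exact method):

```python
# Illustrative sketch: pick k "informative" frames by motion energy,
# measured as the mean absolute difference between consecutive frames.
import numpy as np

def select_keyframes(video, k):
    """video: (T, H, W) array. Return sorted indices of k keyframes."""
    diffs = np.abs(np.diff(video.astype(float), axis=0)).mean(axis=(1, 2))
    # Frame 0 is always kept; rank the remaining frames by motion energy.
    top = np.argsort(diffs)[::-1][: k - 1] + 1
    return sorted([0, *top.tolist()])

video = np.zeros((10, 8, 8))
video[4] = 1.0                      # a sudden change around frame 4
keys = select_keyframes(video, 3)   # picks frame 0 plus the change points
```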

Emotion Classification Using EEG Spectrum Analysis and Bayesian Approach (뇌파 스펙트럼 분석과 베이지안 접근법을 이용한 정서 분류)

  • Chung, Seong Youb; Yoon, Hyun Joong
    • Journal of Korean Society of Industrial and Systems Engineering / v.37 no.1 / pp.1-8 / 2014
  • This paper proposes an emotion classifier for EEG signals based on Bayes' theorem and machine learning using a perceptron convergence algorithm. Emotions are represented on the valence and arousal dimensions. Fast Fourier transform spectrum analysis is used to extract features from the EEG signals. To verify the proposed method, we use an open database for emotion analysis using physiological signals (DEAP) and compare with C-SVC, one of the support vector machine variants. Emotion is defined as a two-level class and a three-level class in both the valence and arousal dimensions. For the two-level case, the accuracy of valence and arousal estimation is 67% and 66%, respectively; for the three-level case, 53% and 51%. Compared with the best case of C-SVC, the proposed classifier gives 4% and 8% more accurate estimates of valence and arousal for the two-level classes, and shows performance similar to the best case of C-SVC in three-level estimation.
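
The two stages the abstract names, FFT band-power feature extraction and a decision via Bayes' theorem, can be sketched as below; the band limits, class means, and priors are illustrative assumptions, not values from the DEAP experiments:

```python
# Illustrative sketch: alpha-band power from an EEG epoch, then a
# Bayes' theorem posterior with 1-D Gaussian class likelihoods.
import numpy as np

def band_power(epoch, sr, f_lo, f_hi):
    """Mean spectral power of one epoch inside [f_lo, f_hi] Hz."""
    psd = np.abs(np.fft.rfft(epoch)) ** 2 / len(epoch)
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / sr)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

def posterior_high_valence(x, mu=(2.0, 5.0), sigma=(1.0, 1.0), prior=0.5):
    """P(high valence | x) via Bayes' rule over two Gaussian classes."""
    like = [np.exp(-0.5 * ((x - m) / s) ** 2) / s for m, s in zip(mu, sigma)]
    return prior * like[1] / (prior * like[0] + prior * like[1])

sr = 128
t = np.arange(sr * 2) / sr                 # a 2-second EEG epoch
epoch = np.sin(2 * np.pi * 10 * t)         # strong 10 Hz (alpha) rhythm
alpha = band_power(epoch, sr, 8, 13)
theta = band_power(epoch, sr, 4, 7)
p_high = posterior_high_valence(4.5)       # feature close to the "high" mean
```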

Effects of Design Detail Types of Ladies Wear on Sensibility and Emotion (여성복 디테일 종류에 따른 감성과 상대적 영향력)

  • Jung, Kyung-Yong; Na, Young-Joo
    • Fashion & Textile Research Journal / v.7 no.2 / pp.162-168 / 2005
  • Pictures of design details, such as collar type, sleeve type, skirt type, skirt length, and color tone, were evaluated by 377 persons in terms of sensibility and emotion. The data were analyzed in SPSS using ANOVA and factor analysis to find the detail types that most affect consumers' sensibility and emotion. Skirt length is the most influential detail for the sensibility and emotion of women's dress; the second most influential differs by concept. Sensibility and emotion comprised three concepts: contemporary, mature, and character. Sleeve type is the second determinant of the contemporary concept, color tone of the mature concept, and collar type of the character concept. Each of the 41 design details was positioned in a 3-D concept space to connect each detail type with a fashion concept of women's dress.
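
The ANOVA used in this analysis can be sketched with SciPy's one-way test; the rating values and skirt-length levels below are synthetic stand-ins, not the study's data:

```python
# Illustrative sketch: one-way ANOVA testing whether a detail type
# (here, a hypothetical skirt-length factor) shifts mean ratings.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(4)
# Synthetic sensibility ratings for three hypothetical skirt lengths.
mini = rng.normal(4.2, 0.5, size=30)
knee = rng.normal(3.5, 0.5, size=30)
maxi = rng.normal(2.8, 0.5, size=30)

f_stat, p_value = f_oneway(mini, knee, maxi)   # significant group effect
```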