• Title/Summary/Keyword: Recognizing Emotion

The Recognition Method for Focus Level using ECG(electrocardiogram) (심전도를 이용한 집중도 인식 방법)

  • Lee, Dong Won;Park, Sangin;Whang, Mincheol
    • The Journal of the Korea Contents Association / v.18 no.2 / pp.370-377 / 2018
  • Focus level is an important mental state in user studies. Cardiac responses have been linked to focus, but the relationship remains poorly characterized. This study aimed to determine cardiac parameters for recognizing focus level. Sixty participants played a shooting game designed to elicit two focus levels while their electrocardiogram (ECG) was recorded. Time-domain and frequency-domain parameters were derived from the ECG. In an independent t-test, the time-domain indicators RRI, SDNN, rMSSD, and pNN50 differed significantly between focus levels, and the frequency-domain indicators LF, HF, lnLF, and lnHF were also significant. A rule base for recognition was built from the combination of RRI, rMSSD, and lnHF and verified on another sixty data samples, achieving a recognition accuracy of 95%. This study thus identifies significant cardiac indicators for recognizing focus level and provides an objective measure of focus for user-interaction design in the content industry and in service design.
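
The time-domain indicators named above are standard HRV statistics. A rough illustrative sketch (not the paper's code), assuming R-R intervals have already been extracted from the ECG in milliseconds:

```python
# Hypothetical sketch: the time-domain HRV features cited in the abstract,
# computed from a series of R-R intervals given in milliseconds.
import numpy as np

def time_domain_hrv(rr_ms: np.ndarray) -> dict:
    diff = np.diff(rr_ms)                        # successive R-R differences
    return {
        "RRI":   rr_ms.mean(),                   # mean R-R interval
        "SDNN":  rr_ms.std(ddof=1),              # SD of all N-N intervals
        "rMSSD": np.sqrt(np.mean(diff ** 2)),    # RMS of successive diffs
        "pNN50": np.mean(np.abs(diff) > 50) * 100,  # % of |diffs| > 50 ms
    }

print(time_domain_hrv(np.array([812.0, 845.0, 790.0, 860.0, 823.0])))
```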

A Robotic System with Behavioral Intervention facilitating Eye Contact and Facial Emotion Recognition of Children with Autism Spectrum Disorders (자폐 범주성 장애 아동의 눈맞춤과 얼굴표정읽기 기능향상을 위한 행동 중재용 로봇시스템)

  • Yun, Sang-Seok;Kim, Hyuksoo;Choi, JongSuk;Park, Sung-Kee
    • The Journal of Korea Robotics Society / v.10 no.2 / pp.61-69 / 2015
  • In this paper, we propose a robot-assisted behavioral intervention system and examine its feasibility for strengthening the positive responses of children with autism spectrum disorder (ASD) as they learn social skills. Based on well-known behavioral treatment protocols, the robot offers therapeutic training in eye contact and emotion reading through child-robot interaction, and, as a coping strategy, carries out pre-allocated meaningful acts by estimating the child's level of reactivity with reliable recognition modules. Furthermore, to save the therapist's labor and attract the children's interest, we implemented the robotic stimulation with semi-autonomous actions capable of inducing intimacy and tension in the children during instructional trials. With these configurations, by evaluating the system's ability to recognize human activity and by showing improved reactivity during social training, we verified that the proposed system has positive effects on the social development of high-functioning preschoolers.

Emotion Recognition Using Color and Pattern in Textile Images (컬러와 패턴을 이용한 텍스타일 영상에서의 감정인식 시스템)

  • Shin, Yun-Hee;Kim, Young-Rae;Kim, Eun-Yi
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.6 / pp.154-161 / 2008
  • In this paper, a novel method is proposed for recognizing the emotions conveyed by a textile using its color and pattern information. We represent emotions with the ten Kobayashi emotion keywords: {romantic, clear, natural, casual, elegant, chic, dynamic, classic, dandy, modern}. The proposed system consists of feature extraction and classification. To map subjective emotions onto physical visual features, we extract representative colors and patterns from each textile: the representative color prototypes are obtained by a color quantization method, and the patterns by a wavelet transform followed by statistical analysis. These extracted features are fed to neural network (NN)-based classifiers, each of which decides whether or not a textile conveys the corresponding emotion. The effectiveness of the proposed system was assessed on 389 textiles collected from various application domains such as interior design, fashion, and artificially generated sources; the results showed a precision of 100% and a recall of 99%, indicating that the method can be used in various textile industries.
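
The "representative colors by color quantization" step can be pictured as clustering pixel colors; a minimal sketch, assuming k-means with scikit-learn (the cluster count and image path are illustrative, not the paper's settings):

```python
# Hypothetical sketch of color quantization: k-means over RGB pixels yields
# representative color prototypes that could feed an NN-based classifier.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def representative_colors(image_path: str, n_colors: int = 10) -> np.ndarray:
    pixels = np.asarray(Image.open(image_path).convert("RGB")).reshape(-1, 3)
    km = KMeans(n_clusters=n_colors, n_init=10).fit(pixels.astype(float))
    return km.cluster_centers_            # one RGB prototype per cluster
```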

Performance Enhancement of Phoneme and Emotion Recognition by Multi-task Training of Common Neural Network (공용 신경망의 다중 학습을 통한 음소와 감정 인식의 성능 향상)

  • Kim, Jaewon;Park, Hochong
    • Journal of Broadcast Engineering / v.25 no.5 / pp.742-749 / 2020
  • This paper proposes a method for recognizing both phonemes and emotion with a single common neural network, together with a multi-task training method for that network. The common neural network performs the same function for both recognition tasks, mirroring the way humans recognize multiple kinds of information through a single auditory system. Multi-task training produces a feature model that applies to both kinds of information and yields more generalized training, improving performance by reducing the overfitting that occurs in the conventional approach of training a separate network for each kind of information. We also propose a method for further increasing phoneme recognition performance by weighting the phoneme task during multi-task training. Using the same feature vector and network architecture, we confirm that the proposed common network with multi-task training outperforms individual networks trained for each task.
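
The common network with weighted multi-task training can be pictured as one shared trunk with two classification heads whose losses are summed with a task weight. A hedged PyTorch sketch; the layer sizes, feature dimension, class counts, and phoneme weight are all illustrative assumptions:

```python
# Hypothetical sketch: a shared ("common") network with phoneme and emotion
# heads, trained with a weighted sum of the two cross-entropy losses.
import torch
import torch.nn as nn

class CommonNet(nn.Module):
    def __init__(self, n_feat=40, n_phonemes=40, n_emotions=4, w_phoneme=2.0):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(n_feat, 256), nn.ReLU(),
                                    nn.Linear(256, 256), nn.ReLU())
        self.phoneme_head = nn.Linear(256, n_phonemes)
        self.emotion_head = nn.Linear(256, n_emotions)
        self.w = w_phoneme                # extra weight on the phoneme task
        self.ce = nn.CrossEntropyLoss()

    def loss(self, x, y_phoneme, y_emotion):
        h = self.shared(x)                # feature modeling common to tasks
        return (self.w * self.ce(self.phoneme_head(h), y_phoneme)
                + self.ce(self.emotion_head(h), y_emotion))
```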

Emotion Classification of User's Utterance for a Dialogue System (대화 시스템을 위한 사용자 발화 문장의 감정 분류)

  • Kang, Sang-Woo;Park, Hong-Min;Seo, Jung-Yun
    • Korean Journal of Cognitive Science / v.21 no.4 / pp.459-480 / 2010
  • A dialogue system applies various morphological analyses to recognize a user's intention from the user's utterances. However, a user can also convey intentions through emotional states beyond what morphological expressions capture, so recognizing the user's emotion allows the intention to be analyzed from additional angles. This paper presents a new method for automatically recognizing a user's emotion in a dialogue system. We define nine general emotion categories using a psychological approach, and organize an optimal feature set combining sentential, a priori, and context features. We then employ a support vector machine (SVM), which has been widely used in various learning tasks, to classify the user's emotions automatically. Experimental results show that our method achieves an F-measure of 62.8%, 15% higher than the reference system.
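
The classification stage is a standard multi-class SVM over the combined feature set. A minimal runnable sketch with synthetic stand-in features (the nine classes follow the abstract; the linear kernel, feature size, and random data are assumptions):

```python
# Hypothetical sketch: SVM classification of utterance feature vectors into
# nine emotion categories, evaluated with a macro-averaged F-measure.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 50)), rng.integers(0, 9, 200)
X_test, y_test = rng.normal(size=(50, 50)), rng.integers(0, 9, 50)

clf = SVC(kernel="linear").fit(X_train, y_train)   # train on feature vectors
print(f1_score(y_test, clf.predict(X_test), average="macro"))
```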

Study on Heart Rate Variability and PSD Analysis of PPG Data for Emotion Recognition (감정 인식을 위한 PPG 데이터의 심박변이도 및 PSD 분석)

  • Choi, Jin-young;Kim, Hyung-shin
    • Journal of Digital Contents Society / v.19 no.1 / pp.103-112 / 2018
  • In this paper, we propose a method of recognizing emotions using a PPG sensor, which measures blood flow as it varies with emotional state. From the PPG signal, positive and negative emotions are distinguished in the frequency domain via the power spectral density (PSD). Based on James A. Russell's two-dimensional emotion model, we classify emotions as joy, sadness, irritability, or calmness and examine their association with the magnitude of energy in the frequency domain. It is significant that this study used the same kind of PPG sensor found in wearable devices to measure these four emotions in the frequency domain through experiments with video stimuli. Through questionnaires, we collected the accuracy, each participant's level of immersion, emotional changes, and biofeedback on the videos. The proposed method is expected to support developments such as commercial application services using PPG and mobile prediction services that merge the results with the context information already available on smartphones.
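
The PSD step can be sketched with Welch's method plus band-power integration over the conventional LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) HRV bands; the sampling rate and random placeholder signal are assumptions, not the paper's settings:

```python
# Hypothetical sketch: Welch PSD of an HRV series derived from PPG, then
# LF and HF band powers and their ratio.
import numpy as np
from scipy.signal import welch

def band_power(sig: np.ndarray, fs: float, lo: float, hi: float) -> float:
    f, psd = welch(sig, fs=fs, nperseg=min(len(sig), 256))
    mask = (f >= lo) & (f < hi)
    return np.trapz(psd[mask], f[mask])   # integrate PSD over the band

fs = 4.0                                  # evenly resampled HRV series (Hz)
sig = np.random.default_rng(1).normal(size=1024)  # placeholder signal
lf, hf = band_power(sig, fs, 0.04, 0.15), band_power(sig, fs, 0.15, 0.40)
print(lf / hf)                            # LF/HF ratio
```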

A Study on Intelligent Emotional Recommendation System Using Biological Information (생체정보를 이용한 지능형 감성 추천시스템에 관한 연구)

  • Kim, Tae-Yeun
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.14 no.3 / pp.215-222 / 2021
  • As human-computer interaction (HCI) technology grows in importance and HCI research progresses, attention has turned to inferring the user's emotion and having the computer respond to the user's intention, rather than responding only to standard user input. Stress is an unavoidable and complex product of modern civilization, and depending on whether it is controlled, a person's capacity for activity can change drastically. In this paper, as part of human-computer interaction, we propose an intelligent emotional recommendation system that uses music to relieve stress after measuring heart rate variability (HRV) and the acceleration photoplethysmogram (APG), both of which change under stress. A differential evolution algorithm was used to extract reliable data when acquiring and recognizing the user's biometric information, i.e., the stress index, and emotional inference was performed step by step through the Semantic Web based on the obtained stress index. In addition, by retrieving and recommending music lists matched to the stress index and changes in emotion, the emotional recommendation system was implemented as an application suited to the user's biometric information.
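
The abstract does not say how differential evolution extracts reliable data, so the sketch below shows only the general technique: SciPy's differential_evolution minimizing a robust loss over noisy stress-index samples. The loss function, bounds, and data are invented for illustration:

```python
# Hypothetical sketch: differential evolution fitting an outlier-resistant
# estimate of a stress index from noisy biometric samples.
import numpy as np
from scipy.optimize import differential_evolution

samples = np.array([42.0, 41.5, 43.2, 80.0, 42.8])   # raw stress indices

def robust_loss(theta):                   # Huber-like loss around theta[0]
    r = samples - theta[0]
    return np.sum(np.where(np.abs(r) < 5, 0.5 * r**2, 5 * np.abs(r) - 12.5))

result = differential_evolution(robust_loss, bounds=[(0.0, 100.0)])
print(result.x[0])                        # robust stress-index estimate
```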

Development of a Web Platform System for Worker Protection using EEG Emotion Classification (뇌파 기반 감정 분류를 활용한 작업자 보호를 위한 웹 플랫폼 시스템 개발)

  • Ssang-Hee Seo
    • Journal of Internet of Things and Convergence / v.9 no.6 / pp.37-44 / 2023
  • As a primary technology of Industry 4.0, human-robot collaboration (HRC) requires additional measures to ensure worker safety. Previous studies on avoiding collisions between collaborative robots and workers mainly detect collisions based on sensors and cameras attached to the robot. This method requires complex algorithms to continuously track robots, people, and objects and has the disadvantage of not being able to respond quickly to changes in the work environment. The present study was conducted to implement a web-based platform that manages collaborative robots by recognizing the emotions of workers - specifically their perception of danger - in the collaborative process. To this end, we developed a web-based application that collects and stores emotion-related brain waves via a wearable device; a deep-learning model that extracts and classifies the characteristics of neutral, positive, and negative emotions; and an Internet-of-things (IoT) interface program that controls motor operation according to classified emotions. We conducted a comparative analysis of our system's performance using a public open dataset and a dataset collected through actual measurement, achieving validation accuracies of 96.8% and 70.7%, respectively.
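
The deep-learning architecture is not given in the abstract, so the following is only a plausible stand-in: a small PyTorch classifier mapping pre-extracted EEG feature vectors to the three emotion classes named above (feature size and layers are assumptions):

```python
# Hypothetical sketch: three-class (neutral/positive/negative) classifier
# over pre-extracted EEG feature vectors.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(160, 128), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 3),                     # neutral / positive / negative
)
x = torch.randn(8, 160)                   # a batch of EEG feature vectors
print(model(x).argmax(dim=1))             # predicted class per sample
```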

Emotion Recognition System Using Neural Networks in Textile Images (신경망을 이용한 텍스타일 영상에서의 감성인식 시스템)

  • Kim, Na-Yeon;Shin, Yun-Hee;Kim, Soo-Jeong;Kim, Jee-In;Jeong, Karp-Joo;Koo, Hyun-Jin;Kim, Eun-Yi
    • Journal of KIISE: Software and Applications / v.34 no.9 / pp.869-879 / 2007
  • This paper proposes a neural network-based approach for automatically recognizing human emotion in textile images. To investigate the correlation between emotion and pattern, a survey was conducted with 20 people, which showed that emotion is deeply affected by pattern. Accordingly, a neural network-based classifier is used to recognize the pattern contained in a textile. In our system, two schemes are used to describe the pattern: a raw-pixel data extraction scheme using an auto-regressive method (RDES) and a wavelet-transformed data extraction scheme (WTDES). To assess its validity, the proposed method was applied to recognizing human emotions in 100 textiles; the results show that WTDES yields better performance than RDES, with RDES producing an accuracy of 71% and WTDES an accuracy of 90%. Although performance differs with the data extraction scheme, the proposed method achieves an accuracy of 80% on average. These results confirm that our system has the potential to be applied in areas such as the textile industry and e-business.
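
One plausible reading of the wavelet scheme (WTDES) is a 2-D wavelet decomposition followed by simple statistics per detail sub-band; the wavelet family, decomposition level, and statistics below are assumptions:

```python
# Hypothetical sketch: wavelet-transformed feature extraction from a
# gray-scale texture image, using per-sub-band mean and std as features.
import numpy as np
import pywt

def wtdes_features(gray: np.ndarray, wavelet: str = "db4", level: int = 2):
    coeffs = pywt.wavedec2(gray, wavelet, level=level)
    feats = []
    for detail in coeffs[1:]:             # detail coefficients per level
        for band in detail:               # horizontal / vertical / diagonal
            feats += [band.mean(), band.std()]
    return np.array(feats)

img = np.random.default_rng(2).random((128, 128))   # placeholder texture
print(wtdes_features(img).shape)
```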

Development of Facial Expression Recognition System based on Bayesian Network using FACS and AAM (FACS와 AAM을 이용한 Bayesian Network 기반 얼굴 표정 인식 시스템 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.4 / pp.562-567 / 2009
  • As a key mechanism of human emotional interaction, facial expression is a powerful tool in Human-Robot Interaction (HRI) and Human-Computer Interaction (HCI). By reading facial expressions, a system can produce reactions appropriate to the user's emotional state and infer which services a service agent, such as an intelligent robot, should supply. This article addresses expressive face modeling using an advanced Active Appearance Model (AAM) for facial emotion recognition. We consider the six universal emotion categories defined by Ekman. In the human face, emotions are most strongly expressed through the eyes and mouth, so recognizing emotion from a facial image requires extracting feature points such as Ekman's Action Units (AUs). The AAM is one of the most commonly used methods for facial feature extraction and can be applied to construct AUs. Because the traditional AAM depends on the initial parameter settings of the model, this paper introduces a facial emotion recognition method that combines an advanced AAM with a Bayesian Network. First, we obtain reconstructive parameters for a new gray-scale image by sample-based learning, use them to reconstruct the shape and texture of the image, and calculate the initial AAM parameters from the reconstructed facial model. We then reduce the distance error between the model and the target contour by adjusting the model parameters. After several iterations, the model matches the facial feature outline, and the fitted features are used to recognize the facial emotion with the Bayesian Network.
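
As a loose illustration of the final recognition step, here is a minimal naive-Bayes posterior (the simplest form of Bayesian network) over Ekman's six emotions given binary Action Unit detections; the conditional probability table is fabricated purely for illustration:

```python
# Hypothetical sketch: posterior over six emotions from binary AU detections
# via naive Bayes; the CPT below is random, for illustration only.
import numpy as np

emotions = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
prior = np.full(6, 1 / 6)                                  # uniform prior
p_au = np.random.default_rng(3).uniform(0.1, 0.9, (6, 4))  # P(AU_j=1 | e)

def posterior(au_obs: np.ndarray) -> np.ndarray:
    lik = np.prod(np.where(au_obs == 1, p_au, 1 - p_au), axis=1)
    post = prior * lik
    return post / post.sum()

print(dict(zip(emotions, posterior(np.array([1, 0, 1, 1])).round(3))))
```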