• Title/Summary/Keyword: Emotion Computing


Impact Analysis of nonverbal multimodals for recognition of emotion expressed virtual humans (가상 인간의 감정 표현 인식을 위한 비언어적 다중모달 영향 분석)

  • Kim, Jin Ok
    • Journal of Internet Computing and Services
    • /
    • v.13 no.5
    • /
    • pp.9-19
    • /
    • 2012
  • A virtual human used for HCI in digital contents expresses various emotions across modalities such as facial expression and body posture. However, few studies have considered combinations of such nonverbal modalities in emotion perception. To implement an emotional virtual human, computational engine models must consider how a combination of nonverbal modalities, such as facial expression and body posture, will be perceived by users. This paper analyzes the impact of nonverbal multimodal cues on the design of emotion-expressing virtual humans. First, the relative impacts of the different modalities are analyzed by exploring emotion recognition of each modality for the virtual human. An experiment then evaluates the contribution of congruent facial and postural expressions to the recognition of basic emotion categories, as well as the valence and activation dimensions. Measurements are also carried out on the impact of incongruent multimodal expressions on the recognition of superposed emotions, which are known to be frequent in everyday life. Experimental results show that congruence between the virtual human's facial and postural expressions facilitates the perception of emotion categories, that categorical recognition is influenced mainly by the facial modality, and that the postural modality is preferred when judging the level of the activation dimension. These results will be used in the implementation of an animation engine and behavior synchronization for emotion-expressing virtual humans.

Perceived Controllability of the Ubiquitous Computing Devices as a Function of Design Familiarity and Complexity (유비쿼터스 컴퓨팅의 친밀감과 복잡성에 따른 사용자 통제감 지각 효과)

  • Lee, Ji-seon;Lee, Kyung-Soo;Lim, Seong-Joon;Sohn, Young-Woo
    • Science of Emotion and Sensibility
    • /
    • v.10 no.4
    • /
    • pp.555-569
    • /
    • 2007
  • The ubiquitous computing environment is a new setting currently being realized by modern technology. Since previous usability tests are not suitable for this new technological environment, new research on its psychological factors is needed. The evaluation of service scenarios will also be required in tandem with traditional usability tests of operating devices. Consequently, this study classified usability factors from a scenario perspective and focused on the psychological elements of controllability from a mechanical perspective. Study 1 reclassified the factors of existing usability tests according to similarity and application scenarios, yielding ten new groups of factors. Study 2 focused specifically on the design of a device and showed that the degree of familiarity with a ubiquitous computing device and its complexity produced differences in the perceived controllability of the device.


EEG Dimensional Reduction with Stack AutoEncoder for Emotional Recognition using LSTM/RNN (LSTM/RNN을 사용한 감정인식을 위한 스택 오토 인코더로 EEG 차원 감소)

  • Aliyu, Ibrahim;Lim, Chang-Gyoon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.15 no.4
    • /
    • pp.717-724
    • /
    • 2020
  • Because of the important role emotion plays in human interaction, affective computing seeks to understand and regulate emotion through human-aware artificial intelligence. With a better understanding of emotion, conditions associated with it, such as depression, autism, attention deficit hyperactivity disorder, and game addiction, can be better managed. Various studies on emotion recognition have been conducted to address these problems. When applying machine learning to emotion recognition, effort is required to reduce the complexity of the algorithm and improve accuracy. In this paper, we investigate emotion-related Electroencephalogram (EEG) feature reduction using a Stacked AutoEncoder (SAE) and classification using Long Short-Term Memory / Recurrent Neural Networks (LSTM/RNN). The proposed method reduced the complexity of the model and significantly enhanced the performance of the classifiers.
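
A stacked autoencoder compresses the EEG feature vector layer by layer before the recurrent classifier sees it. As a hedged illustration, not the authors' implementation, the sketch below trains a single linear autoencoder layer with plain gradient descent on synthetic data; the dimensions, learning rate, and data are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "EEG features": 200 samples in 16-D that really live on a 3-D subspace
latent = rng.normal(size=(200, 3))
X = latent @ rng.normal(size=(3, 16)) + 0.01 * rng.normal(size=(200, 16))

# One linear autoencoder layer (encode 16 -> 3, decode 3 -> 16) trained by
# gradient descent on reconstruction error; a *stacked* autoencoder repeats
# this, feeding each layer's codes into the next layer.
W_enc = rng.normal(scale=0.1, size=(16, 3))
W_dec = rng.normal(scale=0.1, size=(3, 16))
lr = 0.02
for _ in range(2000):
    Z = X @ W_enc                    # low-dimensional codes
    err = Z @ W_dec - X              # reconstruction error
    W_dec -= lr * (Z.T @ err) / len(X)
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)

codes = X @ W_enc  # compressed features that would feed the LSTM/RNN classifier
```

After training, `codes` carries most of the variance of `X` in 3 dimensions, which is the complexity reduction the abstract describes.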

A Study on the Automatic Monitoring System for the Contact Center Using Emotion Recognition and Keyword Spotting Method (감성인식과 핵심어인식 기술을 이용한 고객센터 자동 모니터링 시스템에 대한 연구)

  • Yoon, Won-Jung;Kim, Tae-Hong;Park, Kyu-Sik
    • Journal of Internet Computing and Services
    • /
    • v.13 no.3
    • /
    • pp.107-114
    • /
    • 2012
  • In this paper, we propose an automatic monitoring system for contact centers that manages customer complaints and agent service quality. The proposed system enables more accurate monitoring by applying neutral/anger emotion recognition and keyword spotting to the caller's voice, and it can provide professional consultation and intervention for customers who use abusive language, such as insults and sexual harassment. We developed a method for building a robust algorithm on a heterogeneous speech DB of many unspecified customers. Experimental results on real contact center speech data confirm stable and improved performance.
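
The pairing of emotion recognition with keyword spotting can be pictured as a simple flagging rule over recognized utterances. Everything below, the keyword list, the labels, and the `flag_call` helper, is a hypothetical illustration, not the paper's system:

```python
# Hypothetical rule: escalate an utterance when the speech-emotion classifier
# outputs "anger" AND a spotted keyword appears in the abusive-language list.
ABUSIVE = {"idiot", "stupid"}  # placeholder list, not from the paper

def flag_call(utterances):
    """utterances: list of (transcript, emotion) pairs from ASR + SER."""
    flags = []
    for text, emotion in utterances:
        words = set(text.lower().split())
        if emotion == "anger" and words & ABUSIVE:
            flags.append(text)
    return flags

calls = [("you idiot this is broken", "anger"),
         ("thanks for your help", "neutral")]
```

Here `flag_call(calls)` returns only the first utterance; combining both detectors reduces false alarms compared with either signal alone.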

A Study on Digital Clothing Design by Characteristics of Ubiquitous Environment (유비쿼터스 환경 특성에 의한 디지털 의류 디자인에 관한 연구)

  • Kim, Ji-Eon
    • Journal of the Korean Society of Costume
    • /
    • v.57 no.3 s.112
    • /
    • pp.23-36
    • /
    • 2007
  • In the 21st-century digital era, what matters is that ubiquitous technology changes the paradigm of thought rather than remaining a simple definition. Ubiquitous computing is pervasive, disappearing, invisible, and calm within its environment. As IT develops, designers, computer scientists, chemists, and performance artists cooperate, each contributing the strengths of their field, to find the best way to make desirable digital clothing for the future. Digital clothing is defined as a new generation of clothes equipped with computers and digital devices. Because high-technology equipment is worn close to the body, digital clothing design demands electromagnetic-wave shielding, light weight, and aesthetic appearance. The purpose of this study is to analyze the design features of digital clothing according to ubiquitous characteristics. The methods of this study are a documentary review of previous research and case studies. In the theoretical study, the ubiquitous characteristics identified are function intensiveness through convergence, interactivity, embedded mobility, and human- and emotion-oriented attributes. Based on these characteristics, digital clothing design is classified into function-intensive design by convergence, design for interactivity, and multi-sensory, emotion-oriented design, since embedded mobility is a basic element of the ubiquitous environment. Early digital clothing design was function-intensive; aesthetic appearance and design for interactivity have increased over time, and recent digital clothing design expresses multi-sensory and emotion-oriented design.

Statistical Model for Emotional Video Shot Characterization (비디오 셧의 감정 관련 특징에 대한 통계적 모델링)

  • 박현재;강행봉
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.12C
    • /
    • pp.1200-1208
    • /
    • 2003
  • Affective computing plays an important role in intelligent Human Computer Interaction (HCI). To detect emotional events, it is desirable to construct a computational model for extracting emotion-related features from video. In this paper, we propose a statistical model based on the probabilistic distribution of low-level features in video shots. The proposed method extracts low-level features from video shots and then forms a GMM (Gaussian Mixture Model) over them to detect emotional shots. As low-level features, we use color, camera motion, and the sequence of shot lengths. The features are modeled as a GMM using the EM (Expectation Maximization) algorithm, and the relations between time and emotions are estimated by MLE (Maximum Likelihood Estimation). Finally, the two statistical models are combined in a Bayesian framework to detect emotional events in video.
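
The core step, fitting a GMM to low-level features with EM, can be illustrated on a single one-dimensional feature such as shot length. This is a generic EM sketch on invented data, not the paper's implementation:

```python
import numpy as np

def fit_gmm_em(x, k=2, iters=50):
    """Fit a 1-D Gaussian mixture with EM (illustrative sketch)."""
    n = len(x)
    # Spread the initial means across the data so the components separate
    mu = np.percentile(x, np.linspace(25, 75, k))
    var = np.full(k, np.var(x))
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility r[i, j] proportional to pi_j * N(x_i | mu_j, var_j)
        d = (x[:, None] - mu[None, :]) ** 2
        p = pi * np.exp(-0.5 * d / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Synthetic "shot lengths" drawn from two clusters (e.g. fast cuts vs. long takes)
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(2.0, 0.3, 200), rng.normal(8.0, 0.5, 200)])
pi, mu, var = fit_gmm_em(x, k=2)
```

EM recovers the two component means near 2 and 8; in the paper's setting, the per-shot likelihood under such a mixture is what feeds the Bayesian combination step.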

An Exploratory Investigation on Visual Cues for Emotional Indexing of Image (이미지 감정색인을 위한 시각적 요인 분석에 관한 탐색적 연구)

  • Chung, SunYoung;Chung, EunKyung
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.48 no.1
    • /
    • pp.53-73
    • /
    • 2014
  • Given that emotion-based computing environments have grown recently, it is necessary to focus on emotional access to and use of multimedia resources, including images. This study aims to identify the visual cues for emotion in images. To achieve this, the study selected five basic emotions (love, happiness, sadness, fear, and anger) and interviewed twenty participants about the visual cues for each emotion. A total of 620 visual cues mentioned by participants were collected from the interviews and coded into five categories and 18 sub-categories of visual cues. Findings showed that facial expressions, actions/behaviors, and syntactic features were significant for perceiving a specific emotion in an image, and each emotion showed distinctive visual-cue characteristics. The emotion of love was highly related to cues such as actions and behaviors, while happiness was substantially related to facial expressions. Sadness was perceived primarily through actions and behaviors, and fear considerably through facial expressions. Anger was highly related to syntactic features such as lines, shapes, and sizes. These findings imply that emotional indexing can be effective when content-based features are considered in combination with concept-based features.

Similarity Evaluation of Popular Music based on Emotion and Structure of Lyrics (가사의 감정 분석과 구조 분석을 이용한 노래 간 유사도 측정)

  • Lee, Jaehwan;Lim, Hyewon;Kim, Hyoung-Joo
    • KIISE Transactions on Computing Practices
    • /
    • v.22 no.10
    • /
    • pp.479-487
    • /
    • 2016
  • With music streaming services, people can listen to almost every type of music without owning it; ironically, this makes it difficult to choose what to listen to. A music recommendation system helps people make that choice. However, existing recommendation systems have high computational complexity and do not consider context information. Emotion is one of the most important pieces of context information for music. Lyrics can be processed readily with various language processing techniques and can even be used to extract the emotion of the music itself. We suggest a song-level similarity evaluation method using the emotion and structure of lyrics. Our results show that it is important to consider semantic information when evaluating the similarity of music.
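
One minimal way to combine emotion and structure, purely illustrative and not the paper's method, is to represent each structural section of a song's lyrics by an emotion vector and average cosine similarity over matching sections. The section names and score vectors below are made up:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two emotion vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def song_similarity(song_a, song_b):
    """song_*: dict mapping section name -> emotion vector [joy, sadness, anger].
    Averages cosine similarity over the sections both songs share."""
    shared = song_a.keys() & song_b.keys()
    return sum(cosine(np.array(song_a[s]), np.array(song_b[s]))
               for s in shared) / len(shared)

# Hypothetical lyric-derived emotion scores per section
a = {"verse": [0.8, 0.1, 0.1], "chorus": [0.9, 0.05, 0.05]}   # happy song
b = {"verse": [0.7, 0.2, 0.1], "chorus": [0.85, 0.1, 0.05]}   # another happy song
c = {"verse": [0.1, 0.8, 0.1], "chorus": [0.1, 0.9, 0.0]}     # sad song
```

Under this sketch, `song_similarity(a, b)` exceeds `song_similarity(a, c)`, so the happy songs pair up, which is the kind of emotion-aware ranking the abstract argues for.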

A Study on Emotion Recognition of Chunk-Based Time Series Speech (청크 기반 시계열 음성의 감정 인식 연구)

  • Hyun-Sam Shin;Jun-Ki Hong;Sung-Chan Hong
    • Journal of Internet Computing and Services
    • /
    • v.24 no.2
    • /
    • pp.11-18
    • /
    • 2023
  • Recently, in the field of Speech Emotion Recognition (SER), many studies have sought to improve accuracy through voice features and modeling. In addition to modeling studies, various studies using voice features are being conducted. In this paper, voice files are separated into time intervals in a time-series manner, based on the observation that vocal emotion is related to the flow of time. After separating the voice files, we propose a model for classifying the emotions of speech data by extracting the features Mel spectrogram, Chroma, zero-crossing rate (ZCR), root mean square (RMS), and mel-frequency cepstral coefficients (MFCC) and applying them to recurrent neural network models used for sequential data processing. In the proposed method, voice features were extracted from all files using the 'librosa' library and applied to the neural network models. The experiment compared and analyzed the performance of recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) models on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) English dataset.
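
Two of the listed features, ZCR and RMS, are simple enough to compute directly, which makes the chunking idea easy to sketch without librosa. The chunk length and sample rate below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def chunk_features(signal, sr=16000, chunk_sec=0.5):
    """Split a waveform into fixed-length chunks and compute per-chunk
    ZCR and RMS (two of the listed features; Mel, Chroma, and MFCC would
    come from a library such as librosa)."""
    size = int(sr * chunk_sec)
    feats = []
    for start in range(0, len(signal) - size + 1, size):
        c = signal[start:start + size]
        zcr = np.mean(np.abs(np.diff(np.sign(c))) > 0)  # zero-crossing rate per sample
        rms = np.sqrt(np.mean(c ** 2))                  # root-mean-square energy
        feats.append((zcr, rms))
    return np.array(feats)  # shape (num_chunks, 2): a sequence for the RNN

# 2 seconds of a 440 Hz tone: ZCR near 2*440/sr, RMS near 1/sqrt(2)
t = np.arange(2 * 16000) / 16000
seq = chunk_features(np.sin(2 * np.pi * 440 * t))
```

Each row of `seq` is one time step, so the chunk sequence can be fed to an RNN/LSTM/GRU in order, which is the time-series framing the abstract describes.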

Crowd Psychological and Emotional Computing Based on PSMU Algorithm

  • Bei He
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.8
    • /
    • pp.2119-2136
    • /
    • 2024
  • The rapid progress of social media allows more people to express their feelings and opinions online. Much of the data on social media contains people's emotional information, which can be used for psychological analysis and emotion computation. This research is based on a simplified psychological scale algorithm with multi-theory integration and aims to analyze people's psychological emotions accurately. A comparative analysis of algorithm performance shows that the highest recall rate of the proposed algorithm is 95%, while the highest recall rates of the item response theory algorithm and the social network analysis algorithm are 68% and 87%, respectively. The speedup ratio of the algorithm was also analyzed against data volume: when 400,000 records are processed in a Hadoop cluster with 8 nodes, the maximum speedup ratio is 40%, and when the data volume is 8 GB, the maximum scale ratio with 8 nodes is 43%. Finally, we carried out an empirical analysis of the model that computes a population's psychological and emotional conditions, adopting the simplified psychological scale algorithm and taking multiple theories into account. We collected negative comments and expressions about Japan's discharge of radioactive water from microblogs and compared them with the trend derived by the model; the results were consistent. Thus, the research model achieves good results in the emotion classification of microblog comments.