• Title/Summary/Keyword: Dimensional emotion


Emotional Model via Human Psychological Test and Its Application to Image Retrieval (인간심리를 이용한 감성 모델과 영상검색에의 적용)

  • Yoo, Hun-Woo;Jang, Dong-Sik
    • Journal of Korean Institute of Industrial Engineers / v.31 no.1 / pp.68-78 / 2005
  • A new emotion-based image retrieval method is proposed in this paper. The research was motivated by Soen's evaluation of human emotion on color patterns. Thirteen adjective pairs expressing emotions, namely like-dislike, beautiful-ugly, natural-unnatural, dynamic-static, warm-cold, gay-sober, cheerful-dismal, unstable-stable, light-dark, strong-weak, gaudy-plain, hard-soft, and heavy-light, are modeled offline by a 19-dimensional color array and a 4×3 gray matrix. Once a query is presented in text format, emotion-model-based query formulation produces the associated color array and gray matrix. Images related to the query are then retrieved from the database based on the multiplication of the color arrays and gray matrices extracted from the query and from each database image. Experiments over 450 images showed an average retrieval rate of 0.61 using the color array alone and 0.47 using the gray matrix alone.
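The matching step described above, comparing the query's emotion-derived color array against the color array of each database image, can be sketched as follows. This is a minimal illustration only: the dot-product score, the 19-bin assignments, and the image names are hypothetical stand-ins, not the authors' implementation.

```python
# Minimal sketch of emotion-based matching: the 19-dimensional color
# array associated with a query emotion is scored against the color
# array extracted from each database image. The plain dot product used
# here is a simplified stand-in for the paper's multiplication-based
# score; all arrays and names below are illustrative.

def match_score(query_array, image_array):
    """Inner product of two 19-dimensional color arrays."""
    return sum(q * i for q, i in zip(query_array, image_array))

def rank_images(query_array, database):
    """Return image ids sorted by descending match score."""
    scored = [(match_score(query_array, arr), img_id)
              for img_id, arr in database.items()]
    return [img_id for _, img_id in sorted(scored, reverse=True)]

# Toy example: a "warm" query against two hypothetical images.
warm_query = [0.0] * 19
warm_query[3] = 0.7   # hypothetical red bin
warm_query[4] = 0.3   # hypothetical orange bin

database = {
    "sunset.jpg": [0.0] * 3 + [0.6, 0.4] + [0.0] * 14,
    "glacier.jpg": [0.5, 0.5] + [0.0] * 17,
}
print(rank_images(warm_query, database))  # warm image ranks first
```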

Proposal of 2D Mood Model for Human-like Behaviors of Robot (로봇의 인간과 유사한 행동을 위한 2차원 무드 모델 제안)

  • Kim, Won-Hwa;Park, Jeong-Woo;Kim, Woo-Hyun;Lee, Won-Hyong;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.5 no.3 / pp.224-230 / 2010
  • As robots are no longer just laborers on industrial floors but are stepping into humans' daily lives, interaction and communication between humans and robots are becoming essential. For this social interaction with humans, emotion generation by a robot, the result of a very complicated process, has become necessary. In psychology, the concept of mood has been considered a factor that affects emotion generation; mood is similar to emotion but not the same. In this paper, mood factors for a robot, covering not only the robot's own condition but also its circumstances, are listed, chosen, and finally treated as elements defining a 2-dimensional mood space. Moreover, an architecture that combines the proposed mood model with an emotion generation module is given at the end.
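A 2-dimensional mood space of the kind described can be sketched as a point whose axes track the robot's own condition and its circumstances. The axis names, the clamped update rule, and the example deltas below are assumptions for illustration, not the paper's model.

```python
# Sketch of a 2D mood space: one axis for the robot's internal
# condition, one for its external circumstances. The factor names and
# the simple clamped-additive update are illustrative assumptions.

def clamp(v, lo=-1.0, hi=1.0):
    return max(lo, min(hi, v))

class Mood2D:
    def __init__(self):
        self.internal = 0.0   # e.g. battery level, temperature
        self.external = 0.0   # e.g. ambient brightness, noise level

    def update(self, internal_delta, external_delta):
        self.internal = clamp(self.internal + internal_delta)
        self.external = clamp(self.external + external_delta)

    def as_point(self):
        return (self.internal, self.external)

mood = Mood2D()
mood.update(0.4, -0.2)   # good internal condition, slightly bad surroundings
mood.update(0.8, 0.1)    # internal axis saturates at the upper bound
print(mood.as_point())
```

An emotion generation module, as in the proposed architecture, could then read this point as a bias on which emotions are likely to be generated.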

Research on GUI(Graphic User Interaction) factors of touch phone by two dimensional emotion model for Grooming users (Grooming 사용자의 2차원 감성 모델링에 의한 터치폰의 GUI 요소에 대한 연구)

  • Kim, Ji-Hye;Hwang, Min-Cheol;Kim, Jong-Hwa;U, Jin-Cheol;Kim, Chi-Jung;Kim, Yong-U;Park, Yeong-Chung;Jeong, Gwang-Mo
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2009.05a / pp.55-58 / 2009
  • This study aims to objectively define subjective user emotion and to present design guidelines for the GUI design elements of a touch phone based on a two-dimensional emotion model. The study proceeded in the following steps. First, the lifestyles of Grooming users were surveyed, and emotional elements at the three levels proposed by Norman (2002), namely sensory, behavioral, and symbolic, were extracted. Second, an emotion model was built by surveying the relationship between Russell's (1980) 28 emotion words and these three levels of emotion. Finally, representative emotion words were derived through factor analysis, and GUI (Graphic User Interaction) design elements for an emotional touch phone are presented, yielding guidelines for human-centered product design that reflects user emotion.


Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik;Bang, Junseong
    • Journal of information and communication convergence engineering / v.19 no.3 / pp.148-154 / 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among these attempts, Speech Emotion Recognition (SER) is a method of recognizing the speaker's emotions through speech information. SER succeeds when distinctive features are selected and classified in an appropriate way. In this paper, the performance of SER using neural network models (e.g., a fully connected network (FCN) and a convolutional neural network (CNN)) with Mel-Frequency Cepstral Coefficients (MFCC) is examined in terms of the accuracy and distribution of emotion recognition. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) dataset, after tuning model parameters, a two-dimensional Convolutional Neural Network (2D-CNN) with MFCC showed the best performance, with an average accuracy of 88.54% for five emotions (anger, happiness, calm, fear, and sadness) of men and women. In addition, examining the distribution of emotion recognition accuracies across neural network models shows that the 2D-CNN with MFCC can be expected to reach an overall accuracy of 75% or more.
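A 2D-CNN treats the MFCC representation of speech as an image: a (coefficients × frames) matrix scanned by small convolution kernels. The sketch below shows only that core 2D convolution over a toy MFCC-shaped matrix in plain Python; the kernel and matrix values are illustrative, not the paper's trained model.

```python
# Sketch of the 2D view a CNN takes of speech: MFCCs form a
# (coefficients x frames) matrix convolved like an image.

def conv2d(matrix, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in CNNs)."""
    mh, mw = len(matrix), len(matrix[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(mh - kh + 1):
        row = []
        for c in range(mw - kw + 1):
            row.append(sum(matrix[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

# A toy 13x20 "MFCC matrix": 13 coefficients over 20 time frames.
mfcc = [[(r * 20 + c) % 7 for c in range(20)] for r in range(13)]
edge_kernel = [[1, 0, -1]] * 3   # crude horizontal-gradient detector

feature_map = conv2d(mfcc, edge_kernel)
print(len(feature_map), len(feature_map[0]))  # 11 18
```

In a full SER model, stacks of such feature maps would pass through pooling and dense layers before a softmax over the emotion classes.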

A Study on the Variation of Music Characteristics based on User Controlled Music Emotion (음악 감성의 사용자 조절에 따른 음악의 특성 변형에 관한 연구)

  • Nguyen, Van Loi;Xubin, Xubin;Kim, Donglim;Lim, Younghwan
    • The Journal of the Korea Contents Association / v.17 no.3 / pp.421-430 / 2017
  • In this paper, research results on changing music emotion are described. Our goal was to provide a method by which a human user can change music emotion, and then to find a way of transforming the original music into music whose emotion matches the changed emotion. For that purpose, a method of changing the emotion of playing music on a two-dimensional plane was described; the original music should then be transformed into music whose emotion equals the changed emotion. As the first step, a method of deciding which music factors to change, and by how much, was presented. Finally, the experimental method of editing with a sound editor to change the emotion was described. There are many research results on recognizing music emotion, but attempts to change music emotion are very rare, so this paper may open another avenue of research in the music emotion field.
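The first step described, deciding which music factors to change and by how much when the emotion point moves on the 2D plane, can be sketched with a simple linear mapping. The coefficients and the factor choices (tempo, volume, mode) are illustrative assumptions, not the paper's rule.

```python
# Sketch of translating a move on a (valence, arousal) emotion plane
# into music-factor adjustments. The linear coefficients and factor
# names are illustrative assumptions only.

def factor_changes(valence_delta, arousal_delta):
    """Translate an emotion-plane move into music-factor adjustments."""
    return {
        # arousal is commonly linked to tempo and loudness
        "tempo_bpm_delta": round(40 * arousal_delta),
        "volume_db_delta": round(6 * arousal_delta, 1),
        # valence is commonly linked to mode (major vs. minor tendency)
        "mode_shift": "toward major" if valence_delta > 0 else "toward minor",
    }

# A user drags the emotion point: a bit happier, noticeably calmer.
print(factor_changes(0.5, -0.25))
```

The resulting adjustments would then be applied with a sound editor, as in the paper's experiments.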

Research on Designing Korean Emotional Dictionary using Intelligent Natural Language Crawling System in SNS (SNS대상의 지능형 자연어 수집, 처리 시스템 구현을 통한 한국형 감성사전 구축에 관한 연구)

  • Lee, Jong-Hwa
    • The Journal of Information Systems / v.29 no.3 / pp.237-251 / 2020
  • Purpose: This research studied a hierarchical Hangul emotion index by organizing the emotions that SNS users express. In the researcher's preliminary study, the English-based emotional standard of Plutchik (1980) was reinterpreted in Korean, and hashtags with implicit meaning on SNS were studied. To build a multidimensional emotion dictionary and classify three-dimensional emotions, emotion seeds were selected to compose seven emotion sets, and an emotion word dictionary was constructed by collecting the SNS hashtags derived from each emotion seed. We also explore the priority of each Hangul emotion index. Design/methodology/approach: In transforming sentences into matrices through word vectorization, weights were extracted using TF-IDF (Term Frequency-Inverse Document Frequency), and the NMF (Nonnegative Matrix Factorization) algorithm was used as the dimension-reduction technique for the matrix of each emotion set. Emotion dimensions were resolved using the characteristic values of the emotion words. The cosine distance algorithm was used to measure the distance between vectors, giving the similarity of emotion words within an emotion set. Findings: Customer needs analysis is a means of reading changes in emotion, and research on Korean emotion words addresses that need. In addition, the ranking of emotion words within an emotion set provides a useful criterion for reading the depth of an emotion. By providing companies with effective information for emotional marketing, this sentiment-index study can expand and add value to new business opportunities. Furthermore, if the emotion dictionary is eventually connected to the emotional DNA of a product, it will be possible to define an "emotional DNA": the set of emotions that the product should evoke.
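Two of the building blocks named above, TF-IDF weighting and cosine similarity between the resulting word vectors, can be sketched in a few lines. The tiny corpus is illustrative, not the SNS hashtag data, and this omits the NMF dimension-reduction step.

```python
import math

# Sketch of TF-IDF weighting over a toy corpus and cosine similarity
# between the resulting sparse vectors, two steps named in the paper's
# pipeline. The documents below are illustrative only.

def tf_idf(corpus):
    """corpus: list of token lists -> list of {token: weight} dicts."""
    n = len(corpus)
    df = {}
    for doc in corpus:
        for tok in set(doc):
            df[tok] = df.get(tok, 0) + 1
    vectors = []
    for doc in corpus:
        vec = {}
        for tok in set(doc):
            tf = doc.count(tok) / len(doc)
            idf = math.log(n / df[tok])
            vec[tok] = tf * idf
        vectors.append(vec)
    return vectors

def cosine_similarity(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [["joy", "smile", "happy"],
        ["joy", "laugh", "happy"],
        ["fear", "dark", "cold"]]
vecs = tf_idf(docs)
# Documents sharing emotion words score higher than unrelated ones.
print(cosine_similarity(vecs[0], vecs[1]) > cosine_similarity(vecs[0], vecs[2]))
```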

Emotion recognition modeling in considering physical and cognitive factors (물리적 인지적 상황을 고려한 감성 인식 모델링)

  • Song S.H.;Park H.H.;Ji Y.K.;Park J.H.;Park J.H.
    • Proceedings of the Korean Society of Precision Engineering Conference / 2005.06a / pp.1937-1943 / 2005
  • The technology of emotion recognition is a crucial factor in the ubiquitous-computing era because it enables various intelligent services for humans. This paper builds a system that recognizes human emotions based on a 2-dimensional model using two bio-signals, GSR and HRV. Since it is too difficult to model the human biological system analytically, a statistical method, the Hidden Markov Model (HMM), is used, which relies on transition probabilities among states and measurable observation variances. In experiments for each emotion, we obtained average recognition rates of 64% for the first HMM results and 55% for the second HMM results.
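HMM-based emotion classification of this kind is commonly done by scoring the observation sequence under one model per emotion and picking the most likely model. The sketch below implements the standard forward algorithm over discretized observations; all probabilities and the two-emotion setup are made up for illustration, not the paper's trained models.

```python
# Sketch of HMM-based classification: one small HMM per emotion scores
# a sequence of discretized GSR/HRV observations with the forward
# algorithm, and the highest-likelihood model wins. All numbers are
# illustrative assumptions.

def forward(obs, start, trans, emit):
    """Likelihood of obs under an HMM (forward algorithm)."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in range(n)) * emit[s][o]
                 for s in range(n)]
    return sum(alpha)

models = {
    "calm":    {"start": [0.8, 0.2],
                "trans": [[0.9, 0.1], [0.3, 0.7]],
                "emit":  [[0.7, 0.3], [0.4, 0.6]]},
    "aroused": {"start": [0.2, 0.8],
                "trans": [[0.6, 0.4], [0.1, 0.9]],
                "emit":  [[0.2, 0.8], [0.1, 0.9]]},
}

obs = [1, 1, 0, 1, 1]  # discretized bio-signal symbols
scores = {name: forward(obs, m["start"], m["trans"], m["emit"])
          for name, m in models.items()}
print(max(scores, key=scores.get))  # aroused
```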


An Intelligent Emotion Recognition Model Using Facial and Bodily Expressions

  • Jae Kyeong Kim;Won Kuk Park;Il Young Choi
    • Asia pacific journal of information systems / v.27 no.1 / pp.38-53 / 2017
  • As sensor technologies and image processing technologies make collecting information on users' behavior easy, many researchers have examined automatic emotion recognition based on facial expressions, body expressions, and tone of voice, among others. Specifically, many studies have used normal cameras in the multimodal case combining facial and body expressions; such studies therefore used a limited amount of information, because normal cameras generally produce only two-dimensional images. In the present research, we propose an artificial neural network-based model using a high-definition webcam and Kinect to recognize users' emotions from facial and bodily expressions while they watch a movie trailer. We validate the proposed model in a naturally occurring field environment rather than in an artificially controlled laboratory environment. The results of this research will help broaden the use of emotion recognition models in advertisements, exhibitions, and interactive shows.

An EEG Study of Emotion Using the International Affective Picture System (국제정서사진체계 ( IAPS ) 를 사용하여 유발된 정서의 뇌파 연구)

  • 이임갑;김지은;이경화;손진훈
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 1997.11a / pp.224-227 / 1997
  • The International Affective Picture System (IAPS) developed by Lang and colleagues[1] is a widely adopted tool in studies relating a variety of physiological indices to subjective emotions induced by presenting standardized pictures whose subjective ratings are well established in the three dimensions of pleasure, arousal, and dominance. In the present study we investigated whether distinctive EEG characteristics can be discerned for six discrete emotions, using the 12 IAPS pictures that scored the highest subjective ratings for one of the six categorical emotions, i.e., happiness, sadness, fear, anger, disgust, and surprise (two slides per emotion). These pictures were presented as visual stimuli in random order to 38 right-handed college students (20-26 years old), with 30 sec of exposure time and 30 sec of inter-stimulus interval per picture, while EEG signals were recorded from F3, F4, O1, and O2 referenced to linked ears. The FFT technique was used to analyze the acquired EEG data. There were significant differences in the relative power (RP) changes of the EEG bands, most prominent in theta, between positive and negative emotions, and partially also among negative emotions. This result agrees with previous studies[2, 3]. However, further study is required to decide whether the IAPS can be a useful tool for categorical approaches to emotion in addition to its traditional use, namely dimensional approaches to emotion.
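The analysis step described, computing the relative power (RP) of an EEG band from an FFT, can be sketched as follows. A naive DFT in plain Python stands in for the FFT, and the synthetic one-second signal and sampling rate are illustrative assumptions.

```python
import math

# Sketch of the RP computation: transform an EEG segment to the
# frequency domain and take the share of power in the theta band
# (roughly 4-8 Hz). A naive DFT stands in for the FFT; the synthetic
# signal and 128 Hz sampling rate are illustrative.

def band_relative_power(signal, fs, lo, hi):
    """Share of spectral power falling in [lo, hi] Hz (naive DFT)."""
    n = len(signal)
    power = []
    for k in range(n // 2 + 1):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n)
                 for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n)
                  for t in range(n))
        power.append(re * re + im * im)
    total = sum(power[1:])  # skip the DC component
    band = sum(p for k, p in enumerate(power)
               if k > 0 and lo <= k * fs / n <= hi)
    return band / total if total else 0.0

fs = 128  # Hz
# One second of a dominant 6 Hz (theta) wave plus a weaker 20 Hz wave.
eeg = [math.sin(2 * math.pi * 6 * t / fs) +
       0.3 * math.sin(2 * math.pi * 20 * t / fs) for t in range(fs)]
print(band_relative_power(eeg, fs, 4, 8))  # dominated by the theta component
```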


The Weight Decision of Multi-dimensional Features using Fuzzy Similarity Relations and Emotion-Based Music Retrieval (퍼지 유사관계를 이용한 다차원 특징들의 가중치 결정과 감성기반 음악검색)

  • Lim, Jee-Hye;Lee, Joon-Whoan
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.5
    • /
    • pp.637-644
    • /
    • 2011
  • Now that music is digitalized, it can be easily purchased and delivered to users. However, it is still difficult to find music that fits one's taste using traditional music information search based on musician, genre, title, album title, and so on. To reduce this difficulty, content-based and emotion-based music retrieval have been proposed and developed. In this paper, we propose a new method to determine the importance of the MPEG-7 low-level audio descriptors, which are multi-dimensional vectors, for emotion-based music retrieval. We measured the mutual similarities of pieces of music representing a pair of emotions with opposite meanings in terms of each multi-dimensional descriptor. Rough approximation, and the inter- and intra-similarity ratios from the similarity relation, are then used to determine the importance of each descriptor. The set of weights based on these importances determines the aggregated similarity measure, by which emotion-based music retrieval can be achieved. The proposed method shows better results than the previous method in terms of the average number of satisfactory pieces in emotion-based retrieval experiments, compared with content-based search.
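The final aggregation step described, combining per-descriptor similarities with importance weights into one music-to-music similarity, can be sketched as a weighted average. The descriptor names and weight values below are illustrative assumptions, not MPEG-7 descriptor values or the paper's learned weights.

```python
# Sketch of the aggregation step: per-descriptor similarities between
# two pieces of music are combined with importance weights into one
# aggregated similarity. Names and numbers are illustrative only.

def aggregated_similarity(sims, weights):
    """Weighted average of per-descriptor similarities."""
    total = sum(weights.values())
    return sum(weights[d] * sims[d] for d in sims) / total

# Hypothetical importance weights for three descriptors.
weights = {"spectral_centroid": 0.5, "audio_power": 0.3, "tempo": 0.2}
# Hypothetical per-descriptor similarities between two pieces.
sims = {"spectral_centroid": 0.9, "audio_power": 0.4, "tempo": 0.7}

print(round(aggregated_similarity(sims, weights), 2))  # 0.71
```

Ranking all database pieces by this aggregated score against a query would yield the emotion-based retrieval result.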