• Title/Summary/Keyword: emotional valence


Cortical Network Activated by Korean Traditional Opera (Pansori): A Functional MR Study

  • Kim, Yun-Hee;Kim, Hyun-Gi;Kim, Seong-Yong;Kim, Hyoung-Ihl;Todd B. Parrish;Hong, In-Ki;Sohn, Jin-Hun
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2000.04a / pp.113-119 / 2000
  • Pansori is a traditional Korean vocal music with a unique story and melody that transforms deep emotion into art. It has both verbal and emotional components, which can be coordinated by a large-scale neural network. The purpose of this study was to illustrate the cortical network activated by a Korean traditional opera, Pansori, with different emotional valence using functional MRI (fMRI). Nine right-handed volunteers participated. Their mean age was 25.3 years, and the mean modified Edinburgh score was +90.1. Activation tasks were designed for the subjects to passively listen to two parts of Pansori with sad or hilarious emotional valence. White noise was presented during the control periods. Imaging was conducted on a 1.5T Siemens Vision scanner. Single-shot echo-planar fMRI scans (TR/TE 3840/40 ms, flip angle 90°, FOV 220, 64 x 64 matrix, 6 mm thickness) were acquired in 20 contiguous slices. Imaging data were motion-corrected, coregistered, normalized, and smoothed using SPM-96 software. Bilateral posterior temporal regions were activated in both Pansori tasks, but the asymmetry differed between the tasks. The Pansori with sad emotion showed more activation in the right superior temporal region, as well as the right inferior frontal and orbitofrontal areas, than in the left side. In the Pansori with hilarious emotion, there was remarkable activation in the left hemisphere, especially in the posterior temporal and temporo-occipital regions, as well as in the left inferior frontal and prefrontal areas. After subtraction between the two tasks, the sad Pansori showed more activation in the right temporoparietal and orbitofrontal areas; in contrast, the hilarious Pansori showed more activation in the left temporal and prefrontal areas. These results suggest that different hemispheric asymmetries and cortical areas subserve the processing of the different emotional valences carried by the Pansori excerpts.
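The subtraction analysis described above can be sketched as a voxel-wise paired comparison between conditions; the array shapes, random data, and t-threshold below are illustrative assumptions, not the study's actual SPM-96 pipeline.

```python
import numpy as np

# Hypothetical sketch: for each voxel, compare activation estimates between
# the "sad" and "hilarious" listening conditions across subjects.
rng = np.random.default_rng(0)
n_subjects, n_voxels = 9, 1000               # nine volunteers, toy voxel count
sad = rng.normal(0.0, 1.0, (n_subjects, n_voxels))
hilarious = rng.normal(0.0, 1.0, (n_subjects, n_voxels))

diff = sad - hilarious                        # within-subject subtraction
t = diff.mean(0) / (diff.std(0, ddof=1) / np.sqrt(n_subjects))

# Voxels with large positive t favor the sad condition (e.g. right
# temporoparietal); large negative t favor the hilarious condition.
sad_dominant = np.flatnonzero(t > 3.0)
```

In practice the contrast is computed within a general linear model per voxel, but the "difference of condition estimates, scaled by its standard error" logic is the same.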


Extraversion and Recognition for Emotional Words: Effects of Valence, Frequency, and Task-difficulty (외향성과 정서단어의 재인 기억: 정서가, 빈도, 과제 난이도 효과)

  • Kang, Eunjoo
    • Korean Journal of Cognitive Science / v.25 no.4 / pp.385-416 / 2014
  • In this study, memory for emotional words was compared between extraverts and introverts, employing signal detection analysis to distinguish differences in discriminative memory from differences in response bias. Subjects were presented with a study list of emotional words in an encoding session, followed by a recognition session. Effects of task difficulty were examined by varying the nature of the encoding task and the interval between study and test. For an easy task with a retention interval of 5 minutes (Study I), introverts exhibited better memory (i.e., higher d') than extraverts, particularly for low-frequency words, and response biases did not differ between the two groups. For a difficult task with a one-month retention period (Study II), performance was poor overall and only high-frequency words were remembered; extraverts also adopted a more liberal criterion for 'old' responses (i.e., more hits and more false alarms) for words of positive emotional valence. These results suggest that as task difficulty drives down performance, the effects of internal control processes become more apparent, revealing differences between extraverts and introverts in response biases for positive words. They also show that extraversion can distort memory performance for words, depending on their emotional valence.
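The signal-detection measures used above, d' (discriminative memory) and criterion c (response bias), can be computed from recognition counts as follows; the counts and the log-linear correction are illustrative choices, not the study's data.

```python
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Compute d' and criterion c from recognition-memory counts.
    The +0.5/+1.0 log-linear correction avoids infinite z-scores
    when hit or false-alarm rates reach 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)           # discrimination
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # negative c = liberal bias
    return d_prime, criterion

# Invented counts for one subject: 50 old and 50 new test words.
d, c = dprime_and_criterion(hits=40, misses=10,
                            false_alarms=5, correct_rejections=45)
```

A "more liberal criterion" as reported for extraverts corresponds to a lower (more negative) c, driven by more hits together with more false alarms.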

Wavelet-based Statistical Noise Detection and Emotion Classification Method for Improving Multimodal Emotion Recognition (멀티모달 감정인식률 향상을 위한 웨이블릿 기반의 통계적 잡음 검출 및 감정분류 방법 연구)

  • Yoon, Jun-Han;Kim, Jin-Heon
    • Journal of IKEEE / v.22 no.4 / pp.1140-1146 / 2018
  • Recently, methodologies for analyzing complex bio-signals with deep learning models have emerged among studies that recognize human emotions. The accuracy of emotion classification can vary with the evaluation method, and reliability varies with the kind of data to be learned. For biological signals, the reliability of the data is determined by the noise ratio, so noise detection is correspondingly important. Appropriate emotion evaluation methods are also needed, depending on how emotions are defined. In this paper, we propose a wavelet-based noise-threshold-setting algorithm for verifying the reliability of multimodal bio-signal data labeled with valence and arousal, and a method for improving the emotion recognition rate by weighting the evaluation data. After extracting the wavelet components of the signal using the wavelet transform, the skewness and kurtosis of each component are computed, noise is detected at a threshold calculated by the Hampel identifier, and the training data are selected considering the noise ratio of the original signal. In addition, when classifying emotional data, a weight based on the Euclidean distance from the median of the valence-arousal plane is applied to the overall evaluation of the emotion recognition rate. To verify the proposed algorithm, we use the ASCERTAIN dataset to observe the degree of improvement in the emotion recognition rate.
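The Hampel identifier mentioned above flags samples that lie too many scaled median absolute deviations from the median. This is a minimal whole-signal sketch; the paper applies it to wavelet-component statistics, and the windowless form and k=3 cutoff here are illustrative assumptions.

```python
import numpy as np

def hampel_outliers(x, k=3.0):
    """Flag samples farther than k scaled MADs from the median.
    1.4826 rescales the MAD to an std estimate under normality."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    sigma = 1.4826 * mad
    return np.abs(x - med) > k * sigma

# A toy signal with one gross artifact at index 4.
signal = np.array([1.0, 1.1, 0.9, 1.2, 9.0, 1.0, 0.8])
mask = hampel_outliers(signal)   # True marks suspected noise
```

Because it is built on medians rather than means, the threshold itself is barely affected by the very artifacts it is meant to detect, which is why the Hampel identifier is preferred over a plain mean ± k·std rule for noisy bio-signals.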

Moderating Effects of User Gender and AI Voice on the Emotional Satisfaction of Users When Interacting with a Voice User Interface (음성 인터페이스와의 상호작용에서 AI 음성이 성별에 따른 사용자의 감성 만족도에 미치는 영향)

  • Shin, Jong-Gyu;Kang, Jun-Mo;Park, Yeong-Jin;Kim, Sang-Ho
    • Science of Emotion and Sensibility / v.25 no.3 / pp.127-134 / 2022
  • This study sought to identify the voice user interface (VUI) design parameters that evoke positive user emotions. Six VUI design parameters that could affect emotional user satisfaction were considered. The moderating effects of user gender and the design parameters were analyzed to determine the appropriate conditions for user satisfaction when interacting with the VUI. An interactive VUI system that could modify the six parameters was implemented using the Wizard of Oz experimental method. User emotions were assessed from the users' facial expression data, which were then converted into a valence score. The frequency analysis and chi-square tests found statistically significant moderating effects of gender and AI voice. These results imply that it is beneficial to consider users' gender when designing voice-based interactions. Adult/male/high-tone voices for male users and adult/female/mid-tone voices for female users are recommended as general guidelines for future VUI designs. Future analyses that consider various human factors will be able to assess human-AI interaction more precisely from a UX perspective.
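The chi-square test of independence used above can be sketched as follows; the contingency counts are invented for illustration and are not the study's data.

```python
import numpy as np

def chi_square(table):
    """Pearson chi-square statistic and degrees of freedom for a
    contingency table: compare observed counts to the counts expected
    under independence of rows and columns."""
    table = np.asarray(table, dtype=float)
    row = table.sum(1, keepdims=True)
    col = table.sum(0, keepdims=True)
    expected = row @ col / table.sum()
    stat = ((table - expected) ** 2 / expected).sum()
    dof = (table.shape[0] - 1) * (table.shape[1] - 1)
    return stat, dof

# Invented counts -- rows: male/female users; columns: satisfied/unsatisfied
# with one AI-voice condition.
stat, dof = chi_square([[30, 10], [18, 22]])
```

With 1 degree of freedom, a statistic above 3.84 rejects independence at the 5% level, i.e. satisfaction depends on user gender for that voice condition.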

A research on the emotion classification and precision improvement of EEG(Electroencephalogram) data using machine learning algorithm (기계학습 알고리즘에 기반한 뇌파 데이터의 감정분류 및 정확도 향상에 관한 연구)

  • Lee, Hyunju;Shin, Dongil;Shin, Dongkyoo
    • Journal of Internet Computing and Services / v.20 no.5 / pp.27-36 / 2019
  • In this study, experiments on emotion classification, analysis, and accuracy improvement for EEG data were conducted using the DEAP (Database for Emotion Analysis using Physiological signals) dataset. The experiment used 32 EEG channels recorded from 32 subjects. In the pre-processing step, the EEG data were sampled at 256 Hz, and the theta, slow-alpha, alpha, beta, and gamma frequency bands were extracted using a finite impulse response (FIR) filter. After the extracted data were classified through a time-frequency transform, they were purified through independent component analysis to remove artifacts. The purified data were converted into CSV format for the machine-learning experiments, and the arousal-valence plane was used as the criterion for emotion classification. The emotions were categorized into three classes: 'positive', 'negative', and 'neutral', the last denoting a tranquil emotional state. Data for the 'neutral' condition were classified using the Cz (central zero) channel configured as the reference channel. To enhance accuracy, the experiment was performed with attributes selected by an Attribute Selected Classifier (ASC). For 'arousal', the accuracy of this study's experiments was 32.48% higher than Koelstra's results, and with ASC the accuracy for 'valence' was 8.13% higher than Liu's results. In the Random Forest classifier experiment adopting ASC to improve accuracy, an accuracy 2.68% higher than the overall mean of the existing research was confirmed.
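The FIR band extraction step described above can be sketched with SciPy; the band edges and filter length are common conventions chosen here for illustration, not values taken from the paper.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

FS = 256  # sampling rate (Hz), matching the abstract
# Illustrative band edges (Hz); exact definitions vary across EEG studies.
BANDS = {"theta": (4, 8), "slow_alpha": (8, 10), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def extract_band(eeg, band, numtaps=257):
    """Extract one frequency band with a linear-phase FIR band-pass filter,
    applied forward and backward (filtfilt) for zero phase distortion."""
    low, high = BANDS[band]
    taps = firwin(numtaps, [low, high], pass_zero=False, fs=FS)
    return filtfilt(taps, 1.0, eeg)

# 4 s of synthetic "EEG": a 6 Hz (theta) plus a 40 Hz (gamma) component.
t = np.arange(FS * 4) / FS
eeg = np.sin(2 * np.pi * 6 * t) + np.sin(2 * np.pi * 40 * t)
theta = extract_band(eeg, "theta")   # keeps the 6 Hz component
```

Zero-phase filtering matters here because band power is later localized in time by the time-frequency transform, and a phase lag would smear events across windows.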

Developing Korean Affect Word List and Its Application (정서가, 각성가 및 구체성 평정을 통한 한국어 정서단어 목록 개발)

  • Hong, Youngji;Nam, Ye-eun;Lee, Yoonhyoung
    • Korean Journal of Cognitive Science / v.27 no.3 / pp.377-406 / 2016
  • Current lists of Korean emotion words either do not consider word frequency or include only emotion-expressing words such as 'joy' while disregarding emotion-inducing words like 'heaven'. Also, none of the current lists contains the concreteness level of the emotional words. Therefore, the current study aimed to develop a new Korean affect word list that addresses these limitations. To do so, in Experiment 1, valence, arousal, and concreteness ratings of 450 Korean emotion-expressing nouns and emotion-inducing nouns were collected from 399 participants. In Experiment 2, an emotional Stroop task was performed with the newly developed word list to test its usefulness. The results showed clear congruency effects between emotional words and emotion-expressing faces: response times increased and more errors were made when the emotions of the words and faces did not match than when they matched. These results suggest that the newly developed Korean affect word list can be effectively adopted in studies examining the influence of various aspects of emotion.
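The congruency effect used above to validate the list is simply the mean reaction-time difference between mismatching and matching trials; the numbers below are invented for illustration, not the study's data.

```python
import numpy as np

# Toy per-trial reaction times (ms) for one participant.
rt_congruent = np.array([540, 565, 532, 581, 559])    # word emotion matches face
rt_incongruent = np.array([602, 617, 588, 640, 611])  # word emotion mismatches face

# Congruency effect: slower responses on incongruent trials indicate that
# the word's emotional content was processed and interfered with the face task.
congruency_effect = rt_incongruent.mean() - rt_congruent.mean()
```

A reliably positive effect across participants (together with the error-rate pattern) is what licenses the conclusion that the listed words carry their intended valence.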

Multimodal Emotional State Estimation Model for Implementation of Intelligent Exhibition Services (지능형 전시 서비스 구현을 위한 멀티모달 감정 상태 추정 모형)

  • Lee, Kichun;Choi, So Yun;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.1-14 / 2014
  • Both researchers and practitioners are showing increased interest in interactive exhibition services. Interactive exhibition services are designed to respond directly to visitor responses in real time, so as to fully engage visitors' interest and enhance their satisfaction. In order to implement an effective interactive exhibition service, it is essential to adopt intelligent technologies that enable accurate estimation of a visitor's emotional state from responses to exhibited stimuli. Studies undertaken so far have attempted to estimate the human emotional state, most of them by gauging either facial expressions or audio responses. However, the most recent research suggests that a multimodal approach that uses people's multiple responses simultaneously may lead to better estimation. Given this context, we propose a new multimodal emotional state estimation model that uses various responses, including facial expressions, gestures, and movements, measured by the Microsoft Kinect sensor. In order to handle a large amount of sensory data effectively, we propose stratified-sampling-based MRA (multiple regression analysis) as our estimation method. To validate the usefulness of the proposed model, we collected 602,599 responses and emotional state data with 274 variables from 15 people. When we applied our model to the data set, we found that it estimated the levels of valence and arousal within a 10~15% error range. Since our proposed model is simple and stable, we expect it to be applied not only in intelligent exhibition services but also in other areas such as e-learning and personalized advertising.
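The stratified-sampling-based MRA idea above can be sketched as follows: draw an equal subsample from each subject (the strata), then fit valence by ordinary least squares. The shapes, the 10-feature design, and the synthetic data are illustrative assumptions, not the study's 274-variable dataset.

```python
import numpy as np

rng = np.random.default_rng(1)

n_per_subject, n_subjects, n_features = 2000, 15, 10
X = rng.normal(size=(n_per_subject * n_subjects, n_features))
subject = np.repeat(np.arange(n_subjects), n_per_subject)   # stratum labels
y = X @ rng.normal(size=n_features) + rng.normal(0.1, 0.2, len(X))

# Stratified sampling: the same number of rows from every subject, so that
# no single prolific visitor dominates the regression fit.
idx = np.concatenate([rng.choice(np.flatnonzero(subject == s), 200, replace=False)
                      for s in range(n_subjects)])
Xs, ys = X[idx], y[idx]

# Multiple regression (least squares) on the stratified sample.
Xd = np.column_stack([np.ones(len(Xs)), Xs])                # add intercept
beta, *_ = np.linalg.lstsq(Xd, ys, rcond=None)
pred = Xd @ beta
```

Fitting on the stratified subsample rather than all 600k+ rows keeps the regression cheap while preserving a balanced view of every subject, which is the practical appeal the authors cite.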

Emotion Recognition using Short-Term Multi-Physiological Signals

  • Kang, Tae-Koo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.3 / pp.1076-1094 / 2022
  • Technology for emotion recognition is an essential part of human personality analysis. Existing methods for defining human personality characteristics have relied on surveys, but communication often cannot take place without considering emotions. Hence, emotion recognition technology is an essential element for communication and has also been adopted in many other fields. A person's emotions are revealed in various ways, typically including facial, speech, and biometric responses, so emotions can be recognized from, e.g., images, voice signals, and physiological signals. Physiological signals are measured with biological sensors and analyzed to identify emotions. This study employed two sensor types. First, the existing binary arousal-valence method was subdivided into four levels per dimension to classify emotions in more detail; based on current techniques that classify each dimension as high/low, the model was further subdivided into multiple levels. Signal characteristics were then extracted using a 1-D convolutional neural network (CNN) and used to classify sixteen emotions. Although CNNs are typically used to learn 2-D images, 1-D sensor data were used as the input in this paper. Finally, the proposed emotion recognition system was evaluated with measurements from actual sensors.
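Subdividing each arousal-valence dimension into four levels yields 4 x 4 = 16 classes, as described above. A minimal sketch of that label mapping follows; the [1, 9] rating scale and the class numbering are illustrative assumptions, not the paper's encoding.

```python
def emotion_class(arousal, valence, levels=4):
    """Map continuous arousal/valence ratings in [1, 9] to one of
    levels * levels discrete emotion classes (16 for levels=4)."""
    def level(x):
        # Split [1, 9] into `levels` equal bins, clamping the top edge.
        return min(int((x - 1) / (8 / levels)), levels - 1)
    return level(arousal) * levels + level(valence)  # class id in 0..15

# e.g. low arousal/low valence vs. high arousal/high valence:
lo = emotion_class(1.0, 1.0)
hi = emotion_class(9.0, 9.0)
```

The finer grid is what lets the 1-D CNN distinguish, say, mild contentment from intense joy, which a binary high/low split collapses into one class.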

A study on the emotional changes of learners according to the emotions provided by virtual characters (가상 캐릭터가 제공하는 감정에 따른 학습자의 감정적 반응에 관한 연구)

  • Choi, Dong-Yeon
    • Journal of the Korea Convergence Society / v.13 no.5 / pp.155-164 / 2022
  • Considerable interest has been directed toward utilizing virtual environment-based simulations for teacher education, which provide authentic experience of the classroom environment and repetitive training. Emotional interaction should be considered for more advanced simulation-based learning. Since emotion is an important factor in creative thinking, inspiration, concentration, and learning motivation, identifying learners' emotional interactions and applying the results to teaching simulations is essential. In this context, this study aims to obtain objective data on empathetic responses through learners' EEG (electroencephalogram) and eye-tracking measurements, and to provide clues for designing emotional teaching simulations. The results indicated that the intended empathetic response was elicited, and that valence (positive or negative) states and situational interest played an important role in determining areas of interest. The results are expected to provide the following guidelines for designing emotional interactions in simulations for teacher education: (a) the development of avatars capable of expressing sophisticated emotions, and (b) the development of scenarios suitable for situations that cause emotional reactions.

The effect of emotional priming on the product perceived usability (정서 점화가 제품의 지각된 사용성에 미치는 영향)

  • Kim, Myung Shik;Kim, Hyo Sun;Han, Kwang-Hee
    • Science of Emotion and Sensibility / v.15 no.4 / pp.575-584 / 2012
  • Decades of psychological research have shown that emotion brings users various physical and psychological advantages and disadvantages, and that it also affects human decision-making. However, despite the weight of emotion, research combining it with HCI is still insufficient. We hypothesized that a user's momentary emotion could influence product evaluation, especially perceived usability. Two studies were carried out to investigate the effect of induced priming on user evaluations. In Experiment 1, we manipulated participants' momentary emotions using positive and negative images from the IAPS. Image priming had a statistically significant effect: the positive-condition group gave the product higher usability ratings. In Experiment 2, emotional image manipulation was conducted with valence and arousal, and we found interaction effects between the two variables. These studies demonstrate that momentarily induced emotion can affect users' emotions in different ways and, in addition, influence product evaluations.
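In a 2x2 valence-by-arousal design like Experiment 2 above, the interaction effect is the "difference of differences" between cell means. The ratings below are invented for illustration, not the study's data.

```python
import numpy as np

# Toy mean usability ratings per condition.
#                  arousal: low   high
means = np.array([[4.2, 5.1],    # valence: negative
                  [5.0, 4.3]])   # valence: positive

# Interaction contrast: does the effect of arousal differ across valence levels?
interaction = (means[1, 1] - means[1, 0]) - (means[0, 1] - means[0, 0])
# A nonzero contrast means the arousal effect changes sign or size with valence.
```

A significant interaction of this kind is exactly what the authors report: neither valence nor arousal alone determines the usability rating; their combination does.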
