• Title/Abstract/Keyword: Emotion processing


공황장애의 감정 인식 및 조절 메커니즘 (Emotion Recognition and Regulation Mechanism in Panic Disorder)

  • 김유라;이경욱
    • 대한불안의학회지, Vol. 7, No. 1, pp. 3-8, 2011
  • Cognitive models of panic disorder have emphasized the role of cognitive distortions in the maintenance and treatment of panic disorder (PD). However, the patient's difficulty with identifying and managing emotional experiences might contribute to an enduring vulnerability to panic attacks. Numerous researchers, employing emotion processing paradigms and neuroimaging techniques, have investigated the empirical evidence for poor emotion processing in PD. For years, researchers considered that abnormal emotion processing in PD might reflect a dysfunction of the frontal-temporal-limbic circuits. Although neuropsychological studies have not provided consistent results regarding this model, a few studies have tried to find the biological basis of dysfunctional emotion processing in PD. In this article, we examine the possibility of dysregulation of emotion processing in PD. Specifically, we discuss the neural basis of emotion processing and the manner in which such neurocognitive impairments may help clarify the core symptoms of PD.

다치 논리함수를 이용한 감성처리 모델 (An Emotion Processing Model using Multiple Valued Logic Functions)

  • 정환묵
    • 한국지능시스템학회논문지, Vol. 19, No. 1, pp. 13-18, 2009
  • Human emotion is ambiguous and changes in various ways in response to external stimuli. Plutchik proposed an emotion model that classifies basic patterns into eight behavioral patterns and inferred mixed emotions from combinations of pure emotions. In this paper, we propose a method for processing Plutchik's emotion model using a multiple-valued logic automata model based on the difference properties of multiple-valued logic functions. The proposed emotion processing model is expected to be widely applicable to the interpretation and processing of emotion data.
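
As a rough, hypothetical illustration of the Plutchik model the paper builds on (not of its multiple-valued logic automaton), the sketch below combines adjacent primary emotions into the commonly cited primary dyads; the dyad table and function names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: mixing Plutchik's eight primary emotions into primary dyads.
# The paper's multiple-valued logic automaton is not reproduced here.

PRIMARY = ["joy", "trust", "fear", "surprise",
           "sadness", "disgust", "anger", "anticipation"]

# Primary dyads: mixtures of adjacent primary emotions on Plutchik's wheel.
DYADS = {
    frozenset({"joy", "trust"}): "love",
    frozenset({"trust", "fear"}): "submission",
    frozenset({"fear", "surprise"}): "awe",
    frozenset({"surprise", "sadness"}): "disapproval",
    frozenset({"sadness", "disgust"}): "remorse",
    frozenset({"disgust", "anger"}): "contempt",
    frozenset({"anger", "anticipation"}): "aggressiveness",
    frozenset({"anticipation", "joy"}): "optimism",
}

def mix(e1: str, e2: str) -> str:
    """Return the mixed emotion for two primary emotions, if a dyad is defined."""
    if e1 not in PRIMARY or e2 not in PRIMARY:
        raise ValueError("unknown primary emotion")
    return DYADS.get(frozenset({e1, e2}), "undefined mixture")

if __name__ == "__main__":
    print(mix("joy", "trust"))   # -> love
    print(mix("fear", "anger"))  # -> undefined mixture (non-adjacent emotions)
```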

AE-Artificial Emotion

  • Xuyan, Tu;Liqun, Han
    • 제어로봇시스템학회:학술대회논문집, ICCAS 2003, pp. 146-149, 2003
  • This paper proposes the concept of "Artificial Emotion" (AE). The goal of AE is the simulation, extension, and expansion of natural emotion, especially human emotion. The object of AE is machine emotion and the emotion machine. The contents of AE are emotion recognition, emotion measurement, emotion understanding, emotion representation, emotion generation, emotion processing, emotion control, and emotion communication. The methodology, technology, scientific significance, and application value of artificial emotion are discussed.


Emotion Detection Algorithm Using Frontal Face Image

  • Kim, Moon-Hwan;Joo, Young-Hoon;Park, Jin-Bae
    • 제어로봇시스템학회:학술대회논문집, ICCAS 2005, pp. 2373-2378, 2005
  • An emotion detection algorithm using frontal facial images is presented in this paper. The algorithm is composed of three main stages: an image processing stage, a facial feature extraction stage, and an emotion detection stage. In the image processing stage, the face region and facial components are extracted using a fuzzy color filter, a virtual face model, and histogram analysis. The features for emotion detection are extracted from the facial components in the facial feature extraction stage. In the emotion detection stage, a fuzzy classifier is adopted to recognize emotion from the extracted features. Experimental results show that the proposed algorithm can detect emotion well.
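
To make the three-stage structure concrete, here is a minimal, hypothetical pipeline sketch; OpenCV's Haar cascade and a toy threshold rule stand in for the paper's fuzzy color filter, virtual face model, and fuzzy classifier, and the feature set is an assumption for illustration only.

```python
# Hypothetical three-stage pipeline mirroring the paper's structure:
# (1) face/component extraction, (2) feature extraction, (3) emotion classification.
import cv2
import numpy as np

def extract_face(image):
    """Stage 1: detect and crop the face region (placeholder for the fuzzy color filter)."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return gray[y:y + h, x:x + w]

def extract_features(face: np.ndarray) -> np.ndarray:
    """Stage 2: compute simple intensity features (illustrative only)."""
    face = cv2.resize(face, (64, 64)).astype(np.float32) / 255.0
    upper, lower = face[:32, :], face[32:, :]          # rough eye vs. mouth regions
    return np.array([upper.mean(), lower.mean(), face.std()])

def classify_emotion(features: np.ndarray) -> str:
    """Stage 3: toy threshold rule in place of the paper's fuzzy classifier."""
    return "happy" if features[1] > features[0] else "neutral"

if __name__ == "__main__":
    img = cv2.imread("frontal_face.jpg")               # hypothetical input image
    face = extract_face(img)
    if face is not None:
        print(classify_emotion(extract_features(face)))
```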


The Effect of Emotional Certainty on Attitudes in Advertising

  • Bok, Sang Yong;Min, Dongwon
    • Asia Marketing Journal, Vol. 14, No. 4, pp. 57-75, 2013
  • It is a well-established finding that emotion influences cognitive processing. Extensive prior research on emotion has shown that emotional factors, such as affect, mood, and feeling, serve as information indicating whether one has enough knowledge. Most of these findings focused on the effect of emotional valence (i.e., one's subjective positivity or negativity associated with the emotion). Recently, several studies on emotion suggest that there is another dimension of emotion, which affects the type of cognitive processing. These studies argue that emotional certainty facilitates heuristic processing, whereas emotional uncertainty promotes systematic processing. Based on these findings, the current study examines the effect of certainty on attitudes and recall. Specifically, the authors investigate the effect of certainty on how much effort individuals use to process advertising information and how certainty affects attitude formation toward the advertised product. The authors also focus on recall to clarify the working mechanism of certainty on attitudes, because recall performance reflects the depth of information processing. Based on previous findings, the authors hypothesize that uncertainty (vs. certainty) leads to more favorable attitudes as well as better recall, and conduct an experiment using a fictitious advertisement with 218 participants. The results confirm the predicted effect of certainty only on attitudes, not on recall. A possible explanation of this discrepancy between attitudes and recall lies in the measurement method, unaided recall. To rule out this possibility, the authors perform an additional analysis with the participants who recalled any correct information from the target advertisement. The results show that certainty has a negative effect on both attitudes and recall. A bootstrapping test reveals that recall mediates the effect of certainty on attitudes. This result confirms that certainty decreases elaboration, which in turn leads to less favorable attitudes relative to uncertainty. Additionally, our data show the association among certainty, recall, and attitudes through the indirect effect of certainty on attitudes via recall. This research encourages practitioners to consider target audiences' emotional certainty before delivering a persuasive message, by showing that uncertainty promotes effortful processing, which in turn leads to better memory and more favorable attitudes.
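
As an illustration of the kind of bootstrapping mediation test described above (certainty -> recall -> attitudes), here is a minimal percentile-bootstrap sketch on simulated data; the variable names, effect sizes, and OLS-based indirect-effect computation are assumptions for illustration, not the authors' analysis.

```python
# Minimal sketch of a percentile-bootstrap test of an indirect effect
# (certainty -> recall -> attitude). Data are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 218                                                   # sample size, as in the study
certainty = rng.integers(0, 2, n).astype(float)           # 0 = uncertain, 1 = certain
recall = 2.0 - 0.8 * certainty + rng.normal(0, 1, n)      # path a (negative, hypothetical)
attitude = 3.0 + 0.6 * recall + rng.normal(0, 1, n)       # path b (positive, hypothetical)

def slopes(X, y):
    """OLS coefficients of y on the columns of X (intercept added)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

def indirect_effect(idx):
    a = slopes(certainty[idx], recall[idx])[0]                         # certainty -> recall
    b = slopes(np.column_stack([recall[idx], certainty[idx]]),         # recall -> attitude,
               attitude[idx])[0]                                       # controlling for certainty
    return a * b

boot = np.array([indirect_effect(rng.integers(0, n, n)) for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(np.arange(n)):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
# If the confidence interval excludes zero, recall mediates the certainty -> attitude link.
```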


적은 양의 음성 및 텍스트 데이터를 활용한 멀티 모달 기반의 효율적인 감정 분류 기법 (Efficient Emotion Classification Method Based on Multimodal Approach Using Limited Speech and Text Data)

  • 신미르;신유현
    • 정보처리학회 논문지, Vol. 13, No. 4, pp. 174-180, 2024
  • This paper explores an emotion classification method based on multimodal learning using the wav2vec 2.0 and KcELECTRA models. It is known that multimodal learning, which uses speech and text data together, can significantly improve emotion classification performance compared to using speech alone. To extract effective features from text data, this study compares BERT and BERT-derived models, which have shown strong performance in natural language processing, and selects the best-performing model as the text processing model. As a result, the KcELECTRA model was found to perform particularly well on the emotion classification task. In addition, experiments using a dataset released on AI-Hub show that using text data together with speech achieves better performance with less data than using speech data alone. The best result, an accuracy of 96.57%, was obtained with the KcELECTRA model. This demonstrates that multimodal learning can provide meaningful performance improvements for complex natural language processing tasks such as emotion classification.
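
A minimal sketch of the kind of speech-plus-text setup the abstract describes, pairing a wav2vec 2.0 speech encoder with a KcELECTRA text encoder; the Hugging Face checkpoint names, mean pooling, concatenation-plus-linear-head fusion, and the number of emotion classes are assumptions for illustration, not the authors' architecture.

```python
# Minimal sketch: pooled wav2vec 2.0 (speech) and KcELECTRA (text) embeddings,
# concatenated and classified with a linear head. Illustrative only.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel, Wav2Vec2Model

class MultimodalEmotionClassifier(nn.Module):
    def __init__(self, num_emotions: int = 7):                      # class count is an assumption
        super().__init__()
        self.speech_encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
        self.text_encoder = AutoModel.from_pretrained("beomi/KcELECTRA-base")
        hidden = (self.speech_encoder.config.hidden_size
                  + self.text_encoder.config.hidden_size)
        self.classifier = nn.Linear(hidden, num_emotions)

    def forward(self, waveform, input_ids, attention_mask):
        # Mean-pool each encoder's last hidden states, then concatenate.
        speech = self.speech_encoder(waveform).last_hidden_state.mean(dim=1)
        text = self.text_encoder(input_ids=input_ids,
                                 attention_mask=attention_mask).last_hidden_state.mean(dim=1)
        return self.classifier(torch.cat([speech, text], dim=-1))

if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained("beomi/KcELECTRA-base")
    model = MultimodalEmotionClassifier()
    enc = tokenizer("오늘 정말 행복해요", return_tensors="pt")
    audio = torch.randn(1, 16000)          # one second of dummy 16 kHz audio
    logits = model(audio, enc["input_ids"], enc["attention_mask"])
    print(logits.shape)                    # torch.Size([1, 7])
```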

Emotion Recognition based on Multiple Modalities

  • Kim, Dong-Ju;Lee, Hyeon-Gu;Hong, Kwang-Seok
    • 융합신호처리학회논문지, Vol. 12, No. 4, pp. 228-236, 2011
  • Emotion recognition plays an important role in the research area of human-computer interaction, as it allows more natural and more human-like communication between humans and computers. Most previous work on emotion recognition focused on extracting emotions from face, speech, or EEG information separately. Therefore, a novel approach that combines face, speech, and EEG to recognize human emotion is presented in this paper. The individual matching scores obtained from face, speech, and EEG are combined using a weighted-summation operation, and the fused score is used to classify the human emotion. In the experimental results, the proposed approach gives an improvement of more than 18.64% over the most successful unimodal approach and also performs better than approaches that integrate only two of the modalities. These results confirm that the proposed approach achieves a significant performance improvement and is very effective.
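
The weighted-summation score fusion described above can be sketched in a few lines; the emotion labels, per-modality scores, and weights below are made up for illustration and do not reproduce the paper's values.

```python
# Minimal sketch of weighted-summation score fusion across face, speech, and EEG.
import numpy as np

EMOTIONS = ["anger", "happiness", "sadness", "neutral"]   # illustrative label set

def fuse_scores(scores_by_modality, weights):
    """Combine per-modality matching scores with a weighted sum and pick the winner."""
    fused = np.zeros(len(EMOTIONS))
    for modality, scores in scores_by_modality.items():
        fused += weights[modality] * scores
    return EMOTIONS[int(np.argmax(fused))]

if __name__ == "__main__":
    scores = {
        "face":   np.array([0.10, 0.60, 0.20, 0.10]),
        "speech": np.array([0.30, 0.40, 0.20, 0.10]),
        "eeg":    np.array([0.25, 0.35, 0.25, 0.15]),
    }
    weights = {"face": 0.5, "speech": 0.3, "eeg": 0.2}     # hypothetical weights
    print(fuse_scores(scores, weights))                     # -> happiness
```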

Emotion-Based Control

  • Ko, Sung-Bum;Lim, Gi-Young
    • 한국감성과학회:학술대회논문집, Proceedings of the 2000 Spring Conference of KOSES and International Sensibility Ergonomics Symposium, pp. 306-311, 2000
  • We human beings use the powers of reason and emotion simultaneously, which helps us adapt flexibly to a dynamic environment. We assert that this principle can be applied to general systems; that is, it should be possible to improve adaptability by covering a digital-oriented information processing system with an analog-oriented emotion layer. In this paper, we propose a vertical slicing model that contains an emotion layer, and we show that emotion-based control can improve the adaptability of a system, at least under some conditions.


암묵적/외현적 과제에서 나타난 정신병질특성집단의 얼굴 정서 처리: 사건관련전위 연구 (Exploring facial emotion processing in individuals with psychopathic traits during the implicit/explicit tasks: An ERP study)

  • 이예지;김영윤
    • 한국심리학회지:법, Vol. 12, No. 2, pp. 99-120, 2021
  • This study examined how individuals with psychopathic traits differ in processing emotional faces. Fifteen participants with psychopathic traits and fifteen controls were selected using the Psychopathic Personality Inventory-Revised. Participants performed a task in which negative faces (angry, fearful, and sad) and neutral, expressionless faces were used as stimuli. Event-related potentials were recorded while participants performed an implicit task, judging the gender of the stimulus, and an explicit task, judging its emotion. Late positive potential (LPP) amplitudes were analyzed to examine differences in emotion processing between the psychopathic-trait group and the control group. In the implicit task, no significant group differences were found. In the explicit task, however, a significant interaction between group and emotion emerged over the fronto-central region. Unlike the control group, which showed similar LPP amplitudes for negative and neutral faces, the psychopathic-trait group showed larger amplitudes for neutral faces than for negative faces. These results can be seen as reflecting abnormal emotion processing in individuals with psychopathic traits.

감정 인지를 위한 음성 및 텍스트 데이터 퓨전: 다중 모달 딥 러닝 접근법 (Speech and Textual Data Fusion for Emotion Detection: A Multimodal Deep Learning Approach)

  • 에드워드 카야디;송미화
    • 한국정보처리학회:학술대회논문집, 2023 Fall Conference, pp. 526-527, 2023
  • Speech emotion recognition (SER) is one of the interesting topics in the machine learning field. Developing a multimodal speech emotion recognition system offers numerous benefits. This paper explains how BERT, as the text recognizer, and a CNN, as the speech recognizer, are fused to build a multimodal SER system.