• Title/Abstract/Keyword: Angry

160 search results

차량 시뮬레이터를 이용한 운전행동 연구(운전분노 및 교통정체를 중심으로) (A Study of Drivers' Behaviors Using a Driving Simulator (with Special Reference to Driving Anger and Traffic Congestion))

  • 송혜수;신용균;강수철
    • 대한교통학회지 / Vol. 23, No. 2 / pp. 61-74 / 2005
  • This study investigated the effects of driving anger and traffic congestion on driving behavior. Driving anger, the anger experienced while driving, is a personal trait disposition; it is expressed as aggressive driving when the driver encounters provocative or challenging situations on the road. Like other personality traits, driving anger varies across individuals, so the degree of anger felt in a given traffic situation differs with the driver's trait level, and this difference is likely to appear as a difference in aggressive driving. Using the RTSA-DS driving simulator, we presented three virtual traffic situations (free flow, congestion in the driving lane, and obstruction by a slow lead vehicle) and compared driving behavior by driving-anger level. Drivers high in driving anger drove faster in the congested-lane situation than drivers low in driving anger, attempted lane changes to escape the congestion, and in doing so were involved in collisions at a higher rate. These results are consistent with previous findings that drivers high in driving anger engage in aggressive and risky driving under congestion. However, since the high driving-anger group consisted mainly of drivers in their twenties and driving anger correlates strongly with age, caution is needed when applying these results.

Attentional Bias to Emotional Stimuli and Effects of Anxiety on the Bias in Neurotypical Adults and Adolescents

  • Mihee Kim;Jejoong Kim;So-Yeon Kim
    • 감성과학 / Vol. 25, No. 4 / pp. 107-118 / 2022
  • Humans can rapidly detect and deal with dangerous elements in their environment, and this generally manifests as attentional bias toward threat. Past studies have reported that this attentional bias is affected by anxiety level. Other studies, however, have argued that children and adolescents show attentional bias to threatening stimuli regardless of their anxiety levels. Few studies have directly compared the two age groups in terms of attentional bias to threat, and furthermore, most previous studies have focused on attentional capture and the early stages of attention, without investigating subsequent attentional holding by the stimuli. In this study, we investigated both attentional bias patterns (attentional capture and holding) with respect to negative emotional stimuli in neurotypical adults and adolescents, and also examined the effects of anxiety level on attentional bias. For adult participants, abrupt onset of a distractor delayed attentional capture of the target regardless of distractor type (angry or neutral face), while it had no effect on attentional holding. In adolescents, on the other hand, only the angry-face distractor resulted in longer reaction times for detecting the target. Regarding anxiety, state anxiety showed a significant positive correlation with attentional capture by a face distractor in adult participants but not in adolescents. Overall, this is the first study to investigate developmental tendencies of attentional bias to negative facial emotion in both adults and adolescents, providing novel evidence on attentional bias to threat at different ages. Our results can be applied to understanding the attentional mechanisms in people with emotion-related developmental disorders, as well as in typical development.

Statistical Speech Feature Selection for Emotion Recognition

  • Kwon Oh-Wook;Chan Kwokleung;Lee Te-Won
    • The Journal of the Acoustical Society of Korea / Vol. 24, No. 4E / pp. 144-151 / 2005
  • We evaluate the performance of emotion recognition via speech signals when an ordinary speaker talks to an entertainment robot. For each frame of a speech utterance, we extract the frame-based features: pitch, energy, formants, band energies, mel-frequency cepstral coefficients (MFCCs), and the velocity/acceleration of pitch and MFCCs. For discriminative classifiers, a fixed-length utterance-based feature vector is computed from the statistics of the frame-based features. Using a speaker-independent database, we evaluate the performance of two promising classifiers: the support vector machine (SVM) and the hidden Markov model (HMM). For angry/bored/happy/neutral/sad emotion classification, the SVM and HMM classifiers yield 42.3% and 40.8% accuracy, respectively. We show that this accuracy is significant compared to the performance of foreign human listeners.
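The pipeline in this abstract (frame-based features reduced to a fixed-length vector of statistics) can be sketched roughly as follows. librosa and the exact feature subset are assumptions for illustration, not the authors' toolkit:

```python
# Hedged sketch: frame-based features -> utterance-level statistics vector.
import numpy as np
import librosa

def utterance_feature_vector(wav_path: str) -> np.ndarray:
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # (13, n_frames)
    d_mfcc = librosa.feature.delta(mfcc)                 # velocity of MFCCs
    f0 = librosa.yin(y, fmin=50, fmax=400, sr=sr)        # frame-based pitch
    rms = librosa.feature.rms(y=y)                       # frame energy
    # Align frame counts across feature tracks before stacking.
    n = min(mfcc.shape[1], len(f0), rms.shape[1])
    frames = np.vstack([mfcc[:, :n], d_mfcc[:, :n],
                        f0[np.newaxis, :n], rms[:, :n]])
    # Fixed-length utterance vector: per-feature mean and standard deviation.
    return np.concatenate([frames.mean(axis=1), frames.std(axis=1)])
```

Vectors like these would then feed a discriminative classifier such as an SVM, as the abstract describes.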

감정기반 정보 검색시스템에 관한 연구 (A Study on Emotion based Information Retrieval System)

  • 김명관;박영택
    • 한국문헌정보학회지 / Vol. 32, No. 4 / pp. 105-115 / 1998
  • With the spread of the Internet and the enormous growth in its user base, the Internet has become not merely a target of information retrieval but a venue where the general public enjoys leisure and culture. To meet this demand, we propose an emotion-based document retrieval and classification system, called ECRAS. Emotion components are extracted using Roget's thesaurus and WordNet, and retrieval over the emotion-annotated documents is performed with the k-NN method.

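As a rough illustration of the retrieval step, the following sketch runs k-NN over documents already reduced to emotion-component vectors; the thesaurus/WordNet extraction itself is not reproduced, and all names and dimensions are hypothetical:

```python
# Hedged sketch: k-NN retrieval over precomputed emotion-component vectors.
import numpy as np

def knn_retrieve(query_vec, doc_vecs, doc_ids, k=5):
    """Return ids of the k documents whose emotion vectors are
    nearest (Euclidean) to the query's emotion vector."""
    dists = np.linalg.norm(doc_vecs - query_vec, axis=1)
    nearest = np.argsort(dists)[:k]
    return [doc_ids[i] for i in nearest]

# Toy usage: 4-dimensional vectors (e.g., anger/joy/sadness/fear weights).
docs = np.array([[0.8, 0.1, 0.0, 0.1],
                 [0.1, 0.7, 0.1, 0.1],
                 [0.2, 0.1, 0.6, 0.1]])
print(knn_retrieve(np.array([0.9, 0.0, 0.0, 0.1]), docs, ["d1", "d2", "d3"], k=2))
```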

감성적 인간 로봇 상호작용을 위한 음성감정 인식 (Speech emotion recognition for affective human robot interaction)

  • 장광동;권오욱
    • 한국HCI학회:학술대회논문집 / 한국HCI학회 2006년도 학술대회 1부 / pp. 555-558 / 2006
  • Emotional speech is one of the cues that lets a listener infer the speaker's psychological state. For smooth affective interaction between humans and robots, we present a method for extracting features from speech signals and classifying the emotion they convey. Basic acoustic and prosodic features are extracted from the speech signal, a feature vector of statistics computed from them is formed, and a support vector machine (SVM) pattern classifier assigns one of six emotions: angry, bored, happy, neutral, sad, and surprised. The SVM achieved a recognition rate of 51.4%, against 60.4% for human judges. Moreover, when the speaker-assigned emotion labels of the database were replaced with the emotion states judged by multiple listeners, SVM classification still reached 51.2% accuracy, indicating that the basic features used for emotion recognition are effective.

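A minimal sketch of the SVM classification stage described above, with scikit-learn standing in for whatever implementation the authors used, and random placeholder vectors in place of the real utterance statistics:

```python
# Hedged sketch: six-way emotion classification with an RBF-kernel SVM.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

EMOTIONS = ["angry", "bored", "happy", "neutral", "sad", "surprised"]

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 56))           # placeholder feature vectors
y = rng.integers(0, len(EMOTIONS), 120)  # placeholder labels

# Standardizing features before an RBF SVM is standard practice.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[:100], y[:100])
print("held-out accuracy:", clf.score(X[100:], y[100:]))
```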

모의 지능로봇에서의 음성 감정인식 (Speech Emotion Recognition on a Simulated Intelligent Robot)

  • 장광동;김남;권오욱
    • 대한음성학회지:말소리 / No. 56 / pp. 173-183 / 2005
  • We propose a speech emotion recognition method for an affective human-robot interface. In the proposed method, emotion is classified into six classes: angry, bored, happy, neutral, sad, and surprised. Features for an input utterance are extracted from statistics of phonetic and prosodic information. Phonetic information includes log energy, shimmer, formant frequencies, and Teager energy; prosodic information includes pitch, jitter, duration, and rate of speech. Finally, a pattern classifier based on Gaussian support vector machines decides the emotion class of the utterance. We recorded speech commands and dialogs uttered 2 m away from microphones in five different directions. Experimental results show that the proposed method yields 48% classification accuracy, while human classifiers give 71% accuracy.

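Two of the features named above, jitter and shimmer, can be approximated in their simplest "local" form as below; production tools such as Praat use more elaborate definitions, so this is illustrative only:

```python
# Hedged sketch: simplest "local" jitter and shimmer measures.
import numpy as np

def local_jitter(periods: np.ndarray) -> float:
    """Mean absolute difference of consecutive pitch periods,
    normalized by the mean period."""
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def local_shimmer(amplitudes: np.ndarray) -> float:
    """The same measure applied to per-period peak amplitudes."""
    return np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)
```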

운율 특성 벡터와 가우시안 혼합 모델을 이용한 감정인식 (Emotion Recognition using Prosodic Feature Vector and Gaussian Mixture Model)

  • 곽현석;김수현;곽윤근
    • 한국소음진동공학회:학술대회논문집 / 한국소음진동공학회 2002년도 추계학술대회논문집 / pp. 762-766 / 2002
  • This paper describes an emotion recognition algorithm using the HMM (hidden Markov model) method. The relationship between mechanical systems and humans has so far been one-sided, which is why people are reluctant to grow familiar with today's multi-service robots. If the capability of emotion recognition is granted to a robot system, the conception of the machine will change considerably. Pitch and energy extracted from human speech are important factors for classifying emotions (neutral, happy, sad, angry, etc.); these are called prosodic features. Among several methods, the HMM is a powerful and effective framework for constructing a statistical model from characteristic vectors made up of mixtures of prosodic features.

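A hedged sketch of the Gaussian-mixture approach named in the title: one GMM per emotion is fit on prosodic frame vectors (e.g., pitch and energy), and an utterance is assigned to the class whose model scores it highest. The HMM variant discussed in the abstract would add temporal states on top of these mixtures.

```python
# Hedged sketch: one GMM per emotion, max-likelihood decision.
from sklearn.mixture import GaussianMixture

def train_gmms(frames_by_emotion, n_components=4, seed=0):
    """frames_by_emotion: dict mapping label -> (n_frames, n_dims) array."""
    return {label: GaussianMixture(n_components, random_state=seed).fit(X)
            for label, X in frames_by_emotion.items()}

def classify(gmms, utterance_frames):
    # score() returns the average per-frame log-likelihood under each model.
    scores = {label: g.score(utterance_frames) for label, g in gmms.items()}
    return max(scores, key=scores.get)
```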

특징 선택과 융합 방법을 이용한 음성 감정 인식 (Speech Emotion Recognition using Feature Selection and Fusion Method)

  • 김원구
    • 전기학회논문지 / Vol. 66, No. 8 / pp. 1265-1271 / 2017
  • In this paper, a speech parameter fusion method is studied to improve the performance of a conventional emotion recognition system. To this end, the combination of cepstrum parameters and pitch parameters that shows the best performance is selected. Various pitch parameters were generated from the pitch of speech using numerical and statistical methods. Performance evaluation was carried out on an emotion recognition system based on a Gaussian mixture model (GMM) to select the pitch parameters that performed best in combination with the cepstrum parameters, using sequential feature selection as the selection method. In an experiment distinguishing the four emotions of neutral, joy, sadness, and anger, fifteen of the 56 candidate pitch parameters were selected and, fused with cepstrum and delta-cepstrum coefficients, showed the best recognition performance: a 48.9% reduction in error relative to an emotion recognition system using only pitch parameters.
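A minimal sketch of the greedy sequential (forward) selection step, using scikit-learn's SequentialFeatureSelector as a stand-in for the authors' procedure; the 56 candidate pitch parameters, the labels, and the k-NN scorer are placeholders:

```python
# Hedged sketch: forward sequential feature selection down to 15 parameters.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 56))  # 56 candidate pitch parameters (placeholder)
y = rng.integers(0, 4, 200)     # four emotion labels (placeholder)

selector = SequentialFeatureSelector(
    KNeighborsClassifier(), n_features_to_select=15, direction="forward")
selector.fit(X, y)
print("selected parameter indices:", np.flatnonzero(selector.get_support()))
```

Forward selection greedily adds, at each step, the single parameter that most improves cross-validated accuracy, which matches the paper's stated selection method in spirit.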

보호관찰 청소년과 일반 청소년의 도덕적 정서 (Comparison of Moral Emotions in Juvenile Offenders on Probation with Non-offenders)

  • 이희정;이성칠
    • 아동학회지 / Vol. 26, No. 2 / pp. 107-120 / 2005
  • Three types of socio-moral transgression events were used to test the moral emotions and attributions of 30 juvenile offenders on probation against a comparison group of 30 non-offenders. Data were analyzed by chi-square. Juvenile offenders on probation expected victimizers to feel happier and less guilty after acts of victimization such as physical harm, theft, and lying than the comparison group did. Non-offenders were more likely than offenders to expect that victims would feel angry and upset. Juvenile offenders gave more variable and less adaptive emotional responses. Offenders provided victimization and emotional-distance attributions, whereas the comparison group provided moral attributions or causal-dependent attributions such as fairness and justice.

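For readers unfamiliar with the analysis, a chi-square test on a group-by-response contingency table looks like the following; the counts are invented for illustration and are not the study's data:

```python
# Hedged sketch: chi-square test of independence on a 2x3 contingency table.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[18, 8, 4],   # offenders: happy / guilty / other (made up)
                  [6, 19, 5]])  # non-offenders (made up)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```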

음성신호기반의 감정분석을 위한 특징벡터 선택 (Discriminative Feature Vector Selection for Emotion Classification Based on Speech)

  • 최하나;변성우;이석필
    • 전기학회논문지 / Vol. 64, No. 9 / pp. 1363-1368 / 2015
  • Recently, computers have become smaller owing to advances in computing technology, and many wearable devices have emerged; as a result, recognition of human emotion by computers has come to be considered important, and research on analyzing emotional states is increasing. The human voice carries much information about emotion. This paper proposes a discriminative feature-vector selection method for emotion classification based on speech. Feature vectors such as pitch, MFCC, LPC, and LPCC are extracted from voice signals divided into four emotion classes (happy, neutral, sad, angry), and the separability of the extracted feature vectors is compared using the Bhattacharyya distance, so that more effective feature vectors can be recommended for emotion classification.
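The separability measure used above has a closed form when each class's feature values are modeled as a Gaussian; a small sketch follows (the Gaussian modeling is the standard assumption for this distance, though the abstract does not spell out the computation). Larger distances mean better class separation.

```python
# Hedged sketch: Bhattacharyya distance between two Gaussian class models.
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    cov = (cov1 + cov2) / 2.0
    diff = mu1 - mu2
    # Mahalanobis-like term for the separation of the means.
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    # Term penalizing mismatched covariances.
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

# Toy usage with two 2-D classes.
d = bhattacharyya_gaussian(np.array([0.0, 0.0]), np.eye(2),
                           np.array([2.0, 1.0]), 1.5 * np.eye(2))
print(f"Bhattacharyya distance: {d:.3f}")
```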