• Title/Summary/Keyword: Emotion Capture

Development of Future Soldier Battle Jacket Design based on the Measurement by Motion Capture System (동작에 따른 체표변화 측정결과를 이용한 미래병사 전투복 설계안 개발 -Motion Capture System 계측법을 중심으로)

  • Park, Seon-Hyeong;Yang, Jin-Hui;Jeong, Gi-Sam;Chae, Jae-Uk;Kim, Hyeon-Jun;Choe, Ui-Jung;Lee, Ju-Hyeon
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2009.11a / pp.206-209 / 2009
  • In the future information-warfare environment, soldiers will carry out combat missions as a single system that acquires various kinds of information during battle and responds to it rapidly, and future soldiers will enter combat with various high-tech devices attached to their bodies. Even while carrying additional advanced digital devices, they are expected to wear a 'smart battle jacket' with the device components embedded in the clothing, so that mobility and movement are not hindered and combat capability is instead improved through greater information power. Using VICON's Motion Capture System, which provides accurate motion capture solutions, this study aims to develop a smart battle jacket design suited to the future battlefield by measuring and analyzing body surface changes according to motion and placing the devices on appropriate body parts.

Attentional Bias to Emotional Stimuli and Effects of Anxiety on the Bias in Neurotypical Adults and Adolescents

  • Mihee Kim;Jejoong Kim;So-Yeon Kim
    • Science of Emotion and Sensibility / v.25 no.4 / pp.107-118 / 2022
  • Humans can rapidly detect and deal with dangerous elements in their environment, a capacity that generally manifests as an attentional bias toward threat. Past studies have reported that this attentional bias is affected by anxiety level. Other studies, however, have argued that children and adolescents show attentional bias to threatening stimuli regardless of their anxiety levels. Few studies have directly compared the two age groups in terms of attentional bias to threat, and most previous studies have focused on attentional capture and the early stages of attention without investigating the subsequent attentional holding by the stimuli. In this study, we investigated both attentional bias patterns (attentional capture and holding) with respect to negative emotional stimuli in neurotypical adults and adolescents. The effects of anxiety level on attentional bias were also examined. The results for adult participants showed that the abrupt onset of a distractor delayed attentional capture to the target regardless of distractor type (angry or neutral faces), while it had no effect on attentional holding. In adolescents, on the other hand, only the angry-face distractor resulted in longer reaction times for detecting a target. Regarding anxiety, state anxiety showed a significant positive correlation with attentional capture by a face distractor in adult participants but not in adolescents. Overall, this is the first study to investigate developmental tendencies of attentional bias to negative facial emotion in both adults and adolescents, providing novel evidence on attentional bias to threats at different ages. Our results can be applied to understanding the attentional mechanisms of people with emotion-related developmental disorders, as well as of those with typical development.

A Study on Emotion Recognition of Chunk-Based Time Series Speech (청크 기반 시계열 음성의 감정 인식 연구)

  • Hyun-Sam Shin;Jun-Ki Hong;Sung-Chan Hong
    • Journal of Internet Computing and Services / v.24 no.2 / pp.11-18 / 2023
  • Recently, in the field of speech emotion recognition (SER), many studies have been conducted to improve accuracy using voice features and modeling. In addition to modeling studies that improve the accuracy of existing voice emotion recognition, various studies using voice features are under way. In this paper, focusing on the fact that vocal emotion is related to the flow of time, voice files are separated into chunks at fixed time intervals in a time-series manner. After separation, we propose a model that classifies the emotion of speech data by extracting the speech features Mel spectrogram, Chroma, zero-crossing rate (ZCR), root mean square (RMS) energy, and mel-frequency cepstral coefficients (MFCC), and applying them to recurrent neural network models used for sequential data processing. In the proposed method, voice features are extracted from all files using the 'librosa' library and fed to the neural network models. The experiments compare and analyze the performance of recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) models on the English Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset.
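
For readers who want to see the shape of this pipeline, here is a minimal sketch, not the authors' code: it extracts the five named features per fixed-length chunk with the librosa calls the abstract mentions and feeds the chunk sequence to a small PyTorch GRU classifier. The file path, chunk length, hidden size, and number of emotion classes are illustrative assumptions.

```python
import librosa
import numpy as np
import torch
import torch.nn as nn

def chunk_features(path, sr=16000, chunk_sec=1.0):
    """Split the waveform into fixed-length chunks and extract the five
    features named in the abstract, time-averaged within each chunk."""
    y, sr = librosa.load(path, sr=sr)
    step = int(sr * chunk_sec)
    rows = []
    for i in range(0, len(y) - step + 1, step):
        c = y[i:i + step]
        rows.append(np.concatenate([
            librosa.feature.melspectrogram(y=c, sr=sr).mean(axis=1),   # Mel
            librosa.feature.chroma_stft(y=c, sr=sr).mean(axis=1),      # Chroma
            librosa.feature.zero_crossing_rate(c).mean(axis=1),        # ZCR
            librosa.feature.rms(y=c).mean(axis=1),                     # RMS
            librosa.feature.mfcc(y=c, sr=sr, n_mfcc=40).mean(axis=1),  # MFCC
        ]))
    return np.stack(rows)  # shape: (num_chunks, 182)

class GRUEmotion(nn.Module):
    """Stand-in for one of the three compared recurrent models (RNN/LSTM/GRU)."""
    def __init__(self, in_dim=182, hidden=128, n_emotions=4):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_emotions)

    def forward(self, x):      # x: (batch, num_chunks, in_dim)
        _, h = self.gru(x)
        return self.fc(h[-1])  # emotion logits

# Usage (hypothetical file):
# seq = torch.from_numpy(chunk_features("utterance.wav")).float().unsqueeze(0)
# logits = GRUEmotion()(seq)
```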

Posture features and emotion predictive models for affective postures recognition (감정 자세 인식을 위한 자세특징과 감정예측 모델)

  • Kim, Jin-Ok
    • Journal of Internet Computing and Services / v.12 no.6 / pp.83-94 / 2011
  • A main research issue in affective computing is to give a machine the ability to recognize a person's emotion and to react to it properly. Efforts in that direction have mainly focused on facial and oral cues for reading emotions. Postures have recently been considered as well. This paper aims to discriminate emotions from posture by identifying and measuring the saliency of the posture features that play a role in affective expression. To do so, affective postures are first collected from human subjects using a motion capture system; emotional features of posture are then described with spatial features. Through standard statistical techniques, we verified that there is a statistically significant correlation between the emotion intended by the acting subjects and the emotion perceived by the observers. Discriminant analysis is used to build affective posture predictive models and to measure the saliency of the proposed set of posture features in discriminating between six basic emotional states. The evaluation of the proposed features and models is performed using the correlation between the actor and observer posture sets. Quantitative experimental results show that the proposed set of features discriminates well between emotions and that the built predictive models perform well.
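
As a rough illustration of the discriminant-analysis step the abstract describes, the sketch below fits scikit-learn's LinearDiscriminantAnalysis on stand-in posture feature vectors for six emotional states; the data shapes are invented, and the saliency readout via discriminant weights is one common choice rather than necessarily the paper's.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Stand-in data: one spatial feature vector per captured posture
# (e.g., joint angles, limb extents), labeled with the intended emotion.
rng = np.random.default_rng(0)
X = rng.normal(size=(180, 24))    # 180 postures x 24 posture features
y = rng.integers(0, 6, size=180)  # 6 basic emotional states, labels 0..5

lda = LinearDiscriminantAnalysis()
print(cross_val_score(lda, X, y, cv=5).mean())  # discrimination accuracy

# Feature saliency can be read off the magnitude of the discriminant weights.
saliency = np.abs(lda.fit(X, y).coef_).mean(axis=0)
```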

Biped Animation Blending By 3D Studio MAX Script (맥스 스크립트를 이용한 바이페드 애니메이션 합성)

  • Choe, Hong-Seok
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2008.10a / pp.131-134 / 2008
  • Today, 3D character animation is easily encountered in most visual media, including live-action films, animation, games, and advertising. A character's smooth movement is typically the product of motion capture or of key-frame work by skilled animators. Such work requires expensive equipment or substantial manpower, and the finished results are hard to modify or to apply effects to. This study blends Biped poses and animations through operations on three-dimensional rotation values using 3D Studio MAX Script and proposes a method for more realistic blending.

Jointly Image Topic and Emotion Detection using Multi-Modal Hierarchical Latent Dirichlet Allocation

  • Ding, Wanying;Zhu, Junhuan;Guo, Lifan;Hu, Xiaohua;Luo, Jiebo;Wang, Haohong
    • Journal of Multimedia Information System / v.1 no.1 / pp.55-67 / 2014
  • Image topic and emotion analysis is an important component of online image retrieval, which has become very popular in the rapidly growing social media community. However, due to the gaps between images and texts, there is very limited work in the literature that detects an image's topics and emotions in a unified framework, although topics and emotions are two levels of semantics that often work together to comprehensively describe an image. In this work, a unified model, the Joint Topic/Emotion Multi-Modal Hierarchical Latent Dirichlet Allocation (JTE-MMHLDA) model, which extends the previous LDA, mmLDA, and JST models to capture topic and emotion information from heterogeneous data at the same time, is proposed. Specifically, a two-level graphical model is built to share topics and emotions across the whole document collection. The experimental results on a Flickr dataset indicate that the proposed model efficiently discovers images' topics and emotions, significantly outperforming the text-only system by 4.4% and the vision-only system by 18.1% in topic detection, and outperforming the text-only system by 7.1% and the vision-only system by 39.7% in emotion detection.
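
JTE-MMHLDA itself is a custom hierarchical model with no reference code given here, but its plain-LDA building block is easy to demonstrate. The sketch below runs standard LDA on a few invented captions standing in for the textual modality of Flickr images; it is a baseline illustration only, not the proposed multi-modal model.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Invented captions standing in for the textual modality of Flickr images.
docs = [
    "sunset beach calm golden ocean",
    "dog park happy play sunshine",
    "rainy street lonely night umbrella",
]
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # per-image topic mixture
```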

Stylized Image Generation based on Music-image Synesthesia Emotional Style Transfer using CNN Network

  • Xing, Baixi;Dou, Jian;Huang, Qing;Si, Huahao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.4 / pp.1464-1485 / 2021
  • The emotional style of multimedia artworks is abstract content information. This study explores an emotional style transfer method and seeks a way of matching music with appropriate images with respect to emotional style. Deep convolutional neural networks (DCNNs) can capture style and provide an iterative emotional style transfer solution for affective image generation. Here, we learn image emotion features via DCNNs and map the affective style onto other images. We set the image emotion feature as the style target in this style transfer problem and conducted experiments on affective image generation for eight emotion categories: dignified, dreaming, sad, vigorous, soothing, exciting, joyous, and graceful. A user study was conducted to test the synesthesia emotional image style transfer results against ground-truth user perception triggered by the music-image pairs' stimuli. The transferred affective image results for music-image emotional synesthesia perception proved effective according to the user study results.
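
The abstract does not spell out the transfer loss, but iterative DCNN style transfer of this kind conventionally matches Gram-matrix statistics of convolutional feature maps. A minimal sketch of that conventional style loss, assuming feature maps already extracted from some pretrained network, is shown below; whether the paper uses exactly this formulation is an assumption.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    """Channel co-activation statistics of one conv feature map; the
    standard 'style' representation in iterative DCNN style transfer."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(gen_feats, target_feats):
    """Sum of Gram-matrix MSEs over the selected network layers; here the
    style target would come from an emotion-matched reference image."""
    return sum(F.mse_loss(gram_matrix(g), gram_matrix(t))
               for g, t in zip(gen_feats, target_feats))
```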

Emotion Image Retrieval through Query Emotion Descriptor and Relevance Feedback (질의 감성 표시자와 유사도 피드백을 이용한 감성 영상 검색)

  • Yoo Hun-Woo
    • Journal of KIISE: Software and Applications / v.32 no.3 / pp.141-152 / 2005
  • A new emotion-based image retrieval method is proposed in this paper. Query emotion descriptors called the query color code and query gray code are designed based on human evaluation of 13 emotions ('like', 'beautiful', 'natural', 'dynamic', 'warm', 'gay', 'cheerful', 'unstable', 'light', 'strong', 'gaudy', 'hard', 'heavy') when 30 random patterns with different colors, intensities, and dot sizes are presented. For emotion image retrieval, once a query emotion is selected, the associated query color code and query gray code are selected. Next, a DB color code and DB gray code that capture color, intensity, and dot size are extracted from each database image, and a matching process between the two color codes and between the two gray codes is performed to retrieve relevant emotion images. A new relevance feedback method is also proposed. The method incorporates human intention into the retrieval process by dynamically updating the weights of the query and DB color codes and the weights within the query color code. In experiments over 450 images, the number of positive images was higher than that of negative images at the initial query and increased with relevance feedback.
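
The exact weight-update rule of the proposed relevance feedback is not given in the abstract, so the following is only a generic sketch of the idea: code components that agree with user-marked positive images gain weight, and components that agree with negatives lose it. The function name, code representation, and learning rate are all illustrative assumptions.

```python
import numpy as np

def update_weights(weights, query_code, pos_codes, neg_codes, lr=0.1):
    """Generic feedback step: components of the emotion code that agree
    with user-marked positive images gain weight; those that agree with
    negatives lose weight. Codes are integer arrays of equal length."""
    pos = np.mean([query_code == c for c in pos_codes], axis=0) if pos_codes else 0.0
    neg = np.mean([query_code == c for c in neg_codes], axis=0) if neg_codes else 0.0
    w = np.clip(weights + lr * (pos - neg), 1e-6, None)
    return w / w.sum()  # keep weights positive and normalized
```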

Bipeds Animation Blending By 3D Studio MAX Script (맥스 스크립트를 이용한 바이페드 애니메이션 합성)

  • Choi, Hong-Seok;Jeong, Jae-Wook
    • Science of Emotion and Sensibility / v.12 no.3 / pp.259-266 / 2009
  • Today, 3D character animation is easily found in most visual media, such as live-action films, animation, games, and advertising. However, a character's smooth movement is the result of long key-frame work by skilled animators on data obtained through expensive equipment such as motion capture, so modifying it or adding other effects is not easy. In some cases, a character's action made according to an animator's personal feeling can differ from the universal expectations of the audience, because it is difficult to formulate generalized rules relating a character's action as designed by the animator to the emotional reaction of the audience. This research presents a tool that makes it easy to blend and modify two or three sets of Biped animation data, using the three-dimensional rotation operations of 3D Studio MAX Script. With this tool, E.A.M., various studies of the quantitative relation between walking and emotional reaction become possible.
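
The tool itself is written in 3D Studio MAX Script, which is not reproduced here. As a hedged Python analogue of the core operation, blending two joint rotations reduces to spherical interpolation between their quaternion representations; the joint angles and blend weight below are invented for illustration.

```python
from scipy.spatial.transform import Rotation, Slerp

# Invented joint rotations from two clips (e.g., neutral walk vs. sad walk),
# expressed as Euler angles in degrees.
key_rots = Rotation.from_euler("xyz", [[10, 0, 5], [25, -5, 0]], degrees=True)

# Spherically interpolate: weight 0 keeps clip A's pose, weight 1 clip B's.
slerp = Slerp([0.0, 1.0], key_rots)
blended = slerp([0.3])[0]
print(blended.as_euler("xyz", degrees=True))
```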
