• Title/Summary/Keyword: human emotion


Development of Emotion Recognition and Expression module with Speech Signal for Entertainment Robot (엔터테인먼트 로봇을 위한 음성으로부터 감정 인식 및 표현 모듈 개발)

  • Mun, Byeong-Hyeon;Yang, Hyeon-Chang;Sim, Gwi-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.11a / pp.82-85 / 2007
  • The use of service robots (cleaning robots, pet robots, multimedia robots, etc.) is currently increasing in homes and many other fields. A personal service robot must have human-friendly characteristics to win user preference, and for this the ability to recognize and express the user's emotions is essential. To recognize human emotions, many researchers work with speech, facial expressions, physiological signals, and gestures; research on recognizing and applying speech is particularly active. This paper proposes an emotion recognition system in two forms. The first uses one of the widely available speech recognition modules to classify emotion by word and feed the result to an emotion expression system. The second extracts features from the speech signal acquired through a microphone, applies Bayesian Learning (BL) to classify the pattern into five emotional states (normal, happy, sad, surprise, anger), and uses the result as input to a dynamic emotion expression algorithm, yielding an ARM-platform-based speech recognition and emotion expression system that can represent human emotion in a dynamic emotion space.

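As a rough illustration of the second approach above, a minimal Gaussian Bayesian classifier over the five emotion classes can be sketched as follows; the 2-D features and cluster positions are synthetic stand-ins, not the paper's actual speech features:

```python
import numpy as np

EMOTIONS = ["normal", "happy", "sad", "surprise", "anger"]

class GaussianBayes:
    """Per-class Gaussian likelihoods with independent features."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log p(x|c) + log p(c) for every class, then pick the argmax
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=2)
        return self.classes[np.argmax(ll + np.log(self.prior), axis=1)]

rng = np.random.default_rng(0)
# Synthetic 2-D feature clusters standing in for real pitch/energy statistics.
X = np.vstack([rng.normal(i * 3.0, 0.5, size=(20, 2)) for i in range(5)])
y = np.repeat(np.arange(5), 20)
clf = GaussianBayes().fit(X, y)
acc = (clf.predict(X) == y).mean()
```

On such well-separated synthetic clusters the classifier recovers the labels almost perfectly; real speech features overlap far more.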

Representation and Detection of Video Shots' Features for Emotional Events (감정에 관련된 비디오 셧의 특징 표현 및 검출)

  • Kang, Hang-Bong;Park, Hyun-Jae
    • The KIPS Transactions: Part B / v.11B no.1 / pp.53-62 / 2004
  • The processing of emotional information is very important in Human-Computer Interaction (HCI). In particular, dealing with a user's affective state is very important in video information processing. To handle emotional information, it is necessary to represent meaningful features and detect them efficiently. Even though it is not easy to detect emotional events from low-level features such as colour and motion, it is possible if we use statistical analysis like Linear Discriminant Analysis (LDA). In this paper, we propose a representation scheme for emotion-related features and a detection method. We experiment with features extracted from video to detect emotional events and obtain desirable results.
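The LDA step can be illustrated with a minimal two-class Fisher discriminant separating "neutral" from "emotional" shots; the color/motion feature values below are invented for the sketch:

```python
import numpy as np

def fisher_lda(X0, X1):
    # w maximizes between-class mean separation relative to within-class scatter
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw, m1 - m0)
    # simple midpoint threshold between the projected class means
    thresh = 0.5 * ((X0 @ w).mean() + (X1 @ w).mean())
    return w, thresh

rng = np.random.default_rng(1)
neutral = rng.normal([0.2, 0.1], 0.05, size=(50, 2))    # e.g. low motion, dull color
emotional = rng.normal([0.7, 0.6], 0.05, size=(50, 2))  # e.g. high motion, saturated color
w, t = fisher_lda(neutral, emotional)
scores = np.vstack([neutral, emotional]) @ w
labels = (scores > t).astype(int)  # 1 = detected emotional event
```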

Research on Classification of Human Emotions Using EEG Signal (뇌파신호를 이용한 감정분류 연구)

  • Zubair, Muhammad;Kim, Jinsul;Yoon, Changwoo
    • Journal of Digital Contents Society / v.19 no.4 / pp.821-827 / 2018
  • Affective computing has gained increasing interest in recent years with the development of potential applications in human-computer interaction (HCI) and healthcare. Although considerable research has been done on human emotion recognition, physiological signals have received less attention than speech and facial expressions. In this paper, electroencephalogram (EEG) signals from different brain regions were investigated using modified wavelet energy features. The mRMR algorithm was deployed to minimize redundancy and maximize relevance among features. EEG recordings from the publicly available DEAP database were used to classify four classes of emotions with a multi-class Support Vector Machine. The proposed approach shows significant performance gains over existing algorithms.
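A greedy mRMR-style selection can be sketched as below; note the real algorithm scores features with mutual information, while this stand-in substitutes absolute Pearson correlation for both relevance and redundancy:

```python
import numpy as np

def mrmr_select(X, y, k):
    # Greedy selection: maximize relevance to the label while penalizing
    # mean redundancy against the features already chosen.
    n_feat = X.shape[1]
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    selected = [int(np.argmax(rel))]
    while len(selected) < k:
        scores = {}
        for j in range(n_feat):
            if j in selected:
                continue
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            scores[j] = rel[j] - red
        selected.append(max(scores, key=scores.get))
    return selected

rng = np.random.default_rng(2)
y = rng.integers(0, 2, 1000).astype(float)
strong = y + rng.normal(0, 0.3, 1000)       # highly informative feature
twin = strong + rng.normal(0, 0.001, 1000)  # near-duplicate (redundant) feature
weak = 0.5 * y + rng.normal(0, 1.0, 1000)   # weakly informative but independent
X = np.column_stack([strong, twin, weak])
picked = mrmr_select(X, y, 2)
```

The selection keeps one of the redundant pair and prefers the independent weak feature over the near-duplicate, which is the point of the redundancy penalty.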

Analysis of Gait Characteristics of Walking in Various Emotion Status (다양한 감정 상태에서의 보행 특징 분석)

  • Dang, Van Chien;Tran, Trung Tin;Kim, Jong-Wook
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.5 / pp.477-481 / 2014
  • Humans have various emotions which affect thinking, judgement, activity, and the like. Walking in particular is affected by emotion: one's emotional state can easily be inferred from his or her walking style. Current research on biped walking with humanoid robots mainly focuses on stable walking regardless of ground conditions. For effective human-robot interaction, however, the walking pattern needs to change depending on the robot's emotional state. This paper provides analysis and comparison of gait experiment data for men and women in four representative emotional states, i.e., joy, sorrow, ease, and anger, acquired with a gait analysis system. The data and analysis results provided in this paper can serve as a reference for emotional biped walking of a humanoid robot.
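Two of the basic parameters such a gait comparison would report, cadence and stride time, can be computed from heel-strike timestamps; the event times below are hypothetical, not the study's data:

```python
import numpy as np

def gait_params(heel_strikes):
    steps = np.diff(heel_strikes)     # step times in seconds
    cadence = 60.0 / steps.mean()     # steps per minute
    stride_time = 2.0 * steps.mean()  # one stride = two consecutive steps
    return cadence, stride_time

joy = np.arange(0, 5, 0.45)     # brisk walking: a step roughly every 0.45 s
sorrow = np.arange(0, 8, 0.70)  # slow walking: a step roughly every 0.70 s
cad_joy, _ = gait_params(joy)
cad_sorrow, _ = gait_params(sorrow)
```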

Suggestion of Emotional Expression with Human Character in 3D Animation using Layering Method (레이어링을 사용한 3D 애니메이션 인간형 캐릭터의 감정 표현 방법 제안)

  • Kim, Joo-Chan;Suk, Hae-Jung
    • The Journal of the Korea Contents Association / v.15 no.5 / pp.1-17 / 2015
  • In the domestic game market, the video game segment is shrinking, and funding and high-level developers are decreasing as well. Software is therefore needed that helps non-expert developers produce more realistic, high-quality content in this poor environment. In this paper, we selected animations from global studios that were well received by the public and critics as well-made examples of emotional expression that convey emotion properly. Using Ekman's six basic emotions and Greimas's dynamic predicates, we selected movements that express emotions from the animation scripts, then analyzed and categorized the data. We also analyzed which data are needed to create specific emotion-expressing movements using the 'Animation Layer' employed in Unity's blending process, and we suggest the concept of a program that creates emotional expression movements from the analyzed data.
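The layering idea can be sketched as a weighted mix of an emotion-layer pose over a base pose, in the spirit of Unity's Animation Layers; joints are reduced to plain angles here, whereas real layers blend per-bone rotations, and all names are illustrative:

```python
import numpy as np

def blend_layers(base_pose, layers):
    # layers: list of (pose, weight); later layers are mixed over earlier ones
    pose = np.asarray(base_pose, dtype=float)
    for layer_pose, w in layers:
        pose = (1.0 - w) * pose + w * np.asarray(layer_pose, dtype=float)
    return pose

walk = np.array([0.0, 10.0, -5.0])         # neutral walk: three joint angles (deg)
sad_slump = np.array([20.0, -5.0, -15.0])  # hypothetical "sadness" layer pose
blended = blend_layers(walk, [(sad_slump, 0.5)])
```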

Utilizing Korean Ending Boundary Tones for Accurately Recognizing Emotions in Utterances (발화 내 감정의 정밀한 인식을 위한 한국어 문미억양의 활용)

  • Jang In-Chang;Lee Tae-Seung;Park Mikyoung;Kim Tae-Soo;Jang Dong-Sik
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.6C / pp.505-511 / 2005
  • Autonomous machines interacting with humans should be able to perceive states of emotion and attitude through implicit messages in order to obtain voluntary cooperation from their clients. Voice is the easiest and most natural way to exchange human messages. Automatic systems capable of understanding states of emotion and attitude have utilized features based on the pitch and energy of uttered sentences. The performance of existing emotion recognition systems can be further improved with the support of linguistic knowledge that specific tonal sections in a sentence are related to states of emotion and attitude. In this paper, we attempt to improve emotion recognition rates by adopting such linguistic knowledge of Korean ending boundary tones in an automatic system implemented using pitch-related features and multilayer perceptrons. Results of an experiment on a Korean emotional speech database confirm an improvement of $4\%$.
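The core idea, appending ending-boundary-tone features to utterance-level pitch statistics before the classifier, can be sketched as follows; the F0 contour and the 20 % final-region cutoff are illustrative assumptions, not the paper's exact feature set:

```python
import numpy as np

def emotion_features(f0):
    f0 = np.asarray(f0, dtype=float)
    # utterance-level pitch statistics
    global_feats = [f0.mean(), f0.std(), f0.max() - f0.min()]
    # sentence-final section, a proxy for the ending boundary tone
    tail = f0[int(0.8 * len(f0)):]
    slope = np.polyfit(np.arange(len(tail)), tail, 1)[0]  # rising vs falling tone
    return np.array(global_feats + [tail.mean(), slope])

# Synthetic contour: flat utterance with a rising ending boundary tone
rising = np.concatenate([np.full(40, 120.0), np.linspace(120, 180, 10)])
feats = emotion_features(rising)
```

The resulting vector would then feed a multilayer perceptron; a rising final tone shows up as a positive slope in the last component.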

Emotion Coding of Sijo Crying Cuckoo at the Empty Mountain (시조 「공산에 우는 접동」의 감정 코딩)

  • Park, Inkwa
    • The Journal of the Convergence on Culture Technology / v.5 no.1 / pp.13-20 / 2019
  • This study aims to identify codes that can encode the Sijo's emotional codes into AI and use them in literature therapy. We carried out emotional coding of the Sijo Crying Cuckoo at the Empty Mountain. As a result, the Emotion Codon was able to indicate a state of sadness catharsis. Implanting the Sijo's emotional codes into the Emotion Codon in this way is like implanting human emotions into AI. If the basic emotion codes are implanted in the Emotion Codon and the AI's self-learning is induced, we think the AI can combine the various emotions that occur in the human body. The AI could then stand in for human emotions, which can be useful in treating them. We believe that continuing this study will induce human emotions that heal the mind and spirit.

Rendering of general paralyzed patient's emotion by using EEG (뇌파 신호를 이용한 전신마비환자의 감정표현)

  • Kim, Su-Jong;Kim, Young-Chol;Lee, Tae-Soo
    • Proceedings of the KIEE Conference / 2007.10a / pp.343-344 / 2007
  • This paper introduces a method by which totally paralyzed patients who have difficulty expressing themselves can express affirmation and negation using their EEG. Furthermore, we analyzed which brain regions respond sensitively to positive and negative states according to human emotion. The work also aims to interface the system with a computer to measure EEG changes in those regions. To this end, we implemented a preamplifier that amplifies the weak EEG signal and an analog circuit that passes only the EEG frequency band while rejecting artifacts. The signal measured from the cerebral cortex is transmitted to a computer, where a real-time Fast Fourier Transform (FFT) classifies its frequency bands. Based on the classified EEG, we present a method for expressing human affirmation and negation.

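The FFT-based band classification step can be sketched as below; the alpha-versus-beta decision rule and the 256 Hz sampling rate are illustrative stand-ins for the paper's actual hardware and classifier:

```python
import numpy as np

FS = 256  # assumed sampling rate (Hz)

def band_power(x, lo, hi):
    # integrate FFT power over the [lo, hi) frequency band
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)
    return spec[(freqs >= lo) & (freqs < hi)].sum()

def positive_negative(eeg):
    # toy rule: strong alpha (8-13 Hz) relative to beta (13-30 Hz) -> "positive"
    alpha = band_power(eeg, 8, 13)
    beta = band_power(eeg, 13, 30)
    return "positive" if alpha > beta else "negative"

t = np.arange(FS) / FS                # one-second epoch
relaxed = np.sin(2 * np.pi * 10 * t)  # alpha-dominant test signal
tense = np.sin(2 * np.pi * 20 * t)    # beta-dominant test signal
```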

3D Facial Modeling and Synthesis System for Realistic Facial Expression (자연스러운 표정 합성을 위한 3차원 얼굴 모델링 및 합성 시스템)

  • 심연숙;김선욱;한재현;변혜란;정창섭
    • Korean Journal of Cognitive Science / v.11 no.2 / pp.1-10 / 2000
  • Research on realistic facial animation, in which humans and computers communicate via the face, has grown recently. The human face is the part of the body we use to recognize individuals and an important communication channel for understanding inner states such as emotion. To provide an intelligent interface, computer facial animation should look human when talking and expressing itself. Facial modeling and animation research has recently focused on realism. In this article, we suggest a facial modeling and animation method for realistic facial synthesis. A 3D facial model for an arbitrary face can be built from a generic facial model; for a more accurate and realistic face, we built a Korean generic facial model. Facial synthesis can also be manipulated based on the physical characteristics of real facial muscle and skin. Many applications can be developed, such as teleconferencing, education, and movies.

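A simplified linear stand-in for the muscle-based synthesis is blendshape-style deformation of a neutral mesh toward expression targets; the vertex data and expression names below are made up for illustration:

```python
import numpy as np

def apply_expression(neutral, targets, weights):
    # displacement-based blending: v = neutral + sum_i w_i * (target_i - neutral)
    out = neutral.astype(float).copy()
    for tgt, w in zip(targets, weights):
        out += w * (tgt - neutral)
    return out

neutral = np.zeros((4, 3))  # four rest-pose vertices of a tiny toy mesh
smile = np.array([[0, 1, 0], [0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
frown = -smile
face = apply_expression(neutral, [smile, frown], [0.6, 0.0])
```

A physically based muscle model replaces these linear displacements with contraction forces propagated through a skin model, but the blending structure is similar.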

Capacitive Skin Piloerection Sensors for Human Emotional State Cognition (인간의 감정변화 상태 인지를 위한 정전용량형 피부 입모근 수축 감지센서)

  • Kim, Jaemin;Seo, Dae Geon;Cho, Young-Ho
    • Transactions of the Korean Society of Mechanical Engineers B / v.39 no.2 / pp.147-152 / 2015
  • We designed, fabricated, and tested capacitive microsensors for skin piloerection monitoring. The performance of the skin piloerection monitoring sensor was characterized using an artificial bump representing a human skin goosebump, resulting in a sensitivity of $-0.00252%/{\mu}m$ and a nonlinearity of 25.9 % for artificial goosebump deformation in the range of $0{\sim}326{\mu}m$. We also verified two successive human skin piloerections of 3.5 s duration on the subject's dorsal forearms, resulting in capacitance changes of -6.2 fF and -9.2 fF relative to the initial condition, corresponding to piloerection intensities of $145{\mu}m$ and $194{\mu}m$, respectively. It was demonstrated experimentally that the proposed sensor is capable of measuring human skin piloerection objectively and quantitatively, thereby suggesting a quantitative evaluation method of the qualitative human emotional state for cognitive human-machine interface applications.
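The reported sensitivity of -0.00252 %/µm lets a measured relative capacitance change be converted back to goosebump deformation; the -0.365 % input below is chosen so the result lands near the paper's 145 µm figure, and assumes the response is treated as linear:

```python
SENSITIVITY = -0.00252  # % capacitance change per micrometer of deformation

def deformation_um(delta_c_percent):
    # invert the linear sensitivity to recover piloerection intensity
    return delta_c_percent / SENSITIVITY

bump = deformation_um(-0.365)  # a -0.365 % capacitance drop -> ~145 um
```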