• Title/Summary/Keyword: emotional expression phrase (감정 표현구)


Combining Sentimental Expression-level and Sentence-level Classifiers to Improve Subjective Sentence Classification (감정 표현구 단위 분류기와 문장 단위 분류기의 결합을 통한 주관적 문장 분류의 성능 향상)

  • Kang, In-Ho
    • The KIPS Transactions:PartB
    • /
    • v.14B no.7
    • /
    • pp.559-566
    • /
    • 2007
  • Subjective sentences express opinions, emotions, evaluations, and other subjective ideas about products or events. Such expressions sometimes appear in only part of a sentence, so extracting features from the full sentence can degrade subjective-sentence classification. This paper presents a method for improving a subjectivity classifier by combining two classifiers built from different representations of an input sentence: one representation is a sentimental phrase, an automatically identified subjective or objective expression, and the other is the full sentence. From each representation, modified n-grams are extracted, each composed of a word and the polarity information of its contextual words. The best performance, 79.7% accuracy (a 2.5% improvement), was obtained when the phrase-level and sentence-level classifiers were merged.
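
The combination step lends itself to a simple illustration. Below is a minimal sketch, assuming scikit-learn, in which plain word n-grams stand in for the paper's modified n-grams (the exact features and merge rule are not given here) and the two classifiers are merged by averaging their class probabilities:

```python
# A minimal sketch of the two-representation idea; plain n-grams and
# probability averaging are illustrative stand-ins for the paper's
# modified n-grams and its actual combination rule.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def train_combined(phrases, sentences, labels):
    """Train one classifier per representation of the same training items."""
    vec_p, vec_s = CountVectorizer(ngram_range=(1, 2)), CountVectorizer(ngram_range=(1, 2))
    clf_p = MultinomialNB().fit(vec_p.fit_transform(phrases), labels)
    clf_s = MultinomialNB().fit(vec_s.fit_transform(sentences), labels)
    return (vec_p, clf_p), (vec_s, clf_s)

def predict_combined(models, phrase, sentence):
    """Merge the two classifiers by averaging their class probabilities."""
    (vec_p, clf_p), (vec_s, clf_s) = models
    prob_p = clf_p.predict_proba(vec_p.transform([phrase]))[0]
    prob_s = clf_s.predict_proba(vec_s.transform([sentence]))[0]
    merged = (prob_p + prob_s) / 2  # hypothetical merge rule
    return clf_p.classes_[merged.argmax()]
```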

Development of Vision based Emotion Recognition Robot (비전 기반의 감정인식 로봇 개발)

  • Park, Sang-Sung;Kim, Jung-Nyun;An, Dong-Kyu;Kim, Jae-Yeon;Jang, Dong-Sik
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2005.07b
    • /
    • pp.670-672
    • /
    • 2005
  • This paper concerns a vision-based emotion recognition robot. We propose face detection and emotion recognition algorithms that use skin color and the geometric information of the face, and we describe the robot system we developed. For face detection, the RGB color space is converted to the CIELab color space, candidate skin regions are extracted, and the face is detected through the geometric correlations of a Face Filter. The geometric features are then used to locate the eyes, nose, and mouth, which serve as the basic data for expression recognition. Emotion recognition windows are applied to the eyebrow and mouth regions, and feature values for emotion recognition are extracted from changes in pixel values and in size within each window. The extracted values are compared with samples obtained in advance through experiments to determine the emotion; the recognized emotion is transmitted to the robot via serial communication, and the robot expresses it through the motors mounted on its face.

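As an illustration of the color-space step in the abstract above, here is a minimal sketch using OpenCV; the CIELab threshold values are made-up placeholders, not the paper's calibrated ranges, and the geometric "Face Filter" is only indicated by a comment:

```python
# A minimal sketch of RGB -> CIELab conversion and skin-candidate masking.
# The Lab bounds below are illustrative placeholders only.
import cv2
import numpy as np

def skin_candidates(bgr_image):
    """Convert to CIELab and mask pixels that fall in a rough skin range."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2Lab)
    lower = np.array([20, 130, 130], dtype=np.uint8)   # illustrative bounds
    upper = np.array([255, 175, 180], dtype=np.uint8)
    mask = cv2.inRange(lab, lower, upper)
    # Geometric filtering (the paper's "Face Filter") would go here:
    # keep connected components whose shape and proportions resemble a face.
    return cv2.bitwise_and(bgr_image, bgr_image, mask=mask)
```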

A study about the aspect of translation on 'Hu(怖)' in novel 『Kokoro』 - Focusing on novels translated in Korean and English - (소설 『こころ』에 나타난 감정표현 '포(怖)'에 관한 번역 양상 - 한국어 번역 작품과 영어 번역 작품을 중심으로 -)

  • Yang, Jung-soon
    • Cross-Cultural Studies
    • /
    • v.53
    • /
    • pp.131-161
    • /
    • 2018
  • Emotional expressions reveal the internal condition of mind or consciousness. They include vocabulary that describes emotion; sentence constructions that express emotion, such as exclamatory sentences and rhetorical questions; interjections; appellations; causative and passive forms; attitudinal adverbs; and style of writing. This study focuses on vocabulary that describes emotion and analyzes how expressions of 'Hu(怖)' in "Kokoro" are translated. The translations were analyzed in three categories: part of speech, handling of subjects, and classification of meanings. Expressions of 'Hu(怖)' were sometimes translated with the vocabulary suggested in the dictionary, but not always. Japanese vocabulary describing the emotion of 'Hu(怖)' was mostly translated into the corresponding part of speech in Korean; some adverbs required an added verb in translation, and different vocabulary was sometimes added or substituted to intensify the emotion. In English, however, the correspondence of parts of speech differed from Korean. Japanese sentences that expressed 'Hu(怖)' with verbs were often translated into participial expressions of passive verbs such as 'fear', 'dread', 'worry', and 'terrify', and idioms were translated with attention to the function rather than the form of the sentence. Likewise, what was expressed with adverbs often did not accompany a verb of 'Hu(怖)' but was translated into participial expressions of passive verbs and adjectives such as 'dread', 'worry', and 'terrify'. The main agents of emotion appeared in the first and third person in simple sentences. When the main agent was the first person, the basic Japanese word order was preserved in Korean, though adverbs of time and degree tended to be added; in English, the first-person agent was placed in subject position, but things or causes of events sometimes took the subject position to show the degree of 'Hu(怖)' the agent experienced. When the main agent was the third person, expressions of conjecture and supposition, or of a visual or auditory basis, were added in translation. In simple sentences without an explicit agent of emotion, the subject could be omitted in Korean, since it is recoverable from context even though it is an essential component; in English these omitted subjects were recovered and translated, and they were not necessarily the human agents of emotion but could be things or causes of events that specified the emotional expression.

A Study of Emotional Dimension for Mixed Feelings (복합적 감정(mixed feelings)에 대한 감정차원 연구)

  • Han, Eui-Hwan;Cha, Hyung-Tai
    • Science of Emotion and Sensibility
    • /
    • v.16 no.4
    • /
    • pp.469-480
    • /
    • 2013
  • In this paper, we propose a new method to reduce variance and express mixed feelings in Russell's emotional dimension (the circumplex model). The circumplex model places the means and variances of emotions (joy, sadness, happiness, enjoyment, etc.) in the PAD (Pleasure, Arousal, Dominance) dimensions using a self-report instrument (SAM: Self-Assessment Manikin). Other researchers have consistently pointed out two problems with Russell's model. First, the data (emotional words) gathered by Russell's method have too large a variance, making it difficult to isolate valid values. Second, the model cannot properly represent mixed feelings because of a structural limitation: it has a single pleasure dimension. To solve these problems, we change the survey method so as to reduce the variance, and we then conduct a survey designed to induce mixed feelings, in order to probe the positive/negative (pleasure) component of emotion and to confirm that Russell's model can express mixed feelings. With this method we obtain data of high reliability and accuracy, and Russell's model can be applied in many other fields, such as bio-signals, mixed feelings, and realistic broadcasting.
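
To make the variance problem concrete, here is a minimal sketch that places emotion words in a two-dimensional pleasure/arousal space from repeated SAM-style ratings and reports their per-dimension variance; the ratings are invented examples, not the paper's data:

```python
# A minimal sketch of circumplex-style placement: each emotion word gets a
# mean position and a variance in pleasure/arousal space. Ratings are
# made-up examples on a 1..9 SAM-style scale.
import numpy as np

ratings = {  # word -> list of (pleasure, arousal) ratings
    "joy": [(8, 7), (7, 6), (8, 8)],
    "sad": [(2, 3), (3, 2), (2, 4)],
}

for word, points in ratings.items():
    pts = np.asarray(points, dtype=float)
    mean, var = pts.mean(axis=0), pts.var(axis=0)
    print(f"{word}: mean P/A = {mean}, variance = {var}")
```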

Emotion Prediction System using Movie Script and Cinematography (영화 시나리오와 영화촬영기법을 이용한 감정 예측 시스템)

  • Kim, Jinsu
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.12
    • /
    • pp.33-38
    • /
    • 2018
  • Recent work attempts to predict emotion from diverse information and to convey the emotional information a director wants the audience to receive. Audiences, in turn, read the flow of emotion through non-dialogue information such as cinematography, scene backgrounds, and background sound. In this paper, we propose extracting emotions by mixing not only the context of the script but also cinematography information such as color, background sound, composition, and arrangement. That is, we propose an emotion prediction system that learns and distinguishes the various emotional expression techniques of the dialogue and non-dialogue regions, contributes to the completeness of the film, and adapts quickly to new changes. Compared with a modified n-gram analysis and with a morphological analysis, the proposed system improves precision by about 5.1% and 0.4%, respectively, and recall by about 4.3% and 1.6%.
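
The dialogue/non-dialogue fusion can be pictured with a minimal sketch like the following, assuming scikit-learn; the feature matrices are hypothetical stand-ins for the script n-grams and the color/sound/composition cues the abstract mentions, and simple concatenation stands in for whatever fusion step the paper actually uses:

```python
# A minimal sketch of fusing script-text features with cinematography
# features before classification; hypothetical feature matrices.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_emotion_classifier(X_text, X_cinema, y):
    """Concatenate dialogue (script) and non-dialogue (cinematography) features."""
    X = np.hstack([X_text, X_cinema])  # concatenation as the fusion step
    return LogisticRegression(max_iter=1000).fit(X, y)

def predict_emotion(clf, x_text, x_cinema):
    """Predict the emotion label for one scene's fused feature vector."""
    return clf.predict(np.hstack([x_text, x_cinema]).reshape(1, -1))[0]
```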

Park Dong-chul's Photography Lecture (박동철의 사진강좌)

  • 한국광학기기협회
    • The Optical Journal
    • /
    • s.126
    • /
    • pp.68-71
    • /
    • 2010
  • Composition is used in a broader sense than framing, and the two should be distinguished, but in the practice of photographic art they interact to express the subject. Both composition and framing start from the human sense of balance and the eye for harmony, which evoke universal feelings in anyone. Such techniques become one's own by being applied and exercised on the basis of an accurate understanding of photographic methods, and they determine the value of a work. In this second installment, we look at composition and framing in photography.


Analysis of Voice Color Similarity for the development of HMM Based Emotional Text to Speech Synthesis (HMM 기반 감정 음성 합성기 개발을 위한 감정 음성 데이터의 음색 유사도 분석)

  • Min, So-Yeon;Na, Deok-Su
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.15 no.9
    • /
    • pp.5763-5768
    • /
    • 2014
  • Maintaining a consistent voice color across the normal voice and the various emotional voices is important in a single synthesizer. When a synthesizer is built from recordings with overly expressed emotions, the voice color cannot be maintained, and the synthetic speech can sound like different speakers. In this paper, speech data were recorded and the change in voice color was analyzed to develop an emotional HMM-based speech synthesizer. Building such a synthesizer requires recording a voice and constructing a database, and the recording process is especially important for an emotional synthesizer: monitoring is needed because emotion is difficult to define and to hold at a particular level. The realized synthesizer used a normal voice and three emotional voices (happiness, sadness, anger), with each emotional voice recorded at two levels, high and low. To analyze the voice color of the normal and emotional voices, the average spectrum, accumulated over vowel segments, was measured, and the F1 (first formant) calculated from the average spectrum was compared. The voice similarity of the low-level emotional data was higher than that of the high-level data, and the proposed method allows the recordings to be monitored through changes in voice similarity.
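
As an illustration of the F1 comparison described above, here is a minimal sketch that estimates the first formant of a vowel segment via LPC root-finding, assuming librosa; the sample rate, LPC order, and 90 Hz cutoff are illustrative choices, not the paper's settings:

```python
# A minimal sketch of F1 (first formant) estimation from a vowel segment
# via LPC; parameters are illustrative, not the paper's settings.
import numpy as np
import librosa

def estimate_f1(vowel, sr=16000, order=12):
    """Return the lowest formant-like frequency (Hz) of a vowel segment."""
    a = librosa.lpc(vowel.astype(float), order=order)
    roots = [r for r in np.roots(a) if np.imag(r) > 0]
    freqs = sorted(np.angle(r) * sr / (2 * np.pi) for r in roots)
    freqs = [f for f in freqs if f > 90]  # drop near-DC artifacts
    return freqs[0] if freqs else None
```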

Asynchronous development of young gifted children by parents' perception (부모의 지각에 따른 유아영재의 비동시적 발달특성)

  • 윤형주;윤여홍
    • Journal of Gifted/Talented Education
    • /
    • v.13 no.1
    • /
    • pp.65-80
    • /
    • 2003
  • The purpose of this study was to investigate the asynchronous development of young gifted children as perceived by their parents. In total, 145 parents of young gifted children aged 30 months to 6 years 10 months participated, in three groups. The major findings were as follows. (1) The mean level of the developmental characteristics was high average, and the developmental subscales tended to be high, with verbal understanding/expression the highest, followed by intellectual capacity, emotional maturity, visual-motor coordination, morality, self-behavior control, emotion control, physical development, social development, peer relationships, and leadership ability. (2) There were significant differences between intellectual capacity and verbal understanding/expression on the one hand and physical development, social development, self-behavior control, and emotion control on the other. For younger children, there were significant differences between physical development and self-behavior control and emotion control. For older children, there were significant differences between verbal understanding/expression and visual-motor coordination; between social development and peer relationships and self-behavior control and emotion control; between leadership ability and self-behavior control and emotion control; and between morality and self-behavior control. These findings suggest that young gifted children have special needs because of these developmental differences.

A Study on the Performance of Music Retrieval Based on the Emotion Recognition (감정 인식을 통한 음악 검색 성능 분석)

  • Seo, Jin Soo
    • The Journal of the Acoustical Society of Korea
    • /
    • v.34 no.3
    • /
    • pp.247-255
    • /
    • 2015
  • This paper studies the performance of music search based on automatically recognized music-emotion labels. As with other media such as speech, images, and video, a song can evoke certain emotions in listeners, and those emotions can be important criteria when people look for songs. However, very little work has examined how music-emotion labels perform in music search. In this paper, we use the three axes of human music perception (valence, activity, tension) and five basic emotion labels (happiness, sadness, tenderness, anger, fear) to measure music similarity for search. Experiments were conducted on both genre and singer datasets. The search accuracy of the proposed emotion-based music search reached up to 75% of that of the conventional feature-based search, and combining the two methods improved search accuracy by up to 14%.
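
The emotion-based similarity measure can be sketched minimally as follows: each song is represented by an eight-dimensional vector following the labels listed above (valence, activity, tension, plus the five basic emotions), and songs are ranked by Euclidean distance; the vectors and the choice of distance are illustrative assumptions:

```python
# A minimal sketch of ranking songs by distance between recognized
# emotion vectors; the values below are made-up examples.
import numpy as np

def rank_by_emotion(query_vec, library):
    """Return song ids sorted by Euclidean distance to the query's emotion vector."""
    dists = {sid: np.linalg.norm(np.asarray(v) - np.asarray(query_vec))
             for sid, v in library.items()}
    return sorted(dists, key=dists.get)

library = {"songA": [0.8, 0.6, 0.2, 0.9, 0.1, 0.3, 0.0, 0.1],
           "songB": [0.2, 0.3, 0.7, 0.1, 0.8, 0.2, 0.3, 0.4]}
print(rank_by_emotion([0.7, 0.5, 0.3, 0.8, 0.2, 0.3, 0.1, 0.1], library))
```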

Enhancing Empathic Reasoning of Large Language Models Based on Psychotherapy Models for AI-assisted Social Support (인공지능 기반 사회적 지지를 위한 대형언어모형의 공감적 추론 향상: 심리치료 모형을 중심으로)

  • Yoon Kyung Lee;Inju Lee;Minjung Shin;Seoyeon Bae;Sowon Hahn
    • Korean Journal of Cognitive Science
    • /
    • v.35 no.1
    • /
    • pp.23-48
    • /
    • 2024
  • Building human-aligned artificial intelligence (AI) for social support remains challenging despite advances in Large Language Models (LLMs). We present a novel method, Chain of Empathy (CoE) prompting, that uses insights from psychotherapy to induce LLMs to reason about human emotional states. The method is inspired by several psychotherapy approaches (Cognitive-Behavioral Therapy (CBT), Dialectical Behavior Therapy (DBT), Person-Centered Therapy (PCT), and Reality Therapy (RT)), each of which leads to a different pattern of interpreting clients' mental states. LLMs without CoE reasoning generated predominantly exploratory responses; with CoE reasoning, they produced a more comprehensive range of empathic responses aligned with each psychotherapy model's reasoning pattern. For empathic-expression classification, the CBT-based CoE yielded the most balanced classification of empathic-expression labels and generation of empathic responses, whereas for emotion reasoning, approaches such as DBT and PCT performed better on emotion-reaction classification. We further conducted qualitative analysis and alignment scoring of each prompt's output. The findings underscore the importance of understanding emotional context and how it affects human-AI communication. Our research contributes to understanding how psychotherapy models can be incorporated into LLMs, facilitating the development of context-aware, safe, and empathically responsive AI.
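
A CoE-style prompt might look like the following minimal sketch; the CBT reasoning steps are a paraphrase of the idea in the abstract, not the paper's actual prompt text:

```python
# A minimal sketch of a Chain of Empathy (CoE) prompt template; the CBT
# wording below is an illustrative paraphrase, not the paper's prompt.
COE_CBT_PROMPT = """You are an empathic assistant.
Before responding, reason step by step in the style of
Cognitive-Behavioral Therapy:
1. Identify the client's expressed emotion.
2. Infer the thought or appraisal behind that emotion.
3. Note any cognitive distortion the appraisal may involve.
Then write a brief empathic response grounded in that reasoning.

Client: {client_message}
"""

def build_coe_prompt(client_message: str) -> str:
    """Fill the CoE template with one client message."""
    return COE_CBT_PROMPT.format(client_message=client_message)

print(build_coe_prompt("I failed my exam and feel like I'll never succeed."))
```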