• Title/Abstract/Keyword: Emotion Classification


한글 글꼴 추천시스템을 위한 크라우드 방식의 감성 속성 적용 및 분석 (Application and Analysis of Emotional Attributes using Crowdsourced Method for Hangul Font Recommendation System)

  • 김현영;임순범
    • 한국멀티미디어학회논문지
    • /
    • Vol. 20, No. 4
    • /
    • pp.704-712
    • /
    • 2017
  • Research on the emotional qualities of digital content, including fonts, is being actively pursued. When composing text, there is a need to choose a font whose emotional impression harmonizes with the emotional content of the text itself. In Korea, however, selecting a font with a suitable emotion is difficult because each of the more than 6,000 available Hangul fonts carries its own emotional impression. In this paper, we analyze emotional classification attributes and build a Hangul font recommendation system on top of them. We also verify the reliability and validity of the attributes themselves for application to Hangul fonts, and then test whether general users can find a suitable font from a commercial font set through the emotional recommendation system. As a result, when users want to express the emotion of a sentence more visually, they can obtain a recommendation for a Hangul font with the desired emotion by using per-font emotion attribute values collected through a crowdsourced method.
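The recommendation step described above can be sketched as a nearest-neighbor lookup over per-font emotion attribute vectors. The font names, attribute names, and scores below are illustrative stand-ins, not the paper's crowdsourced data.

```python
import math

# Hypothetical crowdsourced emotion-attribute scores per font (0-10 scale);
# the attribute names and values are illustrative, not from the paper's data.
FONT_EMOTIONS = {
    "NanumGothic": {"soft": 6.2, "formal": 7.1, "playful": 2.3},
    "NanumPen":    {"soft": 8.5, "formal": 1.9, "playful": 8.8},
    "BatangChe":   {"soft": 3.1, "formal": 9.0, "playful": 1.2},
}

def recommend_font(desired, fonts=FONT_EMOTIONS):
    """Return the font whose attribute vector is closest (Euclidean)
    to the desired emotion profile."""
    def dist(attrs):
        return math.sqrt(sum((attrs[k] - desired[k]) ** 2 for k in desired))
    return min(fonts, key=lambda name: dist(fonts[name]))

print(recommend_font({"soft": 9.0, "formal": 2.0, "playful": 9.0}))  # NanumPen
```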

음성의 감성요소 추출을 통한 감성 인식 시스템 (The Emotion Recognition System through The Extraction of Emotional Components from Speech)

  • 박창현;심귀보
    • 제어로봇시스템학회논문지
    • /
    • Vol. 10, No. 9
    • /
    • pp.763-770
    • /
    • 2004
  • The key issues in emotion recognition from speech are feature extraction and pattern classification. Features should carry the information essential for discriminating emotions, so feature selection is needed to decompose the components of speech and analyze the relation between features and emotions. In particular, the pitch component of speech carries much emotional information. Accordingly, this paper examines the relation between emotions and features such as loudness and pitch, and classifies emotions using statistics of the collected data. Among the acoustic components, tone is the most important carrier of emotion, and the inferential ability of the brain also plays a part in emotion recognition. We empirically identify the emotional components of speech, run emotion recognition experiments, and propose a recognition method based on these emotional components and their transition probabilities.
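A minimal sketch of the statistics-of-pitch idea above: summarize a pitch contour by its statistics and pick the nearest per-emotion centroid. The contours and centroid values are illustrative assumptions, not the paper's data, and a real system would obtain the contour from a pitch tracker.

```python
import statistics

def pitch_features(contour):
    """Statistics of a pitch contour (Hz per frame) used as emotional components."""
    return (statistics.mean(contour), statistics.pstdev(contour), max(contour))

# Hypothetical per-emotion feature centroids estimated from collected data.
CENTROIDS = {
    "neutral": (140.0, 15.0, 180.0),
    "angry":   (220.0, 45.0, 320.0),
}

def classify(contour):
    """Nearest-centroid decision over the pitch statistics."""
    f = pitch_features(contour)
    return min(CENTROIDS,
               key=lambda e: sum((a - b) ** 2 for a, b in zip(f, CENTROIDS[e])))

print(classify([210, 230, 250, 200, 260]))  # high, variable pitch -> angry
```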

감정 인식을 위한 음성의 특징 파라메터 비교 (The Comparison of Speech Feature Parameters for Emotion Recognition)

  • 김원구
    • 한국지능시스템학회: Conference Proceedings
    • /
    • 한국퍼지및지능시스템학회 2004 Spring Conference Proceedings, Vol. 14, No. 1
    • /
    • pp.470-473
    • /
    • 2004
  • In this paper, speech feature parameters for emotion recognition are compared. For this purpose, a corpus of emotional speech, recorded and labeled by emotion through subjective evaluation, was used to build statistical feature vectors such as the mean, standard deviation, and maximum of pitch and energy. MFCC parameters and their derivatives, with and without cepstral mean subtraction, were also used to evaluate the performance of conventional pattern-matching algorithms. Pitch and energy parameters served as prosodic information and MFCC parameters as phonetic information. In the experiments, a vector-quantization-based system was used for speaker- and context-independent emotion recognition. The results showed that the recognizer using MFCC parameters outperformed the one using pitch and energy parameters, achieving a recognition rate of 73.3% for speaker- and context-independent classification.
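The vector-quantization recognizer above can be sketched as one codebook per emotion, classifying by minimum average quantization distortion. The 2-D codebook entries below stand in for MFCC vectors and are illustrative, not trained on real speech.

```python
def distortion(frames, codebook):
    """Average squared distance from each frame to its nearest codeword."""
    total = 0.0
    for f in frames:
        total += min(sum((a - b) ** 2 for a, b in zip(f, c)) for c in codebook)
    return total / len(frames)

# Hypothetical per-emotion codebooks (stand-ins for trained MFCC codewords).
CODEBOOKS = {
    "happy": [(1.0, 0.5), (0.8, 0.9)],
    "sad":   [(-1.0, -0.5), (-0.7, -0.9)],
}

def recognize(frames):
    """Pick the emotion whose codebook quantizes the utterance best."""
    return min(CODEBOOKS, key=lambda e: distortion(frames, CODEBOOKS[e]))

print(recognize([(0.9, 0.6), (1.1, 0.4)]))  # closer to the "happy" codebook
```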


얼굴 특징 변화에 따른 휴먼 감성 인식 (Human Emotion Recognition based on Variance of Facial Features)

  • 이용환;김영섭
    • 반도체디스플레이기술학회지
    • /
    • Vol. 16, No. 4
    • /
    • pp.79-85
    • /
    • 2017
  • Understanding human emotion is highly important in human-machine interaction. The most expressive way to extract and recognize a human's emotion is facial expression analysis. This paper presents and implements an automatic scheme for extracting and recognizing facial expression and emotion from still images. The method has three main steps: (1) detection of facial areas using a skin-color method and feature maps, (2) creation of Bezier curves on the eye map and mouth map, and (3) classification of the emotion using the Hausdorff distance between the characteristic curves. To estimate the performance of the implemented system, we evaluate the success ratio on an emotional face image database commonly used in the field of facial analysis. The experimental results show an average success rate of 76.1% in classifying facial expressions and emotions.
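Step (3) above relies on the Hausdorff distance between point sets. A minimal implementation, with toy mouth-curve sample points standing in for the paper's Bezier-curve features:

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets, as used to
    compare extracted feature curves (eye map / mouth map) across emotions."""
    def directed(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

smile   = [(0, 1), (1, 0), (2, 1)]        # toy mouth-curve sample points
neutral = [(0, 0.5), (1, 0.5), (2, 0.5)]

print(hausdorff(smile, smile))    # 0.0 -- identical curves
print(hausdorff(smile, neutral))  # 0.5
```

Classification then amounts to picking the template emotion whose curve set minimizes this distance to the observed curves.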


Word2Vec과 LSTM을 활용한 이별 가사 감정 분류 (Parting Lyrics Emotion Classification using Word2Vec and LSTM)

  • 임명진;박원호;신주현
    • 스마트미디어저널
    • /
    • Vol. 9, No. 3
    • /
    • pp.90-97
    • /
    • 2020
  • With the development of the Internet and smartphones, digital music has become easily accessible, and interest in music search and recommendation has grown accordingly. Existing music recommendation research classifies genre or emotion using melodic features such as pitch, tempo, and beat. However, since lyrics play an increasingly large role as a means of expressing human emotion in music, emotion classification research based on lyrics is needed. In this paper, we analyze the emotions of parting lyrics in order to subdivide the emotion of parting. We propose a parting-lyrics emotion classification method using Word2Vec and LSTM: the similarity between words appearing in parting lyrics is vectorized through Word2Vec training to build an emotion dictionary, and an LSTM is then trained on the lyrics to classify them into similar emotions.
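The emotion-dictionary step above can be sketched as assigning each word to the most cosine-similar seed emotion word. The tiny hand-written vectors below stand in for Word2Vec embeddings trained on parting lyrics; the LSTM classification stage is not sketched here.

```python
import math

# Toy 2-D word vectors standing in for trained Word2Vec embeddings.
VECS = {
    "이별": (0.9, 0.1), "그리움": (0.8, 0.3),
    "미움": (-0.7, 0.6), "원망": (-0.8, 0.5),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def build_emotion_dict(seeds, vocab=VECS):
    """Map every word to the seed emotion word it is most similar to,
    mimicking the similarity-based emotion-dictionary construction."""
    return {word: max(seeds, key=lambda s: cosine(vec, vocab[s]))
            for word, vec in vocab.items()}

print(build_emotion_dict(seeds=["그리움", "원망"]))
```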

SYMMER: A Systematic Approach to Multiple Musical Emotion Recognition

  • Lee, Jae-Sung;Jo, Jin-Hyuk;Lee, Jae-Joon;Kim, Dae-Won
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol. 11, No. 2
    • /
    • pp.124-128
    • /
    • 2011
  • Music emotion recognition is currently one of the most attractive research areas in music information retrieval. In order to use emotion as a clue when searching for a particular piece of music, music-based emotion recognition systems are fundamental, and their recognition accuracy is very important for maximizing user satisfaction. In this paper, we develop a new music emotion recognition system that employs a multilabel feature selector and a multilabel classifier. The performance of the proposed system is demonstrated on a novel musical emotion data set.
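Multilabel recognition means a clip may carry several emotions at once. One minimal sketch (a nearest-neighbour multilabel classifier, not necessarily the paper's method) lets a query inherit the label set of its closest training clip; features and labels below are illustrative.

```python
# Each training clip: (feature vector, set of emotion labels).
# Feature vectors are illustrative stand-ins for extracted audio features.
TRAIN = [
    ((0.9, 0.2), {"happy", "excited"}),
    ((0.1, 0.8), {"sad"}),
    ((0.5, 0.5), {"calm", "happy"}),
]

def predict_labels(x):
    """Return the label SET of the nearest training clip (1-NN multilabel)."""
    def d2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    _, labels = min(TRAIN, key=lambda item: d2(item[0], x))
    return labels

print(predict_labels((0.85, 0.25)))  # nearest to the first clip
```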

얼굴 감정을 이용한 시청자 감정 패턴 분석 및 흥미도 예측 연구 (A Study on Sentiment Pattern Analysis of Video Viewers and Predicting Interest in Video using Facial Emotion Recognition)

  • 조인구;공연우;전소이;조서영;이도훈
    • 한국멀티미디어학회논문지
    • /
    • Vol. 25, No. 2
    • /
    • pp.215-220
    • /
    • 2022
  • Emotion recognition is one of the most important and challenging areas of computer vision. Many studies on emotion recognition have been conducted and model performance keeps improving, but more research is needed on emotion recognition and sentiment analysis of video viewers. In this paper, we propose an emotion analysis system that includes a sentiment analysis model and an interest prediction model. We analyzed the emotional patterns of people watching popular and unpopular videos and predicted their level of interest using the system. Experimental results showed that certain emotions were strongly related to the popularity of videos and that the interest prediction model predicted the level of interest with high accuracy.

Music Similarity Search Based on Music Emotion Classification

  • Kim, Hyoung-Gook;Kim, Jang-Heon
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 26, No. 3E
    • /
    • pp.69-73
    • /
    • 2007
  • This paper presents an efficient algorithm for retrieving similar music files from a large archive of digital music. Users can navigate the archive and discover new music files that sound similar to a given query file. Since most methods for finding similar music files require computing the distance between the query file and every file in the database, they are very time-consuming. By measuring the acoustic distance only between pre-classified music files that share the same emotion type, the proposed method significantly speeds up the search and increases precision compared with the brute-force method.
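The speed-up idea above is a pre-filter: group the archive by emotion class and compute acoustic distances only within the query's class. A minimal sketch with illustrative song names and feature vectors:

```python
# Archive grouped by pre-classified emotion; vectors stand in for acoustic
# features. Names and values are illustrative.
ARCHIVE = {
    "calm":  {"song_a": (0.2, 0.1), "song_b": (0.3, 0.2)},
    "angry": {"song_c": (0.9, 0.8), "song_d": (0.8, 0.9)},
}

def most_similar(query_vec, query_emotion):
    """Search only inside the query's emotion class, not the whole archive."""
    candidates = ARCHIVE[query_emotion]          # emotion pre-filter
    def d2(v):
        return sum((a - b) ** 2 for a, b in zip(v, query_vec))
    return min(candidates, key=lambda name: d2(candidates[name]))

print(most_similar((0.25, 0.12), "calm"))  # only "calm" songs are compared
```

The distance computation shrinks from the whole database to one emotion class, which is where the reported speed-up over brute force comes from.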

Speech Emotion Recognition with SVM, KNN and DSVM

  • Hadhami Aouani ;Yassine Ben Ayed
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 23, No. 8
    • /
    • pp.40-48
    • /
    • 2023
  • Speech emotion recognition has become an active research theme in speech processing and in applications based on human-machine interaction. Our system is a two-stage approach consisting of feature extraction and a classification engine. First, two feature sets are investigated: the first extracts only 13 Mel-frequency cepstral coefficients (MFCC) from emotional speech samples, while the second fuses the MFCC features with three additional features: zero crossing rate (ZCR), Teager energy operator (TEO), and harmonic-to-noise rate (HNR). Second, two classification techniques, support vector machines (SVM) and k-nearest neighbors (k-NN), are compared, and we also investigate recent advances in machine learning, including deep kernel learning (DSVM). A large set of experiments is conducted on the Surrey Audio-Visual Expressed Emotion (SAVEE) dataset with seven emotions. The experiments showed good accuracy compared with previous studies.
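Of the classifiers above, k-NN is simple enough to sketch directly: majority vote among the k closest training points. The 2-D feature vectors and labels below are illustrative stand-ins for fused acoustic features extracted from SAVEE-style utterances.

```python
from collections import Counter

# Hypothetical training points: (feature vector, emotion label).
TRAIN = [
    ((1.0, 0.2), "anger"), ((0.9, 0.3), "anger"),
    ((0.1, 0.9), "sadness"), ((0.2, 0.8), "sadness"),
    ((0.5, 0.5), "neutral"),
]

def knn_predict(x, k=3):
    """Majority vote over the k nearest training points (squared Euclidean)."""
    def d2(u):
        return sum((a - b) ** 2 for a, b in zip(u, x))
    nearest = sorted(TRAIN, key=lambda item: d2(item[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(knn_predict((0.95, 0.25)))  # anger
```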

노년여성의 족저 형태에 따른 인솔 패턴 개발 연구 (Development of Insole Pattern Depending on the Footprint Shape of Elder Women)

  • 이지은;권영아
    • 한국감성과학회: Conference Proceedings
    • /
    • 한국감성과학회 2008 Fall Conference
    • /
    • pp.122-125
    • /
    • 2008
  • Although many researchers have studied foot shape and dimensions, applications of those studies have been lacking. The purpose of this study was to develop insole patterns for elderly women based on footprints. Discrepancies in the classification criteria among foot parameters complicate the classification of elderly women's foot soles, so a footprint-based technique was developed that classifies foot-sole types using several parameters simultaneously. Foot-sole data from static standing footprints were recorded from 48 elderly women, the factors of footprint shape were determined, and cluster analysis was applied to obtain individual foot-sole classifications. ANOVA, Duncan's analysis, frequency analysis, factor analysis, and cluster analysis were applied to the footprint data. The factors of footprint shape were classified into four types: foot length, sole slope, outside sole slope, and foot width. The foot-sole shapes were likewise classified into four types: long, short, outside-sloped, and toe-sloped.
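The cluster-analysis step above can be sketched with a plain k-means over footprint factor scores. The two-factor points and k=2 below are illustrative assumptions; the paper uses four factors and four sole types.

```python
def kmeans(points, centers, iters=10):
    """Plain k-means: assign points to nearest center, then recompute centers."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            groups[i].append(p)
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Toy footprint factors per subject, e.g. (foot length cm, sole-slope score).
points = [(22.0, 1.1), (22.5, 1.0), (25.5, 2.1), (26.0, 2.0)]
centers, groups = kmeans(points, centers=[(22.0, 1.0), (26.0, 2.0)])
print(groups)  # short/low-slope feet vs long/high-slope feet
```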
