• Title/Summary/Keyword: Emotion Classifier

Optimal Facial Emotion Feature Analysis Method based on ASM-LK Optical Flow (ASM-LK Optical Flow 기반 최적 얼굴정서 특징분석 기법)

  • Ko, Kwang-Eun;Park, Seung-Min;Park, Jun-Heong;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.4 / pp.512-517 / 2011
  • In this paper, we propose an Active Shape Model (ASM) and Lucas-Kanade (LK) optical-flow-based feature extraction and analysis method for analyzing emotional features in facial images. Since the facial emotion feature regions are described by the Facial Action Coding System, we construct feature-related shape models from combinations of landmarks and extract LK optical flow vectors at each landmark, centred on the pixels of the motion vector window. The facial emotion features are modelled as combinations of these optical flow vectors, and the emotional state of a facial image can then be estimated with a probabilistic technique such as a Bayesian classifier. We also extract optimal emotional features, those with high correlation between feature points and emotional states, using common spatial pattern (CSP) analysis in order to improve the efficiency and accuracy of the emotional feature extraction process.
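
A minimal sketch (not the authors' implementation) of the core idea: Lucas-Kanade optical flow is sampled at facial landmark positions and the resulting motion vectors feed a probabilistic (naive Bayes) emotion estimator. OpenCV and scikit-learn are assumed; the ASM landmark fitting and CSP feature selection steps are omitted, and all data are synthetic placeholders.

```python
# Hypothetical sketch: LK optical flow at facial landmarks + naive Bayes emotion estimation.
# ASM fitting and CSP selection are omitted; landmark coordinates are assumed given.
import cv2
import numpy as np
from sklearn.naive_bayes import GaussianNB

def landmark_flow(prev_gray, next_gray, landmarks):
    """Track landmark points between two grayscale frames with pyramidal Lucas-Kanade."""
    p0 = landmarks.astype(np.float32).reshape(-1, 1, 2)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None,
                                             winSize=(15, 15), maxLevel=2)
    flow = (p1 - p0).reshape(-1, 2)          # per-landmark motion vectors
    flow[status.ravel() == 0] = 0.0          # zero out points that were lost
    return flow.ravel()                      # feature vector: (dx, dy) per landmark

# Training: each sample is the concatenated flow of all landmarks for one expression onset.
# X_train: (n_samples, 2 * n_landmarks), y_train: emotion labels -- placeholder data here.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 2 * 68))
y_train = rng.choice(["happy", "angry", "surprise", "neutral"], size=40)

clf = GaussianNB().fit(X_train, y_train)     # probabilistic (Bayesian) emotion estimator
print(clf.predict(X_train[:3]))
```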

Classification between Intentional and Natural Blinks in Infrared Vision Based Eye Tracking System

  • Kim, Song-Yi;Noh, Sue-Jin;Kim, Jin-Man;Whang, Min-Cheol;Lee, Eui-Chul
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.601-607 / 2012
  • Objective: The aim of this study is to classify intentional and natural blinks in a vision-based eye tracking system. With this classification method, we expect that an eye tracking method can be designed that performs both navigation and selection interactions well. Background: Eye tracking is currently widely used to increase user immersion and interest by supporting natural user interfaces. Although conventional eye tracking systems handle navigation interaction well by tracking pupil movement, there is no established selection interaction method. Method: To determine the classification threshold between intentional and natural blinks, we performed an experiment in which eye images including intentional and natural blinks were captured from 12 subjects. By analyzing successive eye images, two features were collected: eye-closed duration and pupil size variation after eye opening. The classification threshold was then determined by SVM (Support Vector Machine) training. Results: Experimental results showed that the average detection accuracy for intentional blinks was 97.4% in a wearable eye tracking system environment. The detection accuracy in a non-wearable camera environment was 92.9% using the same SVM classifier. Conclusion: By combining the two features with an SVM, we could implement an accurate selection interaction method in a vision-based eye tracking system. Application: The results of this research may help to improve the efficiency and usability of vision-based eye tracking methods by supporting a reliable selection interaction scheme.
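
A minimal sketch of the described classification step, assuming scikit-learn: an SVM is trained on the two reported features (eye-closed duration and pupil-size variation after reopening). The feature values are synthetic placeholders, not the paper's measurements.

```python
# Hypothetical sketch: SVM separating intentional from natural blinks using the two
# features named in the abstract. Values below are synthetic, not the paper's data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Natural blinks: short closures, small pupil variation; intentional: longer, larger.
natural = np.column_stack([rng.normal(0.15, 0.05, 100), rng.normal(0.02, 0.01, 100)])
intentional = np.column_stack([rng.normal(0.60, 0.15, 100), rng.normal(0.10, 0.03, 100)])
X = np.vstack([natural, intentional])
y = np.array([0] * 100 + [1] * 100)          # 0 = natural, 1 = intentional

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, y)
print(model.predict([[0.7, 0.12], [0.1, 0.01]]))   # expected: intentional, natural
```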

Detection of Face Expression Based on Deep Learning (딥러닝 기반의 얼굴영상에서 표정 검출에 관한 연구)

  • Won, Chulho;Lee, Bub-ki
    • Journal of Korea Multimedia Society / v.21 no.8 / pp.917-924 / 2018
  • Recently, research using LBP and SVM has been performed as one of the image-based approaches to facial emotion recognition. LBP, introduced by Ojala et al., is widely used in the field of image recognition due to its high discriminative power, robustness to illumination change, and simple computation. In addition, CS (Center-Symmetric)-LBP, a modified form of LBP, is widely used for face recognition. In this paper, we propose a method to detect four facial expressions, expressionless (neutral), happiness, surprise, and anger, using a deep neural network. The validity of the proposed method is verified in terms of accuracy. Using the existing LBP feature parameters, it was confirmed that the method using the deep neural network is superior to methods using AdaBoost and SVM classifiers.
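
A hedged sketch of the LBP-plus-neural-network pipeline, assuming scikit-image and scikit-learn: a uniform-LBP histogram summarizes each face image and a small fully connected network classifies the four expressions. The paper's actual network architecture and dataset are not reproduced here.

```python
# Hypothetical sketch: uniform-LBP histogram features fed to a small fully connected
# network (an sklearn MLP stands in for the paper's unspecified deep architecture).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neural_network import MLPClassifier

def lbp_histogram(gray, P=8, R=1):
    """Normalized uniform-LBP histogram for one grayscale face image."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Placeholder training data: random 64x64 "faces" and four expression labels.
rng = np.random.default_rng(0)
X = np.array([lbp_histogram(rng.integers(0, 256, (64, 64)).astype(np.uint8))
              for _ in range(80)])
y = rng.choice(["neutral", "happiness", "surprise", "anger"], size=80)

net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X, y)
print(net.predict(X[:3]))
```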

Emotional Human Body Recognition by Using Extraction of Human Body from Image (인간의 움직임 추출을 이용한 감정적인 행동 인식 시스템 개발)

  • Song, Min-Kook;Joo, Young-Hoon;Park, Jin-Bae
    • Proceedings of the KIEE Conference / 2006.10c / pp.214-216 / 2006
  • Expressive faces and human body gestures are among the main non-verbal communication channels in human-human interaction. Understanding human emotions through body gestures is a necessary skill both for humans and for computers that interact with their human counterparts. Gesture analysis consists of several processes such as hand detection, feature extraction, and emotion recognition. Skin color information for tracking hand gestures is obtained from the face detection region. We reveal relationships between particular body movements and specific emotions by using an HMM (Hidden Markov Model) classifier. The performance of the emotional human body recognition system was evaluated experimentally.
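
A minimal sketch of HMM-based emotion classification from motion features, assuming the hmmlearn package: one Gaussian HMM is trained per emotion and a new sequence is assigned to the model with the highest log-likelihood. The motion sequences are synthetic placeholders, not the authors' hand-tracking features.

```python
# Hypothetical sketch: one Gaussian HMM per emotion, trained on motion-feature
# sequences, with classification by maximum log-likelihood.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

def make_sequences(offset, n_seq=10, length=30, dim=4):
    """Placeholder motion-feature sequences (e.g. hand position/velocity per frame)."""
    return [rng.normal(offset, 1.0, size=(length, dim)) for _ in range(n_seq)]

train = {"happy": make_sequences(0.0), "angry": make_sequences(3.0)}

models = {}
for emotion, seqs in train.items():
    X = np.vstack(seqs)
    lengths = [len(s) for s in seqs]
    models[emotion] = GaussianHMM(n_components=3, covariance_type="diag",
                                  n_iter=50, random_state=0).fit(X, lengths)

def classify(sequence):
    """Pick the emotion whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda e: models[e].score(sequence))

print(classify(rng.normal(3.0, 1.0, size=(30, 4))))   # expected: "angry"
```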

Automatic Emotion Classification of Music Signals Using MDCT-Driven Timbre and Tempo Features

  • Kim, Hyoung-Gook;Eom, Ki-Wan
    • The Journal of the Acoustical Society of Korea / v.25 no.2E / pp.74-78 / 2006
  • This paper proposes an effective method for classifying the emotions of music from its acoustic signals. Two feature sets, timbre and tempo, are extracted directly from the modified discrete cosine transform (MDCT) coefficients, which are the output of a partial MP3 (MPEG 1 Layer 3) decoder. Our tempo feature extraction method is based on long-term modulation spectrum analysis. To effectively combine these two feature sets with different time resolutions in an integrated system, a two-layer classifier based on the AdaBoost algorithm is used. In the first layer the MDCT-driven timbre features are employed; by adding the MDCT-driven tempo features in the second layer, the classification precision is improved dramatically.
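
A rough sketch of a two-layer AdaBoost arrangement, assuming scikit-learn. The paper does not spell out how the layers are coupled, so here the first layer is trained on timbre features alone and its class scores are appended to the timbre-plus-tempo input of the second layer; treat this coupling as an assumption. All features and labels are synthetic.

```python
# Hypothetical sketch of a two-layer AdaBoost scheme: layer 1 sees only timbre features,
# layer 2 sees timbre + tempo plus layer 1's decision scores.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
n = 200
timbre = rng.normal(size=(n, 20))            # e.g. MDCT-derived spectral statistics
tempo = rng.normal(size=(n, 5))              # e.g. modulation-spectrum tempo features
y = rng.choice(["calm", "sad", "happy", "angry"], size=n)

layer1 = AdaBoostClassifier(n_estimators=100, random_state=0).fit(timbre, y)
score1 = layer1.decision_function(timbre)    # per-class confidence from layer 1

layer2_input = np.hstack([timbre, tempo, score1])
layer2 = AdaBoostClassifier(n_estimators=100, random_state=0).fit(layer2_input, y)
print(layer2.predict(layer2_input[:3]))
```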

Intensified Sentiment Analysis of Customer Product Reviews Using Acoustic and Textual Features

  • Govindaraj, Sureshkumar;Gopalakrishnan, Kumaravelan
    • ETRI Journal / v.38 no.3 / pp.494-501 / 2016
  • Sentiment analysis incorporates natural language processing and artificial intelligence and has evolved into an important research area. Sentiment analysis of product reviews has been used in widespread applications to improve customer retention and business processes. In this paper, we propose a method for performing an intensified sentiment analysis of customer product reviews. The method involves the extraction of two feature sets from each customer product review: a set of acoustic features (representing emotions) and a set of lexical features (representing sentiments). These sets are then combined and used in a supervised classifier to predict the sentiments of customers. For our experimental evaluation, we use an audio speech dataset prepared from Amazon product reviews and downloaded from the YouTube portal.
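
A minimal sketch of the feature-level fusion described above, assuming scikit-learn: acoustic and lexical feature vectors for the same review are concatenated and passed to a single supervised classifier. Feature extraction itself (prosodic analysis, text vectorization) is assumed to happen elsewhere; the data below are placeholders.

```python
# Hypothetical sketch of early fusion: acoustic (emotion) and lexical (sentiment)
# feature vectors for one review are concatenated and fed to one supervised classifier.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n = 150
acoustic = rng.normal(size=(n, 12))          # e.g. pitch/energy statistics per review audio
lexical = rng.normal(size=(n, 300))          # e.g. bag-of-words vector of the transcript
y = rng.choice(["positive", "negative"], size=n)

X = np.hstack([acoustic, lexical])           # early fusion of the two feature sets
clf = LinearSVC().fit(X, y)
print(clf.predict(X[:5]))
```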

Emotional Human Body Recognition by Using Extraction of Human Body from Image (인간의 움직임 추출을 이용한 감정적인 행동 인식 시스템 개발)

  • Song, Min-Kook;Park, Jin-Bae;So, Je-Yoon;Joo, Young-Hoon
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2006.11a / pp.348-351 / 2006
  • Although the need for image-based emotion recognition technology is growing in many areas of society, it remains an unsolved problem because of the difficulty of the recognition process. Emotion recognition based on human movement has many possible applications, so the need for its development is increasing. A system that recognizes emotions from images is a composite system in which a wide variety of techniques are used. In this paper, we propose a new emotion recognition system based on previously studied motion extraction methods. The proposed system recognizes emotions by using a classifier identified through a Hidden Markov Model. To evaluate its performance, an evaluation database was constructed, and the performance of the proposed emotion recognition system was verified with it.

Emotion Recognition System Using Neural Networks in Textile Images (신경망을 이용한 텍스타일 영상에서의 감성인식 시스템)

  • Kim, Na-Yeon;Shin, Yun-Hee;Kim, Soo-Jeong;Kim, Jee-In;Jeong, Karp-Joo;Koo, Hyun-Jin;Kim, Eun-Yi
    • Journal of KIISE: Software and Applications / v.34 no.9 / pp.869-879 / 2007
  • This paper proposes a neural network based approach for automatic human emotion recognition in textile images. To investigate the correlation between emotion and pattern, a survey was conducted with 20 people, which showed that emotion is deeply affected by pattern. Accordingly, a neural network based classifier is used for recognizing the patterns included in textiles. In our system, two schemes are used for describing the pattern: a raw-pixel data extraction scheme using an auto-regressive method (RDES) and a wavelet transformed data extraction scheme (WTDES). To assess the validity of the proposed method, it was applied to recognizing human emotions in 100 textiles, and the results show that using WTDES gives better performance than using RDES: RDES produced an accuracy of 71%, while WTDES produced an accuracy of 90%. Although there are some differences according to the data extraction scheme, the proposed method shows an accuracy of 80% on average. These results confirm that our system has the potential to be applied in various domains such as the textile industry and e-business.
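
A hedged sketch of the wavelet route (WTDES-like), assuming PyWavelets and scikit-learn: sub-band energies from a 2-D wavelet decomposition describe the textile pattern and a small neural network maps them to emotion labels. The emotion vocabulary and data here are placeholders, not the authors' survey categories.

```python
# Hypothetical sketch: 2-D wavelet sub-band energies as textile-pattern features,
# classified with a small neural network (not the authors' exact pipeline).
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_features(gray, wavelet="db2", level=2):
    """Energy of each wavelet sub-band as a compact pattern descriptor."""
    coeffs = pywt.wavedec2(gray, wavelet, level=level)
    feats = [np.mean(np.square(coeffs[0]))]                 # approximation energy
    for detail_level in coeffs[1:]:
        feats.extend(np.mean(np.square(band)) for band in detail_level)
    return np.array(feats)

rng = np.random.default_rng(0)
X = np.array([wavelet_features(rng.random((64, 64))) for _ in range(60)])
y = rng.choice(["warm", "cool", "soft", "dynamic"], size=60)   # placeholder emotion labels

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```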

Automatic extraction of similar poetry for study of literary texts: An experiment on Hindi poetry

  • Prakash, Amit;Singh, Niraj Kumar;Saha, Sujan Kumar
    • ETRI Journal / v.44 no.3 / pp.413-425 / 2022
  • The study of literary texts is one of the earliest disciplines practiced around the globe. Poetry is artistic writing in which words are carefully chosen and arranged for their meaning, sound, and rhythm. Poetry usually has a broad and profound sense that makes it difficult to be interpreted even by humans. The essence of poetry is Rasa, which signifies mood or emotion. In this paper, we propose a poetry classification-based approach to automatically extract similar poems from a repository. Specifically, we perform a novel Rasa-based classification of Hindi poetry. For the task, we primarily used lexical features in a bag-of-words model trained using the support vector machine classifier. In the model, we employed Hindi WordNet, Latent Semantic Indexing, and Word2Vec-based neural word embedding. To extract the rich feature vectors, we prepared a repository containing 37 717 poems collected from various sources. We evaluated the performance of the system on a manually constructed dataset containing 945 Hindi poems. Experimental results demonstrated that the proposed model attained satisfactory performance.
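
A minimal sketch of the bag-of-words-plus-SVM route for Rasa classification, assuming scikit-learn. The Hindi WordNet, LSI, and Word2Vec enrichment described in the abstract is omitted, and the toy poems (given in English) and Rasa labels are placeholders only.

```python
# Hypothetical sketch: TF-IDF bag-of-words features + linear SVM for Rasa labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

poems = [
    "clouds gather and the peacock dances in joy",
    "the warrior stands alone, fury burning in his eyes",
    "tears fall silently by the empty riverbank",
    "laughter echoes through the festival courtyard",
]
rasas = ["shringara", "raudra", "karuna", "hasya"]   # example Rasa labels

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(poems, rasas)
print(model.predict(["her eyes filled with quiet sorrow"]))
```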

Multi-Time Window Feature Extraction Technique for Anger Detection in Gait Data

  • Beom Kwon;Taegeun Oh
    • Journal of the Korea Society of Computer and Information / v.28 no.4 / pp.41-51 / 2023
  • In this paper, we propose a multi-time window feature extraction technique for anger detection in gait data. In previous gait-based emotion recognition methods, the pedestrian's stride, time taken for one stride, walking speed, and forward tilt angles of the neck and thorax are calculated, and then the minimum, mean, and maximum values over the entire interval are used as features. However, each feature does not always change uniformly over the entire interval; it sometimes changes locally. Therefore, we propose a multi-time window feature extraction technique that can extract both global and local features, from long-term to short-term. In addition, we propose an ensemble model that consists of multiple classifiers, each trained with features extracted from a different time window. To verify the effectiveness of the proposed feature extraction technique and ensemble model, a public three-dimensional gait dataset was used. The simulation results demonstrate that the proposed ensemble model achieves the best performance on four evaluation metrics compared to machine learning models trained with existing feature extraction techniques.
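
A rough sketch of the multi-time-window idea, assuming scikit-learn: min/mean/max statistics are computed over several window lengths of a gait feature series, one classifier is trained per window length, and predictions are combined by majority vote. The windowing scheme, classifier choice, and data are assumptions, not the paper's exact setup.

```python
# Hypothetical sketch: per-window min/mean/max features and a majority-vote ensemble,
# one classifier per window length. Data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_stats(series, win):
    """Min/mean/max over consecutive windows of length `win`, concatenated per sample."""
    chunks = [series[:, i:i + win] for i in range(0, series.shape[1] - win + 1, win)]
    return np.hstack([np.column_stack([c.min(1), c.mean(1), c.max(1)]) for c in chunks])

rng = np.random.default_rng(0)
X_series = rng.normal(size=(120, 240))       # 120 walks, 240 frames of one gait feature
y = rng.choice([0, 1], size=120)             # 1 = angry, 0 = not angry

windows = [240, 120, 60, 30]                 # global interval down to short local windows
clfs = [RandomForestClassifier(random_state=0).fit(window_stats(X_series, w), y)
        for w in windows]

votes = np.array([clf.predict(window_stats(X_series, w))
                  for clf, w in zip(clfs, windows)])
ensemble_pred = (votes.mean(axis=0) >= 0.5).astype(int)   # majority vote (labels are 0/1)
print((ensemble_pred == y).mean())
```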