• Title/Summary/Keyword: classification of expressions


Emotion Recognition Based on Facial Expression by using Context-Sensitive Bayesian Classifier (상황에 민감한 베이지안 분류기를 이용한 얼굴 표정 기반의 감정 인식)

  • Kim, Jin-Ok
    • The KIPS Transactions:PartB
    • /
    • v.13B no.7 s.110
    • /
    • pp.653-662
    • /
    • 2006
  • In ubiquitous computing, which aims to build environments that provide appropriate services according to the user's context, emotion recognition based on facial expression is an essential means of HCI, making man-machine interaction more efficient and enabling context awareness. This paper addresses the problem of basic emotion recognition in context-sensitive facial expressions through a new Bayesian classifier. The task consists of two steps: a facial-feature extraction step based on a color-histogram method, and a classification step that employs a new Bayesian learning algorithm for efficient training and testing. A new context-sensitive Bayesian learning algorithm, EADF (Extended Assumed-Density Filtering), is proposed to recognize emotions more exactly by using different classifier complexities for different contexts. Experimental results show an expression classification accuracy of over 91% on the test database and an error rate of 10.6% when facial expression is modeled as a hidden context.
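The core idea of the abstract — a Bayesian classifier whose class models are conditioned on a context variable and combined at prediction time — can be sketched as follows. This is a toy per-context Gaussian naive Bayes, not the paper's EADF algorithm; the class name and data layout are illustrative assumptions.

```python
import numpy as np

class ContextSensitiveNB:
    """Toy Gaussian naive Bayes that keeps separate class-conditional
    statistics per context, then marginalizes over contexts at test time.
    (Illustrative only; not the paper's EADF algorithm.)"""

    def fit(self, X, y, contexts):
        self.params = {}  # (context, class) -> (mean, variance, joint prior)
        for c in set(contexts):
            mask_c = contexts == c
            for k in set(y[mask_c]):
                sel = mask_c & (y == k)
                self.params[(c, k)] = (
                    X[sel].mean(axis=0),
                    X[sel].var(axis=0) + 1e-6,  # smoothed variance
                    sel.sum() / len(y),         # p(context, class)
                )
        return self

    def predict(self, x):
        # Sum the joint p(x, class, context) over contexts for each class.
        scores = {}
        for (c, k), (mu, var, prior) in self.params.items():
            ll = -0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var))
            scores[k] = scores.get(k, 0.0) + prior * np.exp(ll)
        return max(scores, key=scores.get)
```

The paper's point is the `(context, class)` factoring: each context keeps its own (possibly differently complex) class model, and the final decision marginalizes over contexts rather than assuming one fixed model.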

A Recognition Framework for Facial Expression by Expression HMM and Posterior Probability (표정 HMM과 사후 확률을 이용한 얼굴 표정 인식 프레임워크)

  • Kim, Jin-Ok
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.11 no.3
    • /
    • pp.284-291
    • /
    • 2005
  • I propose a framework for detecting, recognizing, and classifying facial features based on learned expression patterns. The framework recognizes facial expressions using PCA and an expression HMM (EHMM), a Hidden Markov Model approach that represents both the spatial information and the temporal dynamics of time-varying visual expression patterns. Because low-level spatial feature extraction is fused with temporal analysis, a unified spatio-temporal HMM approach to the common detection, tracking, and classification problems is effective. Recognition is accomplished by applying the posterior probability between current visual observations and previous visual evidence. Consequently, the framework gives accurate and robust recognition results on both simple expressions and the six basic facial expression patterns. The method supports a set of important tasks such as facial-expression recognition, HCI, and key-frame extraction.
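The classification step described above — score an observation sequence under one HMM per expression and pick the maximum posterior — can be sketched with the standard scaled forward algorithm. This is a generic discrete-observation HMM sketch, not the paper's EHMM (which works on PCA features of face images); all names are illustrative.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under one HMM,
    via the scaled forward algorithm.
    pi: initial state probs, A: transition matrix, B: emission matrix."""
    alpha = pi * B[:, obs[0]]
    logp = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        logp += np.log(s)
        alpha /= s
    return logp

def classify(obs, hmms, priors):
    """Pick the expression whose HMM maximizes the posterior:
    p(expr | obs) ∝ p(obs | HMM_expr) * p(expr)."""
    scores = {e: forward_loglik(obs, *hmms[e]) + np.log(priors[e])
              for e in hmms}
    return max(scores, key=scores.get)
```

One HMM is trained per expression; at test time the sequence of visual observations is routed to whichever model explains it best under the prior evidence.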

Artificial Intelligence for Assistance of Facial Expression Practice Using Emotion Classification (감정 분류를 이용한 표정 연습 보조 인공지능)

  • Dong-Kyu, Kim;So Hwa, Lee;Jae Hwan, Bong
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.6
    • /
    • pp.1137-1144
    • /
    • 2022
  • In this study, an artificial intelligence (AI) system was developed to help users practice facial expressions for conveying emotions. The developed AI feeds multimodal inputs, consisting of sentences and facial images, into deep neural networks (DNNs) that compute the similarity between the emotion predicted from the sentence and the emotion predicted from the facial image. The user practices facial expressions for the situation given by a sentence, and the AI provides numerical feedback based on that similarity. A ResNet34 network was trained on the public FER2013 dataset to predict emotions from facial images. To predict emotions from sentences, a KoBERT model was fine-tuned via transfer learning on the conversational speech dataset for emotion classification released by AIHub. The DNN predicting emotions from facial images achieved 65% accuracy, comparable to human emotion-classification ability, and the DNN predicting emotions from sentences achieved 90% accuracy. The performance of the developed AI was evaluated through experiments in which an ordinary participant changed facial expressions.
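The feedback mechanism — compare the emotion distribution predicted from the sentence with the one predicted from the user's face and return a numerical score — can be sketched as below. The cosine measure, the 0-100 scaling, and the seven-class label set are assumptions for illustration; the paper's exact similarity function may differ.

```python
import numpy as np

# Seven emotion classes assumed here (the FER2013 label set).
EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutral"]

def practice_feedback(p_text, p_face):
    """Score how well the practiced facial expression matches the emotion
    implied by the situation sentence: cosine similarity between the two
    predicted emotion distributions, scaled to 0-100.
    (Illustrative; not necessarily the paper's exact measure.)"""
    p_text = np.asarray(p_text, dtype=float)
    p_face = np.asarray(p_face, dtype=float)
    cos = p_text @ p_face / (np.linalg.norm(p_text) * np.linalg.norm(p_face))
    return round(100 * cos, 1)
```

In use, `p_text` would come from the KoBERT-style sentence classifier and `p_face` from the ResNet34-style image classifier; the scalar score is the feedback shown to the practicing user.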

A Study on Textile Expression Technique Influenced by Primitivism shown in Fashion Design (원시주의(Primitivism)를 반영한 패션디자인에서의 소재표현기법 연구)

  • Kim, Jin-Young;Kan, Ho-Sup
    • Journal of Fashion Business
    • /
    • v.14 no.5
    • /
    • pp.112-127
    • /
    • 2010
  • Primitivism is a concept that expresses and organizes natural human feelings and resists rigid definition. It means staying in the beginning or initial state: not evolving or developing, and unaffected by human beings, remaining in an intact natural state. Based on this meaning, the artistic style features inherent natural beauty as well as plain, unadorned design, and these features have been reflected in a variety of art works. The aesthetic features of primitivist art can be categorized into four aspects: naturalness, folksiness, sentimentality, and humorousness. These features, influencing modern fashion, have been reinvented by a number of fashion designers. Designers have also adopted ideas from the clothes and ornaments created in the carefree lifestyles of regions retaining primitive cultures, such as Africa, Oceania, and the Pacific coasts, and applied those ideas to various silhouettes, colors, patterns, and textiles. In textile expression particularly, they have tried printing techniques using patterns motivated by primitive folk symbols or nature, applied objets of primitive materials and elaborate ornaments that convey folk and primitive feelings, and employed primitive techniques such as knotting, crude cutting, and natural draping, reinventing them as textile expressions in modern fashion.

Local Feature Based Facial Expression Recognition Using Adaptive Decision Tree (적응형 결정 트리를 이용한 국소 특징 기반 표정 인식)

  • Oh, Jihun;Ban, Yuseok;Lee, Injae;Ahn, Chunghyun;Lee, Sangyoun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39A no.2
    • /
    • pp.92-99
    • /
    • 2014
  • This paper proposes a facial expression recognition method based on a decision tree structure. From facial expression images, ASM (Active Shape Model) and LBP (Local Binary Pattern) extract local features of the expressions. Discriminant features derived from these local features classify every pairwise combination of two facial expressions. Based on the classification accuracy of each pair, the combination of facial expression pair and local region for each branch is decided, and integrating these branch classifications generates the decision tree. Facial expression recognition based on the resulting decision tree shows better recognition performance than a method that does not use it.

A Study on Satirical Expression of Animal Cartoon & Animated Cartoon (동물 만화영상의 풍자적 표현 연구)

  • Lee, Hwa-Ja
    • Cartoon and Animation Studies
    • /
    • s.9
    • /
    • pp.266-282
    • /
    • 2005
  • Cartoon and animated cartoon consist of visual and linguistic attributes and are closely connected with humor and satirical content. Expressions that use animals as subject matter communicate the satirical qualities of a satire strongly and easily. This article studies and analyzes techniques of satirical expression using animals in cartoon and animated-cartoon media. It briefly surveys the historical background of the medium, from the primitive cave paintings of the prehistoric age to various contemporary cartoon and animated-cartoon character industries, and organizes the types of literary expression and representation used as visual expression techniques, such as metaphorical expressions, emblematic expressions, and figures of speech. This attempt aims to present a basic method of analysis that connects and combines cartoon and animated-cartoon media with humanistic classification and builds a database of existing material. The accumulated data will illuminate cartoons and how they produce meaning.


Analysis of the Correlation between Narrative and Emotions Displayed by Movie Characters through a Quantitative Analysis of Dialogues in a Movie (영화 대사의 정량적 분석을 통한 등장인물의 감정과 서사간의 상관성 연구)

  • You, Eun-Soon
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.6
    • /
    • pp.95-107
    • /
    • 2013
  • Dialogue, a linguistic element of film, plays a critical role in building narrative structure. Yet analyses of movies mostly focus on images, given the medium's nature of conveying a story visually, while dialogue has been underestimated or received less attention despite its importance. This study calls attention to dialogue, which has remained on the periphery of film analysis, to see how it contributes to constructing a narrative. To this end, the study extracts the emotional expressions articulated by characters in their dialogue, classifies their polarity as affirmative or negative, and then quantitatively analyzes how the polarity proportion of emotional expressions changes with the narrative structure. The study also demonstrates the narrative's relevance to emotion by pointing to the dynamic emotional shifts between affirmation and negation that follow the incidents, conflicts, and resolutions throughout a movie.
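The quantitative step — tally the affirmative share of emotional expressions within each narrative segment — can be sketched as below. The segment names and the `(act, polarity)` input shape are assumptions for illustration, not the study's actual data format.

```python
from collections import defaultdict

def polarity_by_act(dialogues):
    """dialogues: iterable of (act, polarity) pairs, polarity in
    {'pos', 'neg'}. Returns the share of affirmative emotional
    expressions per narrative act — the quantity tracked across
    the narrative structure."""
    counts = defaultdict(lambda: [0, 0])  # act -> [pos, neg]
    for act, pol in dialogues:
        counts[act][0 if pol == "pos" else 1] += 1
    return {act: p / (p + n) for act, (p, n) in counts.items()}
```

Plotting these per-act ratios in story order is what exposes the affirmation/negation swings around incidents, conflicts, and resolutions.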

Feature-Oriented Adaptive Motion Analysis For Recognizing Facial Expression (특징점 기반의 적응적 얼굴 움직임 분석을 통한 표정 인식)

  • Noh, Sung-Kyu;Park, Han-Hoon;Shin, Hong-Chang;Jin, Yoon-Jong;Park, Jong-Il
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집)
    • /
    • 2007.02a
    • /
    • pp.667-674
    • /
    • 2007
  • Facial expressions provide significant clues about one's emotional state; however, recognizing facial expressions effectively and reliably has always been a great challenge for machines. In this paper, we report a method of feature-based adaptive motion energy analysis for recognizing facial expression. Our method optimizes the information-gain heuristics of an ID3 tree and introduces new approaches to (1) facial feature representation, (2) facial feature extraction, and (3) facial feature classification. We use the minimal reasonable set of facial features, suggested by the information-gain heuristics of the ID3 tree, to represent the geometric face model. Feature extraction proceeds as follows: features are first detected and then carefully "selected", where selection means separating features with high variability from those with low variability in order to estimate each feature's motion pattern effectively. Motion analysis is then performed adaptively for each facial feature; that is, each feature's motion pattern (from the neutral face to the expressed face) is estimated based on its variability. After feature extraction, the facial expression is classified using the ID3 tree (built from the 1,728 possible facial expressions) and test images from the JAFFE database. The proposed method overcomes the problems arising from previous methods. First of all, it is simple but effective: it reliably estimates the expressive facial features by separating high-variability features from low-variability ones. Second, it is fast, avoiding complicated or time-consuming computations and instead exploiting the motion energy values of a few selected expressive features (acquired from an intensity-based threshold). Lastly, the method gives reliable results, with an overall recognition rate of 77%, as demonstrated in the experimental results.
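The information-gain heuristic at the heart of ID3 — choose the feature whose split most reduces label entropy — can be sketched as follows. The feature names are illustrative; in the paper the discrete feature values would be motion-energy states of selected facial features.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label list."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, feature):
    """ID3 information gain of one discrete feature: H(labels) minus the
    weighted entropy of the label subsets after splitting on that
    feature's values."""
    by_val = {}
    for row, lab in zip(rows, labels):
        by_val.setdefault(row[feature], []).append(lab)
    remainder = sum(len(sub) / len(labels) * entropy(sub)
                    for sub in by_val.values())
    return entropy(labels) - remainder
```

ID3 builds the tree by greedily splitting on the highest-gain feature at each node; the paper additionally uses the same heuristic up front to keep only a minimal set of expressive facial features.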


Korean Facial Expression Emotion Recognition based on Image Meta Information (이미지 메타 정보 기반 한국인 표정 감정 인식)

  • Hyeong Ju Moon;Myung Jin Lim;Eun Hee Kim;Ju Hyun Shin
    • Smart Media Journal
    • /
    • v.13 no.3
    • /
    • pp.9-17
    • /
    • 2024
  • Owing to the recent pandemic and the development of ICT, the use of non-face-to-face and unmanned systems is expanding, and understanding emotions in non-face-to-face communication is very important. Since emotion recognition methods covering various facial expressions are required, artificial-intelligence research is being conducted to improve facial expression emotion recognition on image data. However, existing research requires high computing power and long training times because it relies on large amounts of data to improve accuracy. To address these limitations, this paper proposes a method of recognizing facial expressions from even a small amount of data by using age and gender, the image meta information. For facial expression emotion recognition, faces are detected in the original images using the Yolo Face model, age and gender are classified with a VGG model to obtain the image meta information, and seven emotions are then recognized using an EfficientNet model. Comparing the proposed meta-information-based data classification model with a model trained on all the data showed that the proposed model achieves higher accuracy.
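The pipeline shape — detect a face, classify its meta information (age group, gender), then run an emotion model selected for that meta group — can be sketched as below. All callables are placeholders standing in for the paper's Yolo Face, VGG, and EfficientNet components; the per-group routing is one plausible reading of "meta-information-based data classification" and is labeled as an assumption.

```python
def recognize_emotion(image, detect_face, classify_meta, emotion_models):
    """Sketch of a meta-information-routed pipeline (illustrative only):
    1. detect_face(image) -> face crop, or None if no face is found
    2. classify_meta(face) -> (age_group, gender) meta information
    3. dispatch the crop to the emotion model for that meta group.
    The real system uses Yolo Face, VGG, and EfficientNet respectively."""
    face = detect_face(image)
    if face is None:
        return None
    age_group, gender = classify_meta(face)
    return emotion_models[(age_group, gender)](face)
```

Splitting the training data by meta group is what lets each emotion model stay small and train on few samples, which is the abstract's stated goal.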

Comparisons of C-kit, DOG1, CD34, PKC-θ and PDGFR-α Expressions in Gastrointestinal Stromal Tumors According to Histopathological Risk Classification

  • Kim, Ki-Sung;Song, Hye-Jung;Shin, Won-Sub;Song, Kang-Won
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.43 no.2
    • /
    • pp.48-56
    • /
    • 2011
  • Gastrointestinal stromal tumor (GIST) is a mesenchymal tumor associated with a specific immunophenotype index, and identifying that immunophenotype is very important for the diagnosis and treatment of GIST patients. Ninety-two cases of GIST analyzed in this study were immunostained for c-kit, DOG1, CD34, PKC-θ, and PDGFR-α, and the rates of positive staining and their statistical significance were compared. In addition, the GISTs were classified as very low risk, low risk, intermediate risk, or high risk according to tumor size and nuclear division, and these classes were then correlated with clinical parameters. The positive staining rates were: DOG1 (95.7%), PKC-θ (90.2%), PDGFR-α (88.0%), c-kit (87.0%), and CD34 (71.7%); only DOG1 staining showed statistical significance (p<0.05). Within the histologic risk classification system, the staining expression of DOG1, PKC-θ, and PDGFR-α increased significantly as histologic risk increased (p<0.05), whereas clinical parameters such as patient age and sex showed no correlation with the histologic risk classification (p>0.05). Taken together, DOG1 staining should be very effective for the diagnosis of GIST patients.
