• Title/Summary/Keyword: facial expressions

Search Results: 323

Analyzing facial expression of a learner in e-Learning system (e-Learning에서 나타날 수 있는 학습자의 얼굴 표정 분석)

  • Park, Jung-Hyun;Jeong, Sang-Mok;Lee, Wan-Bok;Song, Ki-Sang
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2006.05a
    • /
    • pp.160-163
    • /
    • 2006
  • If an instruction system could gauge a learner's interest and engagement in real time, it could introduce motivating elements when the learner grows tired of learning, and act as an adaptive tutoring system that helps the learner through difficult material. Current facial expression recognition research mainly addresses the expressions of adults, focusing on anger, hatred, fear, sadness, surprise and gladness. These everyday expressions are not necessarily the expressions a learner shows in e-Learning. To recognize a learner's feelings, the facial expressions of learners in e-Learning must first be studied: as many expression images as possible should be collected and the meaning of each expression examined. As a preliminary study, this paper analyzes learners' feelings in e-Learning and the facial expressions associated with those feelings, in order to establish a facial expression database.


Analysis and synthesis of facial expressions in knowledge-based image coding (지적화상부호화에 있어서 표정분석과 합성)

  • Harashima, Hiroshi;Takebe, Tsyosi
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1989.10a
    • /
    • pp.451-456
    • /
    • 1989
  • A new image coding system for facial images, called 'knowledge-based image coding', is described, in which the input image is analyzed and the output image is synthesized from the analysis results. Methods for the analysis and synthesis of facial expressions are presented. The synthesis rules are based on the facial muscles, and are also used in the analysis process to produce a faithful reconstruction of the original image. A number of examples are shown.


Interactive Facial Expression Animation of Motion Data using Sammon's Mapping (Sammon 매핑을 사용한 모션 데이터의 대화식 표정 애니메이션)

  • Kim, Sung-Ho
    • The KIPS Transactions: Part A
    • /
    • v.11A no.2
    • /
    • pp.189-194
    • /
    • 2004
  • This paper describes a method for distributing high-dimensional facial expression motion data over a two-dimensional space, and for creating facial expression animation in real time as an animator navigates that space and selects the desired expressions. The expression space is built from about 2,400 facial expression frames and is completed by determining the shortest distance between every pair of expressions. As a manifold space, the expression space approximates the distance between two points as follows: an expression state vector describing each expression is defined from a distance matrix over the facial markers, and when two expressions are adjacent, their distance is taken as an approximation of the shortest distance between them. Once the adjacency distances between neighboring expressions are determined, they are chained together to yield the shortest distance between any two expression states, using the Floyd algorithm. To realize this high-dimensional expression space, it is projected onto two dimensions using Sammon's mapping. Facial animation is then created in real time as the animator navigates the two-dimensional space through the user interface.
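The shortest-distance step described in the abstract can be sketched with the Floyd-Warshall all-pairs shortest-path algorithm. The sketch below is a minimal NumPy illustration on a toy adjacency matrix; the four "expression frames" and their distances are invented for illustration, not taken from the paper's data.

```python
import numpy as np

def floyd_warshall(adj):
    """All-pairs shortest paths over an adjacency-distance matrix.
    adj[i, j] holds the direct distance between neighbouring expression
    frames, and np.inf where two frames are not adjacent."""
    d = adj.copy()
    n = d.shape[0]
    for k in range(n):
        # Relax every pair (i, j) through intermediate frame k.
        d = np.minimum(d, d[:, k:k+1] + d[k:k+1, :])
    return d

# Toy graph of 4 "expression frames": a 0-1-2-3 chain plus a long 0-3 edge.
inf = np.inf
adj = np.array([[0.0, 1.0, inf, 10.0],
                [1.0, 0.0, 1.0, inf],
                [inf, 1.0, 0.0, 1.0],
                [10.0, inf, 1.0, 0.0]])
dist = floyd_warshall(adj)
print(dist[0, 3])  # 3.0 — the chained path 0-1-2-3 beats the direct edge
```

The resulting `dist` matrix plays the role of the manifold distances that the paper feeds into Sammon's mapping.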

3D Emotional Avatar Creation and Animation using Facial Expression Recognition (표정 인식을 이용한 3D 감정 아바타 생성 및 애니메이션)

  • Cho, Taehoon;Jeong, Joong-Pill;Choi, Soo-Mi
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.9
    • /
    • pp.1076-1083
    • /
    • 2014
  • We propose an emotional facial avatar that portrays the user's facial expressions with an emotional emphasis, while achieving visual and behavioral realism. This is achieved by unifying automatic analysis of facial expressions and animation of realistic 3D faces with details such as facial hair and hairstyles. To augment facial appearance according to the user's emotions, we use emotional templates representing typical emotions in an artistic way, which can be easily combined with the skin texture of the 3D face at runtime. Hence, our interface gives the user vision-based control over facial animation of the emotional avatar, easily changing its moods.
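The runtime combination of an emotional template with the skin texture of the 3D face can be illustrated with a simple per-pixel alpha blend. The paper does not publish its compositing code, so the function name, the patch sizes, and the opacity value below are purely illustrative assumptions.

```python
import numpy as np

def blend_template(skin, template, alpha):
    """Per-pixel alpha blend of an emotional template over a skin texture.
    alpha = 0 keeps the skin unchanged; alpha = 1 shows only the template."""
    return (1.0 - alpha) * skin + alpha * template

skin = np.full((2, 2, 3), 0.8)       # toy light skin texture
template = np.zeros((2, 2, 3))       # toy dark "anger" shading
alpha = np.full((2, 2, 1), 0.25)     # 25% template opacity everywhere
out = blend_template(skin, template, alpha)
print(out[0, 0, 0])  # 0.6 = 0.75 * 0.8
```

Because the blend is a cheap per-pixel operation, it can run every frame, which matches the abstract's claim that templates are combined with the skin texture at runtime.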

Emotional Expression of the Virtual Influencer "Luo Tianyi(洛天依)" in Digital

  • Guangtao Song;Albert Young Choi
    • International Journal of Advanced Culture Technology
    • /
    • v.12 no.2
    • /
    • pp.375-385
    • /
    • 2024
  • In the context of contemporary digital media, virtual influencers have become an increasingly important form of socialization and entertainment, in which emotional expression is a key factor in attracting viewers. In this study, we take Luo Tianyi, a Chinese virtual influencer, as an example to explore how emotions are expressed and perceived through facial expressions in different types of videos. Using Paul Ekman's Facial Action Coding System (FACS) and six basic emotion classifications, the study systematically analyzes Luo Tianyi's emotional expressions in three types of videos, namely Music show, Festivals and Brand Cooperation. During the study, Luo Tianyi's facial expressions and emotional expressions were analyzed through rigorous coding and categorization, as well as matching the context of the video content. The results show that Enjoyment is the most frequently expressed emotion by Luo Tianyi, reflecting the centrality of positive emotions in content creation. Meanwhile, the presence of other emotion types reveals the virtual influencer's efforts to create emotionally rich and authentic experiences. The frequency and variety of emotions expressed in different video genres indicate Luo Tianyi's diverse strategies for communicating and connecting with viewers in different contexts. The study provides an empirical basis for understanding and utilizing virtual influencers' emotional expressions, and offers valuable insights for digital media content creators to design emotional expression strategies. Overall, this study is valuable for understanding the complexity of virtual influencer emotional expression and its importance in digital media strategy.

Feature-Oriented Adaptive Motion Analysis For Recognizing Facial Expression (특징점 기반의 적응적 얼굴 움직임 분석을 통한 표정 인식)

  • Noh, Sung-Kyu;Park, Han-Hoon;Shin, Hong-Chang;Jin, Yoon-Jong;Park, Jong-Il
    • The HCI Society of Korea: Conference Proceedings
    • /
    • 2007.02a
    • /
    • pp.667-674
    • /
    • 2007
  • Facial expressions provide significant clues about one's emotional state; however, recognizing facial expressions effectively and reliably has always been a great challenge for machines. In this paper, we report a method of feature-based adaptive motion energy analysis for recognizing facial expressions. Our method builds on the information gain heuristics of the ID3 tree and introduces new approaches to (1) facial feature representation, (2) facial feature extraction, and (3) facial feature classification. We use the minimal reasonable set of facial features, suggested by the information gain heuristics of the ID3 tree, to represent the geometric face model. For feature extraction, our method proceeds as follows. Features are first detected and then carefully selected: selection means separating the features with high variability from those with low variability, so that each feature's motion pattern can be estimated effectively. Motion analysis is then performed adaptively for each facial feature; that is, each feature's motion pattern (from the neutral face to the expressed face) is estimated based on its variability. After feature extraction, the facial expression is classified using the ID3 tree (built from the 1,728 possible facial expressions) and test images from the JAFFE database. The proposed method overcomes the problems raised by previous methods. First of all, it is simple but effective: it reliably estimates the expressive facial features by differentiating features with high variability from those with low variability. Second, it is fast, avoiding complicated or time-consuming computations and instead exploiting the motion energy values of a few selected expressive features (acquired from an intensity-based threshold). Lastly, our method gives a reliable overall recognition rate of 77%. The effectiveness of the proposed method is demonstrated by the experimental results.

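The information-gain heuristic that the abstract uses to pick facial features can be sketched as follows. The toy labels and the binary "mouth open" feature below are invented for illustration; the paper's actual feature set and label space are different.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, feature_values):
    """Entropy reduction from splitting the label set by one feature,
    i.e. the quantity ID3 maximizes when choosing a split."""
    n = len(labels)
    split = {}
    for v, y in zip(feature_values, labels):
        split.setdefault(v, []).append(y)
    remainder = sum(len(part) / n * entropy(part) for part in split.values())
    return entropy(labels) - remainder

# Toy data: does a "mouth open" feature separate smile from neutral?
labels = ["smile", "smile", "neutral", "neutral"]
mouth_open = [1, 1, 0, 0]
print(information_gain(labels, mouth_open))  # 1.0 — a perfect split
```

A feature with high information gain is exactly the kind of "minimal reasonable facial feature" the abstract says the ID3 heuristic suggests.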

A Study on the Emoticon Extraction based on Facial Expression Recognition using Deep Learning Technique (딥 러닝 기술 이용한 얼굴 표정 인식에 따른 이모티콘 추출 연구)

  • Jeong, Bong-Jae;Zhang, Fan
    • Korean Journal of Artificial Intelligence
    • /
    • v.5 no.2
    • /
    • pp.43-53
    • /
    • 2017
  • In this paper, a scheme for suggesting an emoticon matching the user's facial expression is proposed, using an Android intelligent device to identify the expression. Understanding and expressing emotion are very important to human-computer interaction, and technology that identifies human expressions is very popular. Instead of having users search for the emoticons they often use, facial expressions can be identified with a camera, which is a useful technique available now. This thesis uses a third-party dataset available on the web to train the facial expression recognition model and improve its accuracy, applying a neural network algorithm; the resulting model matches the user's facial expression to similar expressions with 66% accuracy. No emoticon search is needed: when the camera recognizes the expression, the corresponding emoticon appears immediately. This service therefore offers considerable convenience for the emoticons people use when sending messages, since there is no need to hunt through countless emoticons. Expression recognition is a growing application of deep learning, so a more suitable recognition algorithm should be adopted to further improve accuracy.

Interactive Realtime Facial Animation with Motion Data (모션 데이터를 사용한 대화식 실시간 얼굴 애니메이션)

  • Kim, Sung-Ho
    • Journal of the Korea Computer Industry Society
    • /
    • v.4 no.4
    • /
    • pp.569-578
    • /
    • 2003
  • This paper presents a method in which the user produces real-time facial animation by navigating a space of facial expressions created from a large number of captured expressions. The core of the method is defining the distance between facial expressions, using it to distribute them in a suitable intuitive space, and providing a user interface for generating real-time facial expression animation in that space. We created the search space from about 2,400 captured facial expression frames; as the user travels freely through the space, the facial expressions located on the path are displayed in sequence. To distribute the roughly 2,400 captured expressions visually, the distance between every pair of frames must be calculated: we use Floyd's algorithm to obtain the all-pairs shortest paths between frames, and from them the manifold distances. The frames are then distributed in a two-dimensional intuitive space by applying multidimensional scaling to the manifold distances between expression frames, preserving the original distances between them. A great advantage of the presented method is that the user can navigate freely, without restriction, to generate facial expression animation, since there are always expression frames to navigate within the intuitive space. It is also very efficient, as the easy-to-use interface lets the user confirm and regenerate the desired real-time animation.

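The distance-preserving 2D projection described in the abstract can be illustrated with classical multidimensional scaling, which embeds points from a pairwise-distance matrix alone. The four toy points below are invented for illustration; for genuinely Euclidean distances, classical MDS recovers them exactly up to rotation.

```python
import numpy as np

def classical_mds(D, dims=2):
    """Embed points in `dims` dimensions from a pairwise-distance matrix D,
    preserving the distances as well as possible (classical MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centred Gram matrix
    w, V = np.linalg.eigh(B)                  # eigenvalues ascending
    idx = np.argsort(w)[::-1][:dims]          # keep the largest ones
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Four points on a unit square; their distances are recovered exactly.
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
X = classical_mds(D)
D2 = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(np.allclose(D, D2))  # True
```

In the paper's pipeline the input distances are the manifold (shortest-path) distances between expression frames rather than raw Euclidean ones, and the embedded 2D coordinates form the navigable intuitive space.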

Facial Expression Classification through Covariance Matrix Correlations

  • Odoyo, Wilfred O.;Cho, Beom-Joon
    • Journal of information and communication convergence engineering
    • /
    • v.9 no.5
    • /
    • pp.505-509
    • /
    • 2011
  • This paper attempts to classify known facial expressions and to establish the correlations between two regions (eyes + eyebrows and mouth) in identifying the six prototypic expressions. Covariance is used to describe the region texture that captures facial features for classification; the captured texture exhibits the patterns observed during the execution of particular expressions. Feature matching is done by a simple distance measure between the probe and the modeled representations of the eye and mouth components. We use the JAFFE database in this experiment to validate our claim. A high classification rate is observed for the mouth component and for the correlation between the two (eye and mouth) components; the eye component exhibits a lower classification rate when used independently.
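A region-covariance descriptor with a simple distance measure, as described in the abstract, can be sketched as follows. The per-pixel feature set (coordinates, intensity, gradient magnitudes) and the Frobenius-norm distance are common choices for covariance descriptors, not necessarily the paper's exact ones, and the random patches stand in for real eye/mouth regions.

```python
import numpy as np

def region_covariance(patch):
    """Covariance descriptor of a grayscale image region.
    Per-pixel features: (x, y, intensity, |dI/dy|, |dI/dx|)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(patch.astype(float))
    F = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                  np.abs(gy).ravel(), np.abs(gx).ravel()])
    return np.cov(F)                     # 5x5 symmetric descriptor

def descriptor_distance(c1, c2):
    """Simple Frobenius-norm distance between two covariance descriptors."""
    return np.linalg.norm(c1 - c2)

rng = np.random.default_rng(0)
mouth_a = rng.random((8, 12))
mouth_b = mouth_a + 0.01 * rng.random((8, 12))   # nearly identical region
mouth_c = rng.random((8, 12))                    # unrelated region
d_same = descriptor_distance(region_covariance(mouth_a),
                             region_covariance(mouth_b))
d_diff = descriptor_distance(region_covariance(mouth_a),
                             region_covariance(mouth_c))
print(d_same, d_diff)
```

Classification then amounts to assigning the probe region to the modeled expression whose descriptor is nearest under this distance.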

The Implementation and Analysis of Facial Expression Customization for a Social Robot (소셜 로봇의 표정 커스터마이징 구현 및 분석)

  • Jiyeon Lee;Haeun Park;Temirlan Dzhoroev;Byounghern Kim;Hui Sung Lee
    • The Journal of Korea Robotics Society
    • /
    • v.18 no.2
    • /
    • pp.203-215
    • /
    • 2023
  • Social robots, which are mainly used by individuals, place more emphasis on human-robot relationships (HRR) than other types of robots do. Emotional expression in robots is one of the key factors that give HRR value, and emotions are mainly expressed through the face. However, because of cultural and preference differences, the desired robot facial expressions differ subtly depending on the user. We expected that a robot facial expression customization tool might mitigate such difficulties and consequently improve HRR. To prove this, we created a robot facial expression customization tool and a prototype robot, and implemented a suitable emotion engine for generating robot facial expressions in a dynamic human-robot interaction setting. In our experiments, users agreed that the availability of a customized version of the robot has a more positive effect on HRR than a predefined version. We also suggest recommendations for future improvements of the robot facial expression customization process.