• Title/Summary/Keyword: Face expression


The Implementation and Analysis of Facial Expression Customization for a Social Robot (소셜 로봇의 표정 커스터마이징 구현 및 분석)

  • Jiyeon Lee;Haeun Park;Temirlan Dzhoroev;Byounghern Kim;Hui Sung Lee
    • The Journal of Korea Robotics Society / v.18 no.2 / pp.203-215 / 2023
  • Social robots, which are mainly used by individuals, place more emphasis on human-robot relationships (HRR) than other types of robots do. Emotional expression is one of the key factors that imbue HRR with value, and emotions are mainly expressed through the face. However, because of cultural and preference differences, the desired robot facial expressions differ subtly from user to user. We expected that a facial expression customization tool might mitigate these difficulties and consequently improve HRR. To test this, we created a robot facial expression customization tool and a prototype robot, and implemented an emotion engine suitable for generating robot facial expressions in a dynamic human-robot interaction setting. In our experiments, users agreed that a customized version of the robot had a more positive effect on HRR than a predefined version. We also suggest recommendations for future improvements to the facial expression customization process.
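The abstract does not detail the emotion engine, so the following is only a minimal Python sketch of the general idea: an emotion state that decays toward neutral on each update and is mapped to facial parameters that a customization tool could rescale per user. Every name and constant here is a hypothetical illustration, not the authors' implementation.

```python
# Hypothetical emotion-engine sketch; not the paper's implementation.
from dataclasses import dataclass, field

EMOTIONS = ["joy", "sadness", "anger", "surprise"]

@dataclass
class EmotionEngine:
    # Intensities in [0, 1] per emotion; the robot starts neutral.
    state: dict = field(default_factory=lambda: {e: 0.0 for e in EMOTIONS})
    decay: float = 0.9  # per-tick decay back toward the neutral face

    def stimulate(self, emotion: str, intensity: float) -> None:
        """Blend an interaction event (e.g., praise, touch) into the state."""
        self.state[emotion] = min(1.0, self.state[emotion] + intensity)

    def tick(self) -> dict:
        """Advance one step and return facial parameters for rendering."""
        for e in EMOTIONS:
            self.state[e] *= self.decay
        # A user's customization could rescale each mapping below.
        return {
            "mouth_curve": self.state["joy"] - self.state["sadness"],
            "brow_angle": -self.state["anger"],
            "eye_open": 0.5 + 0.5 * self.state["surprise"],
        }

engine = EmotionEngine()
engine.stimulate("joy", 0.8)
print(engine.tick())  # e.g., mouth_curve is about 0.72 after one decay tick
```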

Japanese Political Interviews: The Integration of Conversation Analysis and Facial Expression Analysis

  • Kinoshita, Ken
    • Asian Journal for Public Opinion Research / v.8 no.3 / pp.180-196 / 2020
  • This paper examines Japanese political interviews by integrating conversation analysis with facial expression analysis. The behaviors of political leaders are disclosed by analyzing questions and responses with the turn-taking system of conversation analysis; in addition, audiences who cannot fully interpret verbal expressions alone can understand the psychology of political leaders through analysis of their facial expressions. Integrated analyses promote understanding of the types of facial and verbal expressions politicians use and their effect on public opinion. Politicians have distinctive techniques for persuading people; if audiences do not recognize these techniques and expressive strategies, they may be misled, and politics may slide into populism as a result. Avoiding this requires a full understanding of both verbal and non-verbal behavior. The paper presents two analyses. The first is a qualitative analysis of Prime Minister Shinzō Abe, which shows discrepancies between his words and his happy facial expressions; it indicates that Abe displays disgusted facial expressions when an interviewer repeats the same question. The second is a quantitative multiple regression analysis in which the dependent variables are six facial expressions (happy, sad, angry, surprised, scared, and disgusted) and the independent variable is whether the politician faces a threat to their face (i.e., their public image). Political interviews, which inform audiences directly, are used by politicians as a tool and play an important role in molding public opinion: watching them shapes support for a party and contributes to voters' decisions in a coming election.
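For readers who want to see the shape of the quantitative analysis, here is a hedged Python sketch: one ordinary least squares regression per expression, with a binary predictor for whether the question threatens the politician's face. The data are random placeholders, not the study's measurements.

```python
# Placeholder regression sketch mirroring the design described above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200  # hypothetical number of interview segments
face_threat = rng.integers(0, 2, size=n).astype(float)  # 1 = face-threatening
X = sm.add_constant(face_threat)

for expression in ["happy", "sad", "angry", "surprised", "scared", "disgusted"]:
    y = rng.random(n)  # placeholder expression-intensity scores per segment
    fit = sm.OLS(y, X).fit()
    print(expression, fit.params[1], fit.pvalues[1])  # slope for face threat
```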

Emotion Recognition and Expression System of Robot Based on 2D Facial Image (2D 얼굴 영상을 이용한 로봇의 감정인식 및 표현시스템)

  • Lee, Dong-Hoon;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.4 / pp.371-376 / 2007
  • This paper presents an emotion recognition and expression system for an intelligent robot such as a home or service robot. The robot recognizes the user's emotion from 2D facial images, using the motion and position of multiple facial features. A tracking algorithm recognizes a moving user from the mobile robot, and a facial-region detection algorithm removes the skin color of the hands and the background outside the facial region from the captured user image. After normalization operations, which enlarge or reduce the image according to the distance of the detected facial region and rotate it according to the angle of the face, the mobile robot obtains a facial image of fixed size. A multi-feature selection algorithm then enables the robot to recognize the user's emotion. We use a multilayer perceptron (MLP) artificial neural network (ANN) as the pattern recognizer and the back-propagation (BP) algorithm for learning. The emotion recognized by the robot is expressed on a graphic LCD: two coordinate values change according to the emotion output of the ANN, and the parameters of the facial elements (eyes, eyebrows, mouth) change with those coordinates. With this system, complex human emotions are expressed through the LCD avatar.
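As a rough illustration of the recognition stage only, the sketch below trains a multilayer perceptron with back-propagation on facial-feature vectors, as the abstract describes; the feature extraction is taken as given, and the dimensions, class labels, and data are placeholder assumptions.

```python
# Placeholder MLP sketch for the pattern-recognition stage.
import numpy as np
from sklearn.neural_network import MLPClassifier  # trained via back-propagation

rng = np.random.default_rng(1)
X = rng.random((300, 12))         # hypothetical 12-D facial-feature vectors
y = rng.integers(0, 4, size=300)  # e.g., 0=neutral, 1=happy, 2=sad, 3=angry

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, y)
emotion = clf.predict(X[:1])[0]   # the recognized class would drive the avatar
print(emotion)
```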

A Study on the Spatial Characteristics in the Tectonic of the Barcelona Pavilion (바로셀로나 파빌리온의 구축적 공간 특성에 관한 연구)

  • 양재혁
    • Korean Institute of Interior Design Journal / no.33 / pp.19-26 / 2002
  • This study analyzes the characteristics of spatial expression in the Barcelona Pavilion from the standpoint of tectonics. Mies emphasized the image of materiality of each material rather than the tectonic process of using it, and he also expressed dematerialization through the reflective images of the materials. The wall was introduced to be liberated from structural duties; Mies intended the design to read as an independent structural system, yet the wall appears to be supporting the roof, a self-contradiction made evident by the expressed materiality of the material. Among the architectural elements (wall, roof, column, floor, and so forth), tectonic expression and abstract aesthetics confront each other, because the productive process is hidden and the elements are transformed into lines and surfaces within the image of materiality. Seen from the exterior, the interior behind the glass wall appears fairly closed because of the materiality and reflectivity of the columns and the podium. Although the geometrical space of the Pavilion's plan is mutually penetrating and organic in character, the experiential space is inconsistent and fragmentary because of the splendid images produced by materiality and reflection on the walls, and because of the collision between reality and the images the walls reflect.

Recognizing Human Facial Expressions and Gesture from Image Sequence (연속 영상에서의 얼굴표정 및 제스처 인식)

  • 한영환;홍승홍
    • Journal of Biomedical Engineering Research / v.20 no.4 / pp.419-425 / 1999
  • In this paper, we present a real-time facial expression and gesture recognition algorithm for gray-level image sequences. A combination of template matching and knowledge-based geometrical rules for the face is adopted to locate the face area in the input image, and an optical flow method is applied to that area to recognize facial expressions. We also propose a hand-area detection algorithm that separates the hand from the background image by analyzing image entropy; with this modified hand-area detection algorithm, hand gestures can be recognized as well. Experiments showed that the suggested algorithm recognizes facial expressions and hand gestures well by detecting the dominant motion area in the images, without being constrained by the background.
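The optical flow step can be sketched with OpenCV's dense Farneback flow as follows; the face bounding box is assumed to come from the template matching stage, and all numeric parameters are illustrative defaults, not values from the paper.

```python
# Sketch of measuring dominant motion in the located face region.
import cv2
import numpy as np

def dominant_motion(prev_gray, curr_gray, face_box):
    """Mean optical flow vector inside the face region of two gray frames."""
    x, y, w, h = face_box
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray[y:y+h, x:x+w], curr_gray[y:y+h, x:x+w],
        None, 0.5, 3, 15, 3, 5, 1.2, 0)
    return flow.reshape(-1, 2).mean(axis=0)  # (mean dx, mean dy)

# Usage with two consecutive frames (placeholder arrays here):
prev = np.zeros((240, 320), np.uint8)
curr = np.zeros((240, 320), np.uint8)
print(dominant_motion(prev, curr, (100, 60, 80, 80)))
```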


Person-Independent Facial Expression Recognition with Histograms of Prominent Edge Directions

  • Makhmudkhujaev, Farkhod;Iqbal, Md Tauhid Bin;Arefin, Md Rifat;Ryu, Byungyong;Chae, Oksam
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.12 / pp.6000-6017 / 2018
  • This paper presents a new descriptor, named Histograms of Prominent Edge Directions (HPED), for the recognition of facial expressions in a person-independent environment. In this paper, we raise the issue of sampling error in generating the code-histogram from spatial regions of the face image, as observed in the existing descriptors. HPED describes facial appearance changes based on the statistical distribution of the top two prominent edge directions (i.e., primary and secondary direction) captured over small spatial regions of the face. Compared to existing descriptors, HPED uses a smaller number of code-bins to describe the spatial regions, which helps avoid sampling error despite having fewer samples while preserving the valuable spatial information. In contrast to the existing Histogram of Oriented Gradients (HOG) that uses the histogram of the primary edge direction (i.e., gradient orientation) only, we additionally consider the histogram of the secondary edge direction, which provides more meaningful shape information related to the local texture. Experiments on popular facial expression datasets demonstrate the superior performance of the proposed HPED against existing descriptors in a person-independent environment.
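The abstract does not give HPED's exact construction, so the sketch below reflects one simplified reading: per pixel, take the strongest and second-strongest responses from a small bank of directional kernels, and accumulate both into a compact region histogram. The kernel bank and bin count are our assumptions.

```python
# Simplified HPED-style region descriptor (our reading, not the authors' code).
import numpy as np
from scipy.ndimage import convolve

# Four directional derivative kernels: 0, 45, 90, 135 degrees.
KERNELS = [
    np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),
    np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], float),
    np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], float),
    np.array([[2, 1, 0], [1, 0, -1], [0, -1, -2]], float),
]

def hped_region(gray_region):
    """Concatenated histograms of primary and secondary edge directions."""
    responses = np.stack([np.abs(convolve(gray_region.astype(float), k))
                          for k in KERNELS])      # shape (4, H, W)
    order = np.argsort(-responses, axis=0)        # strongest kernel first
    n = len(KERNELS)
    hist = np.zeros(2 * n)
    np.add.at(hist, order[0].ravel(), 1)          # primary-direction bins
    np.add.at(hist, n + order[1].ravel(), 1)      # secondary-direction bins
    return hist / hist.sum()

region = np.random.default_rng(2).random((16, 16))
print(hped_region(region))  # one such histogram per spatial face region
```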

A New 3D Active Camera System for Robust Face Recognition by Correcting Pose Variation

  • Kim, Young-Ouk;Jang, Sung-Ho;Park, Chang-Woo;Sung, Ha-Gyeong;Kwon, Oh-Yun;Paik, Joon-Ki
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2004.08a / pp.1485-1490 / 2004
  • Intelligent robot systems have recently seen remarkable development. Notable capabilities of an intelligent robot are that it can track a user and recognize faces, both of which are vital for many surveillance-based systems. Compared with other biometrics, face recognition has the advantage that the coerciveness and physical contact usually involved in acquiring biometric characteristics are absent. However, its accuracy is lower than that of other biometric methods, owing to the loss of a dimension at the image acquisition step and to the various changes associated with face pose and background. Many factors degrade face recognition performance, such as the distance from camera to face, lighting changes, pose changes, and changes of facial expression. In this paper, we implement a new 3D active camera system to prevent the pose variations that degrade face recognition performance, and we propose a face recognition algorithm for intelligent surveillance systems and mobile robot systems.


Accurate Visual Working Memory under a Positive Emotional Expression in Face (얼굴표정의 긍정적 정서에 의한 시각작업기억 향상 효과)

  • Han, Ji-Eun;Hyun, Joo-Seok
    • Science of Emotion and Sensibility / v.14 no.4 / pp.605-616 / 2011
  • The present study examined memory accuracy for faces with positive, negative, and neutral emotional expressions to test whether emotional content affects visual working memory (VWM) performance. Participants remembered a set of face pictures whose expressions were randomly drawn from pleasant, unpleasant, and neutral emotional categories. Their task was to report the presence or absence of an emotion change by comparing the remembered set against a set of test faces displayed after a short delay. Change detection accuracies for the pleasant, unpleasant, and neutral conditions were compared under two memory exposure durations, 500 ms and 1000 ms. At 500 ms, accuracy in the pleasant condition was higher than in both the unpleasant and neutral conditions; the difference disappeared when the duration was extended to 1000 ms. The results indicate that a positive facial expression can improve VWM accuracy relative to negative or neutral expressions, especially when there is not enough time to form durable VWM representations.


Analysis of Understanding Using Deep Learning Facial Expression Recognition for Real Time Online Lectures (딥러닝 표정 인식을 활용한 실시간 온라인 강의 이해도 분석)

  • Lee, Jaayeon;Jeong, Sohyun;Shin, You Won;Lee, Eunhye;Ha, Yubin;Choi, Jang-Hwan
    • Journal of Korea Multimedia Society / v.23 no.12 / pp.1464-1475 / 2020
  • Due to the spread of COVID-19, online lectures have become more prevalent, yet many students and professors report a lack of communication. This study is therefore designed to improve interactive communication between professors and students in real-time online lectures. To do so, we explore deep learning approaches for automatically recognizing students' facial expressions and classifying their understanding into three classes (Understand / Neutral / Not Understand). We use the 'BlazeFace' model for face detection and a 'ResNet-GRU' model for facial expression recognition (FER), and name the entire process the 'Degree of Understanding (DoU)' algorithm. The DoU algorithm can analyze many students collectively and present the results as visualized statistics. To our knowledge, this is the first study to offer statistics of lecture understanding using FER. The algorithm achieved a speed of 0.098 sec/frame with an accuracy of 94.3% in a CPU environment, demonstrating its potential for real-time online lectures. The DoU algorithm can be extended to other fields where facial expressions play an important role in communication, such as interaction with hearing-impaired people.
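A hedged PyTorch sketch of a ResNet-GRU pipeline of the kind described is shown below: a CNN encodes each face crop, a GRU aggregates the frame sequence, and a linear head outputs the three understanding classes. Layer sizes and input shapes are illustrative assumptions, not the authors' configuration.

```python
# Illustrative ResNet-GRU classifier for 3-class understanding estimation.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class DoUClassifier(nn.Module):
    def __init__(self, hidden=128, n_classes=3):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()     # yields a 512-D feature per frame
        self.backbone = backbone
        self.gru = nn.GRU(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, frames):          # frames: (batch, time, 3, 224, 224)
        b, t = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1)).view(b, t, -1)
        _, h = self.gru(feats)          # h: (1, batch, hidden)
        return self.head(h[-1])         # logits: Understand/Neutral/Not

model = DoUClassifier()
logits = model(torch.randn(2, 8, 3, 224, 224))  # 2 students, 8 frames each
print(logits.shape)                              # torch.Size([2, 3])
```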

A Study on Enhancing the Performance of Detecting Lip Feature Points for Facial Expression Recognition Based on AAM (AAM 기반 얼굴 표정 인식을 위한 입술 특징점 검출 성능 향상 연구)

  • Han, Eun-Jung;Kang, Byung-Jun;Park, Kang-Ryoung
    • The KIPS Transactions: Part B / v.16B no.4 / pp.299-308 / 2009
  • The Active Appearance Model (AAM) is an algorithm that extracts facial feature points using statistical models of shape and texture based on Principal Component Analysis (PCA), and it is widely used for face recognition, face modeling, and expression recognition. However, the detection performance of the AAM is sensitive to its initial values, and detection error increases when an input image differs substantially from the training data. In particular, the algorithm is accurate for closed lips, but detection error increases for lips that are opened or deformed by the user's facial expression. To solve these problems, we propose an improved AAM that uses lip feature points extracted by a new lip detection algorithm. We select a search region based on the face feature points detected by the AAM, and extract the lip corner points within it using Canny edge detection and histogram projection. The lip region is then accurately detected by combining the lip's color and edge information in a search region adjusted to the positions of the detected lip corners. This improves both the accuracy and the processing speed of lip detection. Experimental results showed that the RMS (root mean square) error of the proposed method was reduced by as much as 4.21 pixels compared with using the AAM alone.
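The lip-corner step lends itself to a short sketch: Canny edges inside an assumed search region, a column projection to find the outermost edge columns, and the strongest edge row on each side as the corner. This is a simplified reading of the abstract, not the authors' exact method.

```python
# Sketch of lip-corner detection via Canny edges and histogram projection.
import cv2
import numpy as np

def lip_corners(gray, search_box):
    """Estimate left/right lip corners inside an (x, y, w, h) search region."""
    x, y, w, h = search_box
    edges = cv2.Canny(gray[y:y+h, x:x+w], 50, 150)  # thresholds assumed
    col_proj = edges.sum(axis=0)          # vertical projection per column
    cols = np.nonzero(col_proj)[0]
    if cols.size == 0:
        return None                       # no edges found in the region
    left, right = cols[0], cols[-1]       # outermost edge columns
    row_l = int(np.argmax(edges[:, left]))   # strongest edge row, left side
    row_r = int(np.argmax(edges[:, right]))  # strongest edge row, right side
    return (x + left, y + row_l), (x + right, y + row_r)

face = np.random.default_rng(3).integers(0, 256, (240, 320), dtype=np.uint8)
print(lip_corners(face, (110, 150, 100, 50)))
```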