• Title/Abstract/Keyword: Face expression


소셜 로봇의 표정 커스터마이징 구현 및 분석 (The Implementation and Analysis of Facial Expression Customization for a Social Robot)

  • 이지연;박하은;김병헌;이희승
    • 로봇학회논문지
    • Vol. 18, No. 2
    • pp.203-215
    • 2023
  • Social robots, which are mainly used by individuals, emphasize the importance of human-robot relationships (HRR) more compared to other types of robots. Emotional expression in robots is one of the key factors that imbue HRR with value; emotions are mainly expressed through the face. However, because of cultural and preference differences, the desired robot facial expressions differ subtly depending on the user. It was expected that a robot facial expression customization tool may mitigate such difficulties and consequently improve HRR. To prove this, we created a robot facial expression customization tool and a prototype robot. We implemented a suitable emotion engine for generating robot facial expressions in a dynamic human-robot interaction setting. We conducted experiments and the users agreed that the availability of a customized version of the robot has a more positive effect on HRR than a predefined version of the robot. Moreover, we suggest recommendations for future improvements of the customization process of robot facial expression.

Japanese Political Interviews: The Integration of Conversation Analysis and Facial Expression Analysis

  • Kinoshita, Ken
    • Asian Journal for Public Opinion Research
    • Vol. 8, No. 3
    • pp.180-196
    • 2020
This paper considers Japanese political interviews, integrating conversation analysis and facial expression analysis. The behavior of political leaders is disclosed by analyzing questions and responses with the turn-taking system of conversation analysis; in addition, audiences who cannot fully interpret verbal expressions alone can grasp the psychology of political leaders through analysis of their facial expressions. Such integrated analyses promote understanding of the types of facial and verbal expressions politicians use and their effect on public opinion. Politicians have unique techniques for convincing people; if people do not recognize these techniques and ways of expression, they will become confused, and politics may fall into populism as a result. To avoid this, a complete understanding of verbal and non-verbal behavior is needed. This paper presents two analyses. The first is a qualitative analysis of Prime Minister Shinzō Abe, which shows that discrepancies occur between his words and his happy facial expressions; the result indicates that Abe shows disgusted facial expressions when faced with the same question from an interviewer. The second is a quantitative multiple regression analysis in which the dependent variables are six facial expressions (happy, sad, angry, surprised, scared, and disgusted) and the independent variable is whether the politician faces a threat to face. Political interviews, which inform audiences directly, are used as a tool by politicians and play an important role in molding public opinion: audiences watch political interviews, and these interviews shape support for a party and contribute to the decision of which party to support in a coming election.
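The second, quantitative analysis lends itself to a short sketch. Below is a hedged illustration of a multiple regression with six expression intensities as dependent variables and a face-threat indicator as the independent variable; all data values, the segment count, and the variable names are invented for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical data: per-interview-segment intensity scores for the six
# facial expressions (dependent variables), regressed on an intercept and
# a binary "face threat" indicator (independent variable). All numbers
# are simulated for illustration only.
rng = np.random.default_rng(0)
n_segments = 40
threat = rng.integers(0, 2, size=n_segments)        # 1 = face-threatening question
X = np.column_stack([np.ones(n_segments), threat])  # design matrix: intercept + threat

expressions = ["happy", "sad", "angry", "surprised", "scared", "disgusted"]
# Simulated expression intensities; "disgusted" is made to rise under threat.
Y = rng.random((n_segments, 6))
Y[:, 5] += 0.5 * threat

# One ordinary-least-squares fit per expression, solved jointly: each
# column of B holds [intercept, threat coefficient] for one expression.
B, *_ = np.linalg.lstsq(X, Y, rcond=None)

for name, (intercept, slope) in zip(expressions, B.T):
    print(f"{name:10s} intercept={intercept:+.3f} threat_coef={slope:+.3f}")
```

With data simulated this way, the "disgusted" column shows a clearly positive threat coefficient, mirroring the qualitative finding.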

2D 얼굴 영상을 이용한 로봇의 감정인식 및 표현시스템 (Emotion Recognition and Expression System of Robot Based on 2D Facial Image)

  • 이동훈;심귀보
    • 제어로봇시스템학회논문지
    • Vol. 13, No. 4
    • pp.371-376
    • 2007
This paper presents an emotion recognition and expression system for an intelligent robot such as a home or service robot. The robot recognizes emotion from a 2D facial image, using the motion and position of multiple facial features. A tracking algorithm follows a moving user from the mobile robot, and a face-region detection algorithm eliminates the background and skin-colored regions, such as hands, that lie outside the face in the user image. After normalization, in which the image is enlarged or reduced according to the distance to the detected face region and rotated according to the angle of the face, the mobile robot obtains a facial image of fixed size. A multi-feature selection algorithm then enables the robot to recognize the user's emotion. A multilayer perceptron, an artificial neural network (ANN), is used as the pattern recognizer, trained with the backpropagation (BP) algorithm. The emotion recognized by the robot is expressed on a graphic LCD: two coordinates change according to the emotion output of the ANN, and the parameters of the facial elements (eyes, eyebrows, mouth) change with those coordinates. Through this system, complex human emotions are expressed as an avatar on the LCD.
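The recognizer described above, a multilayer perceptron trained with backpropagation, can be sketched roughly as follows. The feature vectors, emotion labels, layer sizes, and learning rate are all assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np

# A minimal multilayer perceptron trained with backpropagation on a
# mean-squared-error loss. The "feature vectors" are synthetic stand-ins
# for the facial-feature motions/positions the paper extracts; the
# synthetic labels depend only on the first feature.
rng = np.random.default_rng(1)

n_features, n_hidden, n_emotions = 6, 8, 4   # e.g. happy/sad/angry/surprised
X = rng.random((80, n_features))
y = (X[:, 0] * 3.999).astype(int)            # synthetic emotion labels 0..3
T = np.eye(n_emotions)[y]                    # one-hot targets

W1 = rng.normal(0, 0.5, (n_features, n_hidden))
W2 = rng.normal(0, 0.5, (n_hidden, n_emotions))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for _ in range(2000):
    H = sigmoid(X @ W1)            # forward pass: hidden layer
    O = sigmoid(H @ W2)            # forward pass: output layer
    losses.append(((O - T) ** 2).mean())
    dO = (O - T) * O * (1 - O)     # backpropagated output error
    dH = (dO @ W2.T) * H * (1 - H) # backpropagated hidden error
    W2 -= lr * H.T @ dO / len(X)   # gradient-descent weight updates
    W1 -= lr * X.T @ dH / len(X)

acc = (O.argmax(1) == y).mean()
print(f"MSE: {losses[0]:.3f} -> {losses[-1]:.3f}, training accuracy: {acc:.2f}")
```

The loss decreases over the 2,000 batch-gradient-descent iterations; a real system would of course use measured facial features and held-out evaluation.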

바로셀로나 파빌리온의 구축적 공간 특성에 관한 연구 (A Study on the Spatial Characteristics in the Tectonic of the Barcelona Pavilion)

  • 양재혁
    • 한국실내디자인학회논문집
    • No. 33
    • pp.19-26
    • 2002
This study analyzed the characteristics of spatial expression in the Barcelona Pavilion on the basis of tectonics. Mies emphasized the image of materiality of each material rather than the tectonic process of using it, and through reflection he also expressed dematerialization in the image of each material. The wall was introduced so as to be liberated from structural matters. He intended the design to read as an independent structural system; in practice, however, the wall appears to support the roof, a self-contradiction made evident by the expression of materiality in the material. In the architectural elements (wall, roof, column, floor, and so forth), tectonic expression and abstract aesthetics stand face to face, because the productional process is hidden and transformed into line and surface in the image of materiality. From the exterior, the interior behind the glass wall appears fairly closed, because of the materiality and reflection of the columns and podium. Although the geometrical space of the Pavilion's plan has mutual penetrability and an organic character, the experienced space is inconsistent and fragmentary, because of the splendid images produced by materiality and reflection on the wall and the collision between reality and the images the wall reflects.

연속 영상에서의 얼굴표정 및 제스처 인식 (Recognizing Human Facial Expressions and Gesture from Image Sequence)

  • 한영환;홍승홍
    • 대한의용생체공학회:의공학회지
    • Vol. 20, No. 4
    • pp.419-425
    • 1999
In this paper, we developed a system that recognizes facial expressions and gestures in real time from grayscale image sequences. For face detection, we combined template matching with a knowledge-based method derived from the geometric properties of the face. The combined method restricts the input image to the face region, and optical flow is applied to this region to recognize the facial expression. For gesture recognition, we propose a method that separates the hand region from a complex background by entropy analysis, and an improved version of this method recognizes hand gestures. Experimental results show that, largely independent of the background of the input image, the system detects the regions with large motion and recognizes facial expressions and hand gestures in real time.
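The entropy analysis mentioned above can be illustrated with a minimal sketch: image patches with varied intensities (the textured hand) have high Shannon entropy, while flat background patches have entropy near zero. The synthetic image, block size, and threshold below are invented for illustration, not the paper's values.

```python
import numpy as np

# Separate a textured region from a flat background by local entropy.
def local_entropy(img, block=8, bins=16):
    """Shannon entropy of the intensity histogram of each block x block tile."""
    h, w = img.shape
    out = np.zeros((h // block, w // block))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            tile = img[i*block:(i+1)*block, j*block:(j+1)*block]
            hist, _ = np.histogram(tile, bins=bins, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]                       # drop empty bins (log of 0)
            out[i, j] = -(p * np.log2(p)).sum()
    return out

rng = np.random.default_rng(2)
frame = np.full((64, 64), 90.0)                        # flat grey background
frame[16:48, 16:48] = rng.integers(0, 256, (32, 32))   # textured "hand" patch

ent = local_entropy(frame)
mask = ent > 1.0          # high-entropy tiles -> hand-region candidates
print("hand tiles found:", int(mask.sum()))  # prints "hand tiles found: 16"
```

The flat background tiles score exactly zero entropy, so a simple threshold isolates the 4×4 block of textured tiles.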


Person-Independent Facial Expression Recognition with Histograms of Prominent Edge Directions

  • Makhmudkhujaev, Farkhod;Iqbal, Md Tauhid Bin;Arefin, Md Rifat;Ryu, Byungyong;Chae, Oksam
    • KSII Transactions on Internet and Information Systems (TIIS)
    • Vol. 12, No. 12
    • pp.6000-6017
    • 2018
  • This paper presents a new descriptor, named Histograms of Prominent Edge Directions (HPED), for the recognition of facial expressions in a person-independent environment. In this paper, we raise the issue of sampling error in generating the code-histogram from spatial regions of the face image, as observed in the existing descriptors. HPED describes facial appearance changes based on the statistical distribution of the top two prominent edge directions (i.e., primary and secondary direction) captured over small spatial regions of the face. Compared to existing descriptors, HPED uses a smaller number of code-bins to describe the spatial regions, which helps avoid sampling error despite having fewer samples while preserving the valuable spatial information. In contrast to the existing Histogram of Oriented Gradients (HOG) that uses the histogram of the primary edge direction (i.e., gradient orientation) only, we additionally consider the histogram of the secondary edge direction, which provides more meaningful shape information related to the local texture. Experiments on popular facial expression datasets demonstrate the superior performance of the proposed HPED against existing descriptors in a person-independent environment.
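The core idea of HPED, keeping only the two most prominent quantized edge directions per spatial region, might be sketched as below. The bin count, cell size, and gradient operator are assumptions, not the paper's exact choices.

```python
import numpy as np

# Per-cell primary and secondary edge directions, in the spirit of HPED:
# gradient directions are quantized into a few code-bins, votes are
# weighted by gradient magnitude, and only the top-2 bins are kept.
def hped_cell(cell, n_dirs=8):
    """Return (primary, secondary) quantized edge directions of one cell."""
    gy, gx = np.gradient(cell.astype(float))   # vertical, horizontal derivatives
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi           # unsigned direction in [0, pi)
    bins = (ang / np.pi * n_dirs).astype(int) % n_dirs
    votes = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_dirs)
    order = np.argsort(votes)[::-1]            # bins sorted by vote strength
    return int(order[0]), int(order[1])        # top-2 prominent directions

# Synthetic cell with a strong horizontal edge: intensity jumps between
# rows, so the dominant gradient direction is vertical (angle ~ pi/2).
cell = np.zeros((8, 8))
cell[4:, :] = 255.0
primary, secondary = hped_cell(cell)
print("primary direction bin:", primary)  # prints "primary direction bin: 4"
```

A full descriptor would concatenate such per-cell results over a grid of face regions; this sketch only shows the top-2 direction selection that distinguishes HPED from a plain HOG-style histogram.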

A New 3D Active Camera System for Robust Face Recognition by Correcting Pose Variation

  • Kim, Young-Ouk;Jang, Sung-Ho;Park, Chang-Woo;Sung, Ha-Gyeong;Kwon, Oh-Yun;Paik, Joon-Ki
    • 제어로봇시스템학회:학술대회논문집
    • ICCAS 2004
    • pp.1485-1490
    • 2004
Recently, intelligent robot systems have developed remarkably. A notable feature of an intelligent robot is that it can track the user and perform face recognition, which is vital for many surveillance-based systems. The advantage of face recognition over other biometric recognition is that the coerciveness and physical contact usually required when acquiring biometric characteristics are absent. However, the accuracy of face recognition is lower than that of other biometric modalities, because of the loss of dimension at the image-acquisition step and the various changes associated with face pose and background. Many factors deteriorate face-recognition performance, including the distance from the camera to the face, lighting changes, pose changes, and changes of facial expression. In this paper, we implement a new 3D active camera system that prevents the pose variations that degrade face-recognition performance, and we propose a face recognition algorithm for intelligent surveillance systems and mobile robot systems.


얼굴표정의 긍정적 정서에 의한 시각작업기억 향상 효과 (Accurate Visual Working Memory under a Positive Emotional Expression in Face)

  • 한지은;현주석
    • 감성과학
    • Vol. 14, No. 4
    • pp.605-616
    • 2011
This study investigated how the positive, negative, or neutral emotion carried by face stimuli stored in visual working memory affects memory accuracy. Participants memorized the expressions of faces, each randomly assigned one of three expression types (pleasant, unpleasant, or neutral), and after a brief interval compared them with test faces, reporting whether the facial expressions had changed between the memory items and the test items. When the memory items were exposed for 500 ms, change detection was more accurate for items with positive expressions than for those with negative or neutral expressions; when the exposure was extended to 1000 ms, this difference was no longer observed. These results indicate that positive emotion can improve the accuracy of visual working memory, and in particular that the memory facilitation by positive emotion appears when the time available for forming memory representations is relatively short. By using a comparatively simple change-detection task to examine the relation between working memory and emotion, this study offers the important finding that positive emotion improves the efficiency of forming visual working memory representations.
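The accuracy measure in a change-detection task of this kind can be illustrated with a toy computation. The hit probabilities and trial counts below are invented; only the condition structure (pleasant/unpleasant/neutral expressions) follows the study's design.

```python
import numpy as np

# Simulate per-condition change-detection responses and compute the
# proportion of correct reports, as one would from real trial records.
rng = np.random.default_rng(3)

conditions = ["pleasant", "unpleasant", "neutral"]
# Invented probabilities of a correct response at 500 ms exposure;
# pleasant faces are remembered better, mirroring the reported result.
p_correct = {"pleasant": 0.85, "unpleasant": 0.70, "neutral": 0.70}

n_trials = 200
accuracy = {}
for cond in conditions:
    correct = rng.random(n_trials) < p_correct[cond]   # Bernoulli trials
    accuracy[cond] = correct.mean()

for cond in conditions:
    print(f"{cond:10s} accuracy = {accuracy[cond]:.3f}")
```

With these simulated probabilities, the pleasant condition comes out more accurate than the other two, as in the 500 ms condition of the study.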


딥러닝 표정 인식을 활용한 실시간 온라인 강의 이해도 분석 (Analysis of Understanding Using Deep Learning Facial Expression Recognition for Real Time Online Lectures)

  • 이자연;정소현;신유원;이은혜;하유빈;최장환
    • 한국멀티미디어학회논문지
    • Vol. 23, No. 12
    • pp.1464-1475
    • 2020
Due to the spread of COVID-19, online lectures have become more prevalent. However, many students and professors report a lack of communication in them. This study is therefore designed to improve interactive communication between professors and students in real-time online lectures. To do so, we explore deep learning approaches for automatic recognition of students' facial expressions and classification of their understanding into three classes (Understand / Neutral / Not Understand). We use the 'BlazeFace' model for face detection and a 'ResNet-GRU' model for facial expression recognition (FER). We name this entire process the 'Degree of Understanding (DoU)' algorithm. The DoU algorithm can analyze a multitude of students collectively and present the result as visualized statistics. To our knowledge, this is the first study to offer statistics of understanding in lectures using FER. The algorithm achieved a rapid speed of 0.098 sec/frame with a high accuracy of 94.3% in a CPU environment, demonstrating its potential to be applied to real-time online lectures. The DoU algorithm can be extended to various fields where facial expressions play an important role in communication, such as interaction with hearing-impaired people.
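The statistics-gathering step of a DoU-style pipeline might look roughly like the sketch below, which aggregates per-student expression labels (as a FER model could emit per frame) into the three understanding classes. The label set and the label-to-class mapping are assumptions, not the paper's.

```python
from collections import Counter

# Hypothetical mapping from FER labels to the three understanding classes.
LABEL_TO_CLASS = {
    "happy": "Understand", "neutral": "Neutral",
    "confused": "Not Understand", "frowning": "Not Understand",
}

def degree_of_understanding(per_student_labels):
    """Map each student's dominant expression to a DoU class and tally."""
    tally = Counter()
    for labels in per_student_labels:
        dominant = Counter(labels).most_common(1)[0][0]  # most frequent label
        tally[LABEL_TO_CLASS[dominant]] += 1
    return dict(tally)

# Hypothetical frame-wise labels for three students in one time window.
students = [
    ["happy", "happy", "neutral"],
    ["confused", "confused", "happy"],
    ["neutral", "neutral", "neutral"],
]
stats = degree_of_understanding(students)
print(stats)  # prints {'Understand': 1, 'Not Understand': 1, 'Neutral': 1}
```

In a full system, the list of labels per student would come from a FER model run on video frames, and the tally would be refreshed every few seconds to give the lecturer a live picture of class-wide understanding.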

AAM 기반 얼굴 표정 인식을 위한 입술 특징점 검출 성능 향상 연구 (A Study on Enhancing the Performance of Detecting Lip Feature Points for Facial Expression Recognition Based on AAM)

  • 한은정;강병준;박강령
    • 정보처리학회논문지B
    • Vol. 16B, No. 4
    • pp.299-308
    • 2009
The Active Appearance Model (AAM) is an algorithm that detects facial feature points through statistical models of an object's shape and texture based on Principal Component Analysis (PCA), and it is widely used in applications such as face recognition, face modeling, and expression recognition. However, the AAM is sensitive to its initial values, and its detection error increases when the input image differs greatly from the training images. In particular, detection accuracy is comparatively high for input face images with a closed mouth, but lip-detection errors increase sharply when the mouth is open or its shape is deformed by the user's expression. To solve this problem, this paper proposes a method that first detects the exact lip region through lip feature points and then uses this information when running the AAM, thereby improving the accuracy of facial feature-point detection. Based on the facial feature points detected by the AAM, an initial lip search region is set; within it, the two corner points of the lips are extracted using Canny edge detection and histogram projection; and within a search region reset around the two corner points, lip color information and edge information are combined to improve both the accuracy and the processing speed of lip detection. Experimental results show that, compared with using the AAM algorithm alone, the proposed method reduces the RMS (Root Mean Square) error of lip feature-point detection by 4.21 pixels.
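The histogram-projection step for locating the two lip corner points can be sketched as follows: edge responses within the lip search region are summed column-wise, and the leftmost and rightmost columns with strong responses are taken as the corners. The synthetic edge map below stands in for a real Canny result, and the vote threshold is an assumption.

```python
import numpy as np

def mouth_corners(edge_map, min_votes=2):
    """Return (left, right) column indices from a vertical edge projection."""
    projection = edge_map.sum(axis=0)            # column-wise edge counts
    cols = np.flatnonzero(projection >= min_votes)
    return int(cols[0]), int(cols[-1])           # leftmost, rightmost strong column

# Synthetic binary edge map of a lip search region: a lip contour
# spanning columns 5..24, with thicker edges at the two corners.
edges = np.zeros((12, 30), dtype=int)
edges[4, 5:25] = 1      # upper lip edge
edges[8, 5:25] = 1      # lower lip edge
edges[5:8, 5] = 1       # left corner
edges[5:8, 24] = 1      # right corner

left, right = mouth_corners(edges)
print("mouth corners at columns:", left, right)  # prints "mouth corners at columns: 5 24"
```

The real method then resets the search region around these two corners and fuses color and edge cues; this sketch covers only the projection step.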