• Title/Summary/Keyword: 얼굴표정 (facial expression)

Real-time mask facial expression recognition using Tiny-YOLOv3 and ResNet50 (Tiny-YOLOv3와 ResNet50을 이용한 실시간 마스크 표정인식)

  • Park, Gyuri;Park, Nayeon;Kim, Seungwoo;Kim, Seunghye;Kim, Jinsan;Ko, Byungchul
    • Proceedings of the Korean Society of Broadcast Engineers Conference / fall / pp.232-234 / 2021
  • Recently, facial expression recognition has been actively studied for human-computer interfaces, virtual reality, augmented reality, and intelligent vehicles. Most facial expression recognition research targets bare faces, but with many people wearing masks due to COVID-19, the need for expression recognition with a mask on has grown. Aiming at a system that can classify expressions in real time even when a mask is worn, this paper surveys the algorithms needed to build one and adopts Tiny-YOLOv3 and ResNet50 among them. We run the system on image data collected from face and expression datasets and evaluate its suitability and performance.
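
A minimal sketch of the two-stage pipeline the abstract describes: a detector crops the (masked) face, and a ResNet50 head classifies the expression. The Tiny-YOLOv3 detector is not reproduced here, and the seven-class head and ImageNet weights are assumptions, not the authors' configuration.

```python
# Classification stage of the pipeline: a face crop from the detector
# is normalized and passed through ResNet50 with a replaced final layer.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_EXPRESSIONS = 7  # assumption: seven basic expression classes

# ResNet50 backbone; the final fully connected layer is swapped for
# an expression classifier head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_EXPRESSIONS)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify_face(face_image):
    """face_image: a PIL image cropped by the face detector."""
    x = preprocess(face_image).unsqueeze(0)  # add batch dimension
    with torch.no_grad():
        logits = model(x)
    return logits.argmax(dim=1).item()       # predicted expression index
```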

Detection of Face-element for Facial Analysis (표정분석을 위한 얼굴 구성 요소 검출)

  • 이철희;문성룡
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.2 / pp.131-136 / 2004
  • With the development of media, a wide range of information is recorded in it, and facial expression is among the most interesting, because expression reflects a person's inner state. Inner intention can be conveyed by gesture, but expression carries more information. Expressions can also be produced deliberately, revealing a person's inner plan, and each person's expressions have distinctive characteristics that make them separable. In this paper, we aim to detect facial components in order to analyze expressions in USB camera video, since the feature points that change with a person's expression lie on the facial components. For component detection, we capture one frame of the video, locate the face, separate the face area, and detect the feature points of each facial component.
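
As a rough modern stand-in for the capture-and-detect sequence described above (not the authors' 2004 method), OpenCV's Haar cascades can grab one frame, locate the face, separate the face area, and search for components such as the eyes inside it:

```python
# Capture a single frame from a USB camera, find the face, then
# look for eye components within the face region.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)          # USB camera
ok, frame = cap.read()             # capture one frame
cap.release()

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
    face_roi = gray[y:y + h, x:x + w]          # separate the face area
    eyes = eye_cascade.detectMultiScale(face_roi)
    # feature points of each component would be refined within these boxes
```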

Study of Facial Expression Recognition using Variable-sized Block (가변 크기 블록(Variable-sized Block)을 이용한 얼굴 표정 인식에 관한 연구)

  • Cho, Youngtak;Ryu, Byungyong;Chae, Oksam
    • Convergence Security Journal / v.19 no.1 / pp.67-78 / 2019
  • Most existing facial expression recognition methods use a uniform grid that divides the entire facial image into equal blocks when describing facial features. A problem with this approach is that blocks may include non-face background, which interferes with discriminating expressions, and the facial features contained in each block vary with the position, size, and orientation of the face in the input image. In this paper, we propose a variable-sized block method that determines the size and position of the blocks that best represent meaningful facial expression changes. As part of this effort, we propose a way to determine the optimal number, position, and size of the blocks based on facial feature points. To evaluate the proposed method, we generate facial feature vectors using LDTP and build an SVM-based facial expression recognition system. Experimental results show that the proposed method is superior to the conventional uniform grid-based method; in particular, it adapts more effectively to changes in the input environment, performing relatively better than existing methods on images with large shape and orientation changes.
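
A minimal sketch of the classification stage follows: per-block descriptors are concatenated into one feature vector for an SVM. The LDTP descriptor and the block-optimization step are not reproduced; `block_descriptor` below is a hypothetical placeholder using a plain intensity histogram.

```python
# Build a feature vector from variable-sized blocks and classify with SVM.
import numpy as np
from sklearn.svm import SVC

def block_descriptor(image, block):
    """Placeholder for an LDTP histogram over one variable-sized block."""
    x, y, w, h = block
    patch = image[y:y + h, x:x + w]
    hist, _ = np.histogram(patch, bins=16, range=(0, 256))
    return hist / max(hist.sum(), 1)        # normalized histogram

def expression_vector(image, blocks):
    # blocks: list of (x, y, w, h) chosen around facial feature points
    return np.concatenate([block_descriptor(image, b) for b in blocks])

# With X = feature vectors of training images and y = expression labels:
# clf = SVC(kernel="rbf").fit(X, y); clf.predict(expression_vector(...))
```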

Accurate Visual Working Memory under a Positive Emotional Expression in Face (얼굴표정의 긍정적 정서에 의한 시각작업기억 향상 효과)

  • Han, Ji-Eun;Hyun, Joo-Seok
    • Science of Emotion and Sensibility / v.14 no.4 / pp.605-616 / 2011
  • The present study examined memory accuracy for faces with positive, negative, and neutral emotional expressions to test whether emotional content affects visual working memory (VWM) performance. Participants remembered a set of face pictures whose expressions were randomly drawn from pleasant, unpleasant, and neutral categories, and reported the presence or absence of an emotion change by comparing the remembered set against a test set displayed after a short delay. Change-detection accuracy for the pleasant, unpleasant, and neutral conditions was compared under two memory exposure durations, 500 ms vs. 1000 ms. At 500 ms, accuracy in the pleasant condition was higher than in both the unpleasant and neutral conditions; the difference disappeared when the duration was extended to 1000 ms. The results indicate that a positive facial expression can improve VWM accuracy relative to negative or neutral expressions, especially when there is not enough time to form durable VWM representations.

Interactive Facial Expression Animation of Motion Data using CCA (CCA 투영기법을 사용한 모션 데이터의 대화식 얼굴 표정 애니메이션)

  • Kim Sung-Ho
    • Journal of Internet Computing and Services / v.6 no.1 / pp.85-93 / 2005
  • This paper describes how to distribute a vast quantity of high-dimensional facial expression data over a suitable space and produce facial expression animations as the animator selects expressions while navigating this space in real time. We constructed facial expression spaces from about 2,400 facial expression frames. These spaces are built by computing the shortest (manifold) distance between any two expressions, approximated as follows: each expression state vector is defined from the distance matrix recording the distances between facial markers, and two expressions are treated as adjacent when the linear distance between them is below a chosen threshold, in which case that linear distance is taken as their manifold distance. Once the distances between adjacent expressions are fixed, the Floyd algorithm chains these adjacent distances to yield the shortest distance between any two expressions. We use CCA (Curvilinear Component Analysis) to project the multi-dimensional expression space onto two dimensions. While navigating this two-dimensional space, animators produce facial animation in real time through the user interface.
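
A minimal sketch of the manifold-distance step, assuming a made-up adjacency threshold and stand-in state vectors (the CCA projection to 2D is not shown):

```python
# Expressions closer than a threshold are adjacent; Floyd's algorithm
# chains adjacent edges into shortest (geodesic/manifold) distances.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import floyd_warshall

# states: (n_expressions, d) expression state vectors, each built from
# the inter-marker distance matrix of one frame. Smaller stand-in than
# the paper's ~2,400 frames, to keep the example quick.
states = np.random.rand(300, 30)
linear = squareform(pdist(states))         # pairwise linear distances

THRESHOLD = 0.5                            # assumed adjacency threshold
graph = np.where(linear < THRESHOLD, linear, np.inf)  # inf = non-edge
np.fill_diagonal(graph, 0.0)

# Shortest path through chains of adjacent expressions = manifold distance.
manifold = floyd_warshall(graph)
```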

Recognizing Facial Expression Using 1-order Moment and Principal Component Analysis (1차 모멘트와 주요성분분석을 이용한 얼굴표정 인식)

  • Cho Yong-Hyun;Hong Seung-Jun
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2006.05a / pp.405-408 / 2006
  • This paper proposes an efficient facial expression recognition method using the first-order moment of the image and principal component analysis. The first-order moment serves as a preprocessing step that shifts the image to its centroid, improving recognition performance by excluding background irrelevant to recognition and reducing computation time. Principal component analysis extracts eigenimages as facial expression features, improving recognition performance by removing redundant signals based on second-order statistics. Experiments on 48 facial expressions (4 people × 6 images × 2 groups) of 320×243 pixels with a Euclidean distance classifier confirm that the proposed method outperforms the conventional method without preprocessing.
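
A minimal sketch of the two stages described, assuming grayscale images (the centroid shift is implemented with a circular roll for brevity):

```python
# Stage 1: shift the image so its intensity centroid (first-order moment)
# sits at the image center. Stage 2: extract eigenimages with PCA.
import numpy as np

def center_by_moment(img):
    """Shift image so its first-order moment (centroid) is centered."""
    h, w = img.shape
    total = img.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (ys * img).sum() / total, (xs * img).sum() / total
    return np.roll(np.roll(img, int(h / 2 - cy), axis=0),
                   int(w / 2 - cx), axis=1)

def pca_features(X, k):
    """X: (n_images, n_pixels) centered, flattened face images."""
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    eigenimages = Vt[:k]                   # top-k principal components
    return (X - mean) @ eigenimages.T, eigenimages, mean

# A test image is projected the same way and assigned the label of the
# nearest training projection under Euclidean distance.
```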

Facial Expression Algorithm For Risk Situation Recognition (얼굴 표정인식을 이용한 위험상황 인지)

  • Kwak, Nae-jong;Song, Teuk-Seob
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.10a / pp.197-200 / 2014
  • This paper proposes an algorithm for recognizing risk situations from facial expressions. The proposed method detects surprise and fear, among the various human emotional expressions, as cues of a risk situation. It first extracts the facial region from the input image and locates the eye and lip regions within the extracted face. It then applies uniform LBP to each region, classifies the facial expression, and decides whether a risk situation is present. Evaluated on Cohn-Kanade database images, the method classifies facial expressions well and reliably discriminates risk situations.
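
A minimal sketch of the feature step using scikit-image's uniform LBP; the eye and lip boxes are assumed to come from the earlier detection step, and the bin setup is an assumption rather than the authors' exact configuration.

```python
# Uniform LBP histograms over the detected eye and lip regions,
# concatenated into one descriptor for expression discrimination.
import numpy as np
from skimage.feature import local_binary_pattern

P, R = 8, 1  # 8 neighbors, radius 1 -> 59 uniform pattern labels

def region_lbp_hist(gray, box):
    x, y, w, h = box
    lbp = local_binary_pattern(gray[y:y + h, x:x + w], P, R,
                               method="nri_uniform")
    hist, _ = np.histogram(lbp, bins=59, range=(0, 59))
    return hist / max(hist.sum(), 1)       # normalized histogram

def risk_features(gray, eye_box, lip_box):
    # surprise/fear cues concentrate around the eyes and mouth
    return np.concatenate([region_lbp_hist(gray, eye_box),
                           region_lbp_hist(gray, lip_box)])
```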

A Case Study on Face and Expression Recognition using AAMs and Multilinear Analysis (다선형 모델을 이용한 얼굴 및 표정 인식)

  • Park, Yong-Chan;Lee, Seong-Oh;Park, Gwi-Tae
    • Proceedings of the KIEE Conference / 2008.07a / pp.1901-1902 / 2008
  • Face recognition relies on characteristic facial patterns, but these patterns are sensitive to changes in expression, pose, and illumination, which makes recognition difficult. To develop a recognition model robust to expression change, this paper extracts varied data from the Cohn-Kanade expression database using AAMs and analyzes the extracted data with multilinear analysis. In recognition experiments applying this approach, it showed recognition performance more robust to expression than PCA.
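
One common reading of "multilinear analysis" here is a TensorFaces-style HOSVD; the sketch below factors an assumed people × expressions × AAM-parameters tensor by mode-wise SVD. The dimensions and data are illustrative assumptions, not from the paper.

```python
# HOSVD sketch: unfold the data tensor along each mode and take the
# left singular vectors as that mode's factor matrix.
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# stand-in tensor: 10 people x 6 expressions x 80 AAM parameters
T = np.random.rand(10, 6, 80)

# one orthogonal factor matrix per mode
factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0]
           for m in range(T.ndim)]
# factors[0] spans person variation, factors[1] expression variation;
# recognition compares projections onto these subspaces.
```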

Realtime Facial Expression Control of 3D Avatar by Isomap of Motion Data (모션 데이터에 Isomap을 사용한 3차원 아바타의 실시간 표정 제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / v.7 no.3 / pp.9-16 / 2007
  • This paper describes a methodology that distributes a large quantity of high-dimensional facial motion data on a two-dimensional plane using the Isomap algorithm, together with user interface techniques for controlling facial expressions by selecting them while the user navigates this space in real time. The Isomap algorithm proceeds in three steps. First, define the adjacent expressions of each expression datum; the adjacency threshold is determined using the Pearson correlation coefficient. Second, compute the manifold distance between expressions and construct the expression space: the space is created by calculating the shortest (manifold) distance between any two expressions, using the Floyd algorithm to chain the adjacent distances. Third, realize the multi-dimensional expression space with Multidimensional Scaling and project it onto a two-dimensional plane. Users control the facial expressions of a 3D avatar through the user interface while navigating the two-dimensional space in real time.
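
A minimal sketch of the third step: classical Multidimensional Scaling applied to a manifold-distance matrix D, which is assumed to have been produced by the Pearson-adjacency and Floyd steps above.

```python
# Classical MDS: double-center the squared distance matrix, then take
# the top eigenvectors scaled by the square roots of their eigenvalues.
import numpy as np

def classical_mds(D, dims=2):
    """Embed an (n, n) distance matrix into `dims` dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)           # ascending eigenvalues
    order = np.argsort(vals)[::-1][:dims]    # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# coords = classical_mds(manifold_distances)  # (n, 2) plane coordinates
# for navigation in the 2D expression space
```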