• Title/Summary/Keyword: 얼굴감정인식 (facial emotion recognition)

Real-time Recognition System of Facial Expressions Using Principal Component of Gabor-wavelet Features (표정별 가버 웨이블릿 주성분특징을 이용한 실시간 표정 인식 시스템)

  • Yoon, Hyun-Sup; Han, Young-Joon; Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.6 / pp.821-827 / 2009
  • Human emotion is reflected in facial expressions, so recognizing facial expressions is a good way to understand people's emotions. Conventional facial expression recognition systems select interest points and then extract features without analyzing their physical meaning; finding the interest points takes a long time, and their exact positions are hard to estimate. Moreover, implementing facial expression recognition on a real-time embedded system requires simplifying the algorithm and reducing its resource usage. In this paper, we propose a real-time facial expression recognition algorithm that projects grid points onto an expression space based on Gabor wavelet features. A facial expression is described simply by feature vectors in the expression space and is classified by a neural network whose resource requirements are dramatically reduced. The proposed system handles five expressions: anger, happiness, neutral, sadness, and surprise. In experiments, the average execution time is 10.251 ms and the recognition rate is 87-93%.
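
Not from the paper itself, but a minimal sketch of the kind of pipeline this abstract describes: a bank of Gabor filters sampled at grid points over the face, with PCA projecting the responses into a compact expression-space vector that a small classifier can consume. All parameters below (kernel sizes, grid, component count) are illustrative assumptions, not the paper's settings.

```python
# Sketch: Gabor-wavelet features at grid points + PCA projection.
# Parameters (grid size, filter bank, n_components) are illustrative
# assumptions, not the paper's actual settings.
import cv2
import numpy as np
from sklearn.decomposition import PCA

def gabor_bank(n_orientations=4, n_scales=3):
    """Build a small bank of Gabor kernels."""
    kernels = []
    for s in range(n_scales):
        for o in range(n_orientations):
            theta = o * np.pi / n_orientations
            lambd = 8.0 * (s + 1)          # wavelength grows with scale
            k = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                   lambd=lambd, gamma=0.5, psi=0)
            kernels.append(k)
    return kernels

def grid_features(gray, kernels, grid=(8, 8)):
    """Sample each filter response at a coarse grid over the face."""
    h, w = gray.shape
    ys = np.linspace(0, h - 1, grid[0]).astype(int)
    xs = np.linspace(0, w - 1, grid[1]).astype(int)
    feats = []
    for k in kernels:
        resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, k)
        feats.append(resp[np.ix_(ys, xs)].ravel())
    return np.concatenate(feats)  # one long feature vector per face

# faces: (n_samples, H, W) array of cropped grayscale face images
# X = np.stack([grid_features(f, gabor_bank()) for f in faces])
# pca = PCA(n_components=30).fit(X)   # expression-space projection
# X_low = pca.transform(X)            # compact vectors for a classifier
```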

Living in the 21st Century - Machines Growing More Like People (21세기에 산다 - 사람을 닮아가는 기계들)

  • Korean Federation of Science and Technology Societies
    • The Science & Technology / v.32 no.5 s.360 / pp.28-29 / 1999
  • In the 21st century, computers will appear that not only understand and produce human speech but are also equipped with all five senses. Computers that gauge human emotions and recognize faces will be used for entering one's own front door and even in automated teller machines. In the 21st century, such cutting-edge technology will spread to every machine, from personal computers to passenger cars and kitchen appliances, so that machines take over all of people's routine tasks and humans are freed from everyday chores.

Emotion Recognition Using Template Vector And Fuzzy Model (형판벡터와 퍼지모델을 이용한 감성인식)

  • Yoo, Tae-Il; Yu, Yeong-Seon; Joo, Young-Hun
    • Proceedings of the KIEE Conference / 2005.07d / pp.2894-2896 / 2005
  • This paper proposes a method for recognizing human emotion using template vectors based on a fuzzy model. First, facial features (eyebrows, eyes, mouth) are extracted from a face image using templates. Template vectors are then derived from the extracted templates and applied to the fuzzy model. Next, using the state information that changes with each emotion, the method recognizes four human emotions: surprise, anger, happiness, and sadness. Finally, experiments confirm the applicability of the proposed method.
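
As a loose illustration of the fuzzy-model idea (the paper's actual templates and rules are not reproduced here), emotions can be scored with membership functions over one component of the template vector; all membership ranges below are invented for the example.

```python
# Sketch: fuzzy-model emotion scoring over a template-vector feature.
# Membership ranges are invented for illustration; the paper's actual
# templates and rules are not reproduced here.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical normalized feature: mouth openness in [0, 1], one
# component of the template vector (eyebrows, eyes, mouth).
MEMBERSHIPS = {
    "surprise":  lambda x: tri(x, 0.6, 0.9, 1.0),
    "happiness": lambda x: tri(x, 0.3, 0.5, 0.8),
    "anger":     lambda x: tri(x, 0.1, 0.25, 0.5),
    "sadness":   lambda x: tri(x, 0.0, 0.1, 0.3),
}

def classify(mouth_openness):
    scores = {emo: float(m(mouth_openness)) for emo, m in MEMBERSHIPS.items()}
    return max(scores, key=scores.get), scores

label, scores = classify(0.85)
print(label, scores)   # -> 'surprise' has the highest membership degree
```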

Deep Neural Network Architecture for Video-based Facial Expression Recognition (동영상 기반 감정인식을 위한 DNN 구조)

  • Lee, Min Kyu; Choi, Jun Ho; Song, Byung Cheol
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.06a / pp.35-37 / 2019
  • With the recent rapid development of deep learning, facial expression recognition technology has made considerable progress. However, because existing facial expression recognition techniques were developed mainly on staged videos acquired in constrained environments, they may not operate robustly on videos captured in real, in-the-wild conditions. To address this problem, we propose a deep neural network architecture built from a novel combination of a 3D CNN, a 2D CNN, and an RNN. From a given video, the proposed network extracts, through the two different CNNs, feature vectors that capture not only spatial but also temporal information. An RNN then performs learning in the time domain while fusing the feature vectors extracted by the two networks. Evaluated on AFEW, a representative public in-the-wild dataset, the proposed network, whose components operate in concert, achieves an accuracy of 49.6%, improving on previous methods.
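
A toy PyTorch sketch of the overall shape of such a hybrid: one 3D CNN and one 2D CNN feed an RNN that fuses their features over time. Layer sizes, pooling, and the fusion scheme are illustrative guesses, not the architecture from the paper.

```python
# Sketch: hybrid video-emotion network combining a 3D CNN, a 2D CNN,
# and an RNN. Layer sizes and fusion are illustrative guesses, not
# the paper's architecture.
import torch
import torch.nn as nn

class HybridFER(nn.Module):
    def __init__(self, n_classes=7, feat=64):
        super().__init__()
        # 3D CNN: spatio-temporal features over the whole clip
        self.cnn3d = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, feat))
        # 2D CNN: per-frame spatial features
        self.cnn2d = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat))
        # RNN fuses the concatenated per-step features over time
        self.rnn = nn.GRU(2 * feat, feat, batch_first=True)
        self.head = nn.Linear(feat, n_classes)

    def forward(self, clip):                 # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        f3d = self.cnn3d(clip.transpose(1, 2))        # (B, feat)
        f2d = self.cnn2d(clip.reshape(b * t, *clip.shape[2:]))
        f2d = f2d.reshape(b, t, -1)                   # (B, T, feat)
        f3d = f3d.unsqueeze(1).expand(-1, t, -1)      # repeat over time
        out, _ = self.rnn(torch.cat([f2d, f3d], dim=-1))
        return self.head(out[:, -1])                  # (B, n_classes)

# logits = HybridFER()(torch.randn(2, 8, 3, 64, 64))
```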

Multi-modal Emotion Recognition using Semi-supervised Learning and Multiple Neural Networks in the Wild (준 지도학습과 여러 개의 딥 뉴럴 네트워크를 사용한 멀티 모달 기반 감정 인식 알고리즘)

  • Kim, Dae Ha; Song, Byung Cheol
    • Journal of Broadcast Engineering / v.23 no.3 / pp.351-360 / 2018
  • Human emotion recognition is a research topic receiving continuous attention in the computer vision and artificial intelligence domains. This paper proposes a method for classifying human emotions with multiple neural networks based on multi-modal signals consisting of image, landmark, and audio in a wild environment. The proposed method has the following features. First, the learning performance of the image-based network is greatly improved by employing both multi-task learning and semi-supervised learning using the spatio-temporal characteristics of videos. Second, a model for converting one-dimensional (1D) facial landmark information into two-dimensional (2D) images is newly proposed, and a CNN-LSTM network based on this model is proposed for better emotion recognition. Third, based on the observation that audio signals are often very effective for specific emotions, we propose an audio deep learning mechanism that is robust for those emotions. Finally, so-called emotion-adaptive fusion is applied to enable synergy among the multiple networks. The proposed network improves emotion classification performance by appropriately integrating the supervised and semi-supervised learning networks. In the fifth attempt on the given test set of the EmotiW2017 challenge, the proposed method achieved a classification accuracy of 57.12%.
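
The landmark-to-image idea can be pictured roughly as below: a flat vector of (x, y) facial landmarks is rasterized onto a small 2D grid so that a CNN-LSTM can consume it like an image sequence. The grid size and normalization are assumptions for the sketch, not the paper's exact mapping.

```python
# Sketch: rasterize a 1D facial-landmark vector into a small 2D image
# so a CNN can consume it. Grid size and normalization are assumptions,
# not the paper's exact mapping.
import numpy as np

def landmarks_to_image(landmarks, size=32):
    """landmarks: flat array [x1, y1, x2, y2, ...] in pixel coords."""
    pts = np.asarray(landmarks, dtype=np.float64).reshape(-1, 2)
    # Normalize to [0, 1] within the face's bounding box
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    pts = (pts - mins) / np.maximum(maxs - mins, 1e-6)
    img = np.zeros((size, size), dtype=np.float32)
    ij = np.clip((pts * (size - 1)).round().astype(int), 0, size - 1)
    img[ij[:, 1], ij[:, 0]] = 1.0   # row = y, col = x
    return img  # feed per-frame images to a CNN-LSTM over the video

# Example with a hypothetical 68-point landmark set:
# img = landmarks_to_image(np.random.rand(136) * 200)
```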

Senior Life Logging and Analysis by Using Deep Learning and Captured Multimedia Data (딥 러닝 기반의 API 와 멀티미디어 요소를 활용한 시니어 라이프 데이터 수집 및 상태 분석)

  • Kim, Seon Dae; Park, Eun Soo; Jeong, Jong Beom; Koo, Jaseong; Ryu, Eun-Seok
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.06a / pp.244-247 / 2018
  • This paper describes a framework for collecting life data and analyzing behavior for seniors, and details a partial implementation of it. Based on a senior's life data, the study develops a system that looks after seniors who have no caregiver while also analyzing and judging abnormal states that a caregiver might fail to notice. First, assuming the situation of a senior spending much time in front of a TV, the system checks the senior's abnormal and emotional states, as well as the reaction and reaction speed to TV content, using the broadcast content together with video/audio of the senior captured by a TV-mounted camera. Specifically, building on deep learning based APIs and open packages used in multimedia data analysis, it extracts key frames from the video/audio, analyzes emotion and mood, and performs facial expression recognition, action recognition, and speech recognition for the senior.
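
One plausible reading of the key-frame step, sketched with OpenCV frame differencing; the abstract does not specify the packages or thresholds used, so the threshold below is an assumption.

```python
# Sketch: pick key frames from a video by frame differencing.
# The difference threshold is an assumption, not the paper's setting.
import cv2

def key_frames(path, thresh=30.0):
    cap = cv2.VideoCapture(path)
    keys, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is None or cv2.absdiff(gray, prev).mean() > thresh:
            keys.append(frame)      # large change -> candidate key frame
        prev = gray
    cap.release()
    return keys  # key frames go on to expression/action/speech analysis
```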

Quantified Lockscreen: Integration of Personalized Facial Expression Detection and Mobile Lockscreen application for Emotion Mining and Quantified Self (Quantified Lockscreen: 감정 마이닝과 자기정량화를 위한 개인화된 표정인식 및 모바일 잠금화면 통합 어플리케이션)

  • Kim, Sung Sil; Park, Junsoo; Woo, Woontack
    • Journal of KIISE / v.42 no.11 / pp.1459-1466 / 2015
  • The lockscreen is one of the interfaces smartphone users encounter most frequently. Although users perform unlocking actions every day, lockscreens offer no benefits apart from security and authentication. In this paper, we replace the traditional lockscreen with an application that analyzes facial expressions in order to collect facial expression data and provide real-time feedback to users. To evaluate this concept, we have implemented the Quantified Lockscreen application, which supports the following contributions of this paper: 1) an unobtrusive interface for collecting facial expression data and evaluating emotional patterns, 2) improved accuracy of facial expression detection through a personalized machine learning process, and 3) enhanced validity of emotion data through a bidirectional, multi-channel, and multi-input methodology.
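
The personalization step might be approximated as below, incrementally updating a generic expression classifier with the user's own labeled frames; sklearn's partial_fit stands in for whatever learning process the authors actually used, which the abstract does not name.

```python
# Sketch: personalize a generic expression classifier with one user's
# own labeled frames. sklearn's partial_fit stands in for the paper's
# unnamed learning process; features/labels here are placeholders.
import numpy as np
from sklearn.linear_model import SGDClassifier

EMOTIONS = np.arange(5)  # placeholder label set

clf = SGDClassifier(loss="log_loss")
# Warm start on generic training data (placeholder random features)
X_generic = np.random.rand(200, 64)
y_generic = np.random.choice(EMOTIONS, 200)
clf.partial_fit(X_generic, y_generic, classes=EMOTIONS)

def personalize(user_features, user_labels):
    """Incrementally refine the model with frames from one user."""
    clf.partial_fit(user_features, user_labels)

# After each unlock: personalize(frame_features, confirmed_labels)
```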

Development of Driver's Emotion and Attention Recognition System using Multi-modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 운전자의 감정 및 주의력 인식 기술 개발)

  • Han, Cheol-Hun; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.6 / pp.754-761 / 2008
  • As the automobile industry and its technologies develop, drivers tend to be more concerned with service features than with mechanical ones. For this reason, interest in recognizing human knowledge and emotion, in order to create a safe and convenient driving environment, keeps growing. Such recognition belongs to emotion engineering, a field studied since the late 1980s to provide people with human-friendly services. Emotion engineering analyzes people's emotions through their faces, voices, and gestures, so applying it to automobiles lets us supply drivers with services suited to each driver's situation and help them drive safely. Furthermore, by recognizing a driver's gestures we can prevent accidents caused by careless driving or by dozing off at the wheel. The purpose of this paper is to develop a system that recognizes the state of a driver's emotion and attention for safe driving. First, we detect signals of the driver's emotion, sleepiness, and attention using bio-motion signals and build several databases. By analyzing these databases, we identify distinctive features of drivers' emotion, sleepiness, and attention, and fuse the results with a multi-modal method, making it possible to develop the system.
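
A crude sketch of multi-modal fusion in the spirit described: per-modality driver-state scores combined by a weighted vote. The modalities, states, and weights are assumptions; the abstract does not specify the fusion rule.

```python
# Sketch: fuse per-modality driver-state estimates with a weighted vote.
# Modalities, states, and weights are assumptions; the paper's fusion
# rule is not specified in the abstract.
def fuse(scores_by_modality, weights):
    """scores_by_modality: {modality: {state: score}}."""
    fused = {}
    for modality, scores in scores_by_modality.items():
        w = weights.get(modality, 1.0)
        for state, s in scores.items():
            fused[state] = fused.get(state, 0.0) + w * s
    return max(fused, key=fused.get)

state = fuse(
    {"face":    {"drowsy": 0.7, "alert": 0.3},
     "bio":     {"drowsy": 0.6, "alert": 0.4},
     "gesture": {"drowsy": 0.2, "alert": 0.8}},
    weights={"face": 0.5, "bio": 0.3, "gesture": 0.2})
print(state)  # -> 'drowsy' under these illustrative scores
```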

Stress Detection System for Emotional Labor Based On Deep Learning Facial Expression Recognition (감정노동자를 위한 딥러닝 기반의 스트레스 감지시스템의 설계)

  • Og, Yu-Seon; Cho, Woo-hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.613-617 / 2021
  • With the growth of the service industry, the stress suffered by emotional labor workers has emerged as a social problem, and the so-called Emotional Labor Protection Act was implemented in 2018. However, the lack of substantial protection systems for emotional workers underscores the need for a digital stress management system. In this paper, we therefore propose a stress detection system for customer service representatives based on deep learning facial expression recognition. The system consists of a real-time face detection module, an emotion classification FER module deep-trained on big data that includes Korean emotion images, and a monitoring module that visualizes only stress levels. The system is designed to monitor stress and prevent mental illness in emotional workers.
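
The monitoring module's job of turning frame-by-frame emotion probabilities into a stress level to visualize might look roughly like this; the choice of "negative" emotions and the smoothing factor are assumptions for the sketch.

```python
# Sketch: smooth per-frame FER outputs into a single stress level.
# The negative-emotion set and smoothing factor are assumptions.
NEGATIVE = {"anger", "sadness", "fear", "disgust"}

class StressMonitor:
    def __init__(self, alpha=0.05):
        self.alpha = alpha      # smoothing factor (EMA)
        self.level = 0.0        # stress level in [0, 1]

    def update(self, emotion_probs):
        """emotion_probs: dict emotion -> probability for one frame."""
        negative = sum(p for e, p in emotion_probs.items() if e in NEGATIVE)
        self.level += self.alpha * (negative - self.level)
        return self.level       # the monitor visualizes only this value

# monitor = StressMonitor()
# level = monitor.update({"anger": 0.6, "happiness": 0.1, "neutral": 0.3})
```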

Risk Situation Recognition Using Facial Expression Recognition of Fear and Surprise Expression (공포와 놀람 표정인식을 이용한 위험상황 인지)

  • Kwak, Nae-Jong; Song, Teuk Seob
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.3 / pp.523-528 / 2015
  • This paper proposes an algorithm for recognizing risk situations from facial expressions. Among the various human emotional expressions, the proposed method recognizes the surprise and fear expressions as indicators of a risk situation. It first extracts the facial region from the input image and detects the eye and lip regions within the extracted face. The method then applies uniform LBP to each region, discriminates the facial expression, and recognizes the risk situation. The proposed method is evaluated on images from the Cohn-Kanade database, which contains six basic human facial expressions: smile, sadness, surprise, anger, disgust, and fear. The method produces good facial expression results and discriminates risk situations well.
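
As a rough illustration of the uniform-LBP step (region crops and histogram settings below are assumptions, not the paper's):

```python
# Sketch: uniform LBP histograms for eye/lip regions, the kind of
# descriptor an expression discriminator can consume.
# Region crops and (P, R) settings are assumptions, not the paper's.
import numpy as np
from skimage.feature import local_binary_pattern

def uniform_lbp_hist(region, P=8, R=1):
    """Histogram of uniform LBP codes for one grayscale face region."""
    lbp = local_binary_pattern(region, P, R, method="uniform")
    # the 'uniform' method yields P + 2 distinct codes
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# eye_region, lip_region: grayscale crops located by a face detector
# descriptor = np.concatenate([uniform_lbp_hist(eye_region),
#                              uniform_lbp_hist(lip_region)])
```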