• Title/Summary/Keyword: face expression recognition


Person-Independent Facial Expression Recognition with Histograms of Prominent Edge Directions

  • Makhmudkhujaev, Farkhod;Iqbal, Md Tauhid Bin;Arefin, Md Rifat;Ryu, Byungyong;Chae, Oksam
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.12
    • /
    • pp.6000-6017
    • /
    • 2018
  • This paper presents a new descriptor, named Histograms of Prominent Edge Directions (HPED), for the recognition of facial expressions in a person-independent environment. In this paper, we raise the issue of sampling error in generating the code-histogram from spatial regions of the face image, as observed in the existing descriptors. HPED describes facial appearance changes based on the statistical distribution of the top two prominent edge directions (i.e., primary and secondary direction) captured over small spatial regions of the face. Compared to existing descriptors, HPED uses a smaller number of code-bins to describe the spatial regions, which helps avoid sampling error despite having fewer samples while preserving the valuable spatial information. In contrast to the existing Histogram of Oriented Gradients (HOG) that uses the histogram of the primary edge direction (i.e., gradient orientation) only, we additionally consider the histogram of the secondary edge direction, which provides more meaningful shape information related to the local texture. Experiments on popular facial expression datasets demonstrate the superior performance of the proposed HPED against existing descriptors in a person-independent environment.
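The HPED construction described above can be sketched as follows. This is an illustrative numpy reconstruction, not the authors' implementation: Kirsch-style compass masks, 8 directions, and a 4x4 spatial grid are all assumptions made for the sketch.

```python
import numpy as np

def rotate45(m):
    """Rotate the outer ring of a 3x3 mask by one position (45 degrees)."""
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    r = m.copy()
    vals = [m[i] for i in idx]
    vals = vals[-1:] + vals[:-1]          # shift the ring one step
    for i, v in zip(idx, vals):
        r[i] = v
    return r

def compass_responses(img):
    """Edge responses of a grayscale image in 8 directions (Kirsch-style)."""
    base = np.array([[5, 5, 5], [-3, 0, -3], [-3, -3, -3]], float)
    H, W = img.shape
    out = np.empty((8, H - 2, W - 2))
    m = base
    for d in range(8):
        acc = np.zeros((H - 2, W - 2))
        for i in range(3):                 # valid 3x3 convolution, unrolled
            for j in range(3):
                acc += m[i, j] * img[i:i + H - 2, j:j + W - 2]
        out[d] = acc
        m = rotate45(m)
    return out

def hped(img, grid=4, ndir=8):
    """Concatenate per-region histograms of the primary and secondary
    edge directions (the two strongest compass responses per pixel)."""
    resp = compass_responses(img.astype(float))
    order = np.argsort(np.abs(resp), axis=0)
    primary, secondary = order[-1], order[-2]   # top-2 directions per pixel
    H, W = primary.shape
    feats = []
    for gy in range(grid):
        for gx in range(grid):
            ys = slice(gy * H // grid, (gy + 1) * H // grid)
            xs = slice(gx * W // grid, (gx + 1) * W // grid)
            for dirs in (primary, secondary):
                feats.append(np.bincount(dirs[ys, xs].ravel(), minlength=ndir))
    return np.concatenate(feats)
```

With these assumed settings the descriptor has 4 x 4 x 2 x 8 = 256 bins, i.e. only 16 bins per region, which is the small-code-bin property the abstract credits with reducing sampling error.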

Automatic Human Emotion Recognition from Speech and Face Display - A New Approach (인간의 언어와 얼굴 표정에 통하여 자동적으로 감정 인식 시스템 새로운 접근법)

  • Luong, Dinh Dong;Lee, Young-Koo;Lee, Sung-Young
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2011.06b
    • /
    • pp.231-234
    • /
    • 2011
  • Audiovisual-based human emotion recognition can be considered a good approach for multimodal human-computer interaction. However, optimal multimodal information fusion remains a challenge. To overcome these limitations and bring robustness to the interface, we propose a framework for an automatic human emotion recognition system based on speech and face display. In this paper, we develop a new approach to model-level information fusion based on the relationship between speech and facial expression, which detects temporal segments automatically and performs multimodal information fusion.

A Study of Evaluation System for Facial Expression Recognition based on LDP (LDP 기반의 얼굴 표정 인식 평가 시스템의 설계 및 구현)

  • Lee, Tae Hwan;Cho, Young Tak;Ahn, Yong Hak;Chae, Ok Sam
    • Convergence Security Journal
    • /
    • v.14 no.7
    • /
    • pp.23-28
    • /
    • 2014
  • This study proposes the design and implementation of a facial expression recognition evaluation system. The LDP (Local Directional Pattern) feature computes edge responses in different directions at a pixel from its relationship with neighboring pixels. It must be verified that the LDP code can represent facial features correctly under various conditions. To this end, we build a facial expression recognition system that tests LDP performance quickly; the proposed evaluation system consists of six components. We evaluate the recognition rate with local micro-patterns (LDP, Gabor, LBP) in the proposed evaluation system.
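The per-pixel LDP encoding mentioned above can be sketched as follows; this is an illustrative reconstruction, not the authors' code, and it assumes the common choice of setting a bit for each of the k = 3 strongest of the 8 directional edge responses.

```python
import numpy as np

def ldp_code(responses, k=3):
    """LDP code of one pixel: set a bit for each of the k strongest
    (by absolute value) of the 8 directional edge responses."""
    responses = np.asarray(responses)
    top = np.argsort(np.abs(responses))[-k:]   # indices of the k largest
    code = 0
    for t in top:
        code |= 1 << int(t)
    return code

# e.g. ldp_code([9, 1, 8, 2, 7, 3, 0, 4]) sets bits 0, 2, 4 -> 0b00010101 = 21
```

A histogram of these 8-bit codes over spatial regions of the face then serves as the feature vector that the evaluation system compares against Gabor and LBP features.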

Realistic 3-Dimensional Expression of Human Illustrations Using Computer Graphics (컴퓨터그래픽스를 이용한 사실적인 3D 인물 일러스트레이션의 표현)

  • Kim, Hoon
    • Archives of design research
    • /
    • v.19 no.1 s.63
    • /
    • pp.79-88
    • /
    • 2006
  • A human face is a visual symbol of identity. Each person's distinct face is critical information differentiating one person from another, and it relates directly to individual identity. Looking back over human history, changes in how the face was perceived led to changes in expression and communication media, which in turn caused many changes in how the face is depicted. However, there has been no period in which people paid more attention to the face than the present. Technically, the advent of computer graphics opened a new turning point in expressing the human face. In particular, a visual image that can be produced, saved, and transferred digitally has no limitation in time and space, and its importance in communication keeps growing. Among such visual image information, digital face images are finding more and more applications. A 3D (three-dimensional) expression of a face using computer graphics can therefore be produced easily, without professional techniques, much like assembling puzzle parts composed of the shape of each part, a texture map, and so on. This study presents a method with which a general visual designer can effectively express a 3D face, by examining each production step of 3D facial expression and by visualizing a case study based on those results.


Face Expression Recognition Network for UAV and Mobile Device (UAV 및 모바일 기기를 위한 얼굴 표정 인식 네트워크)

  • Choi, Eunji;Park, Byeongjun;Yoon, Kyoungro
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2021.06a
    • /
    • pp.348-351
    • /
    • 2021
  • As the need for automation has recently grown, facial expression recognition has been actively studied in artificial intelligence and image processing. In this paper, to overcome the high-performance GPU environments and heavy computation required by existing neural networks, we propose a facial expression recognition network that applies model lightweighting techniques so that it can run on drones and mobile devices. The proposed method targets the recognition of subtle facial expressions and enlarges the receptive field over the input image to increase the representational power of the feature maps. In addition, for effective network lightweighting, we present a method to overcome the problems that arise when reducing the number of parameter operations. The proposed network therefore not only overcomes the constraints imposed by heavy computation and slow inference, but also enables real-time facial expression recognition with a neural network on UAVs (Unmanned Aerial Vehicles) and mobile devices.
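The abstract does not specify the exact lightweighting or receptive-field technique used, so the arithmetic below is only a hedged illustration of two common choices: depthwise-separable convolutions (MobileNet-style) to cut parameters, and dilated convolutions to enlarge the receptive field without adding parameters. The layer sizes are hypothetical.

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution plus 1 x 1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

def effective_kernel(k, dilation):
    """Side length of the input area a dilated k x k kernel covers;
    dilation grows the receptive field with no extra weights."""
    return k + (k - 1) * (dilation - 1)

print(conv_params(64, 128, 3))                 # 73728 weights
print(depthwise_separable_params(64, 128, 3))  # 8768 weights, roughly 8.4x fewer
print(effective_kernel(3, 2))                  # 5: a dilated 3x3 sees a 5x5 area
```

Reductions of this order are what make real-time inference plausible on UAV and mobile hardware without a GPU.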


Face Recognition Evaluation of an Illumination Property of Subspace Based Feature Extractor (부분공간 기반 특징 추출기의 조명 변인에 대한 얼굴인식 성능 분석)

  • Kim, Kwang-Soo;Boo, Deok-Hee;Ahn, Jung-Ho;Kwak, Soo-Yeong;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.7
    • /
    • pp.681-687
    • /
    • 2007
  • Face recognition has become very popular for personal information security and user identification in recent years. However, face recognition systems are hard to implement because of changes in illumination, pose, and facial expression. In this paper, considering that illumination change causes much of the variation in facial appearance, virtual image data are generated and added to D-LDA, which was selected as the most suitable feature extractor, yielding a recognition system that is less sensitive to illumination. The method considers several illumination directions and generates virtual training images that reflect both the direction and the intensity of the illumination change. Experimental results on the ORL, Yale University, and Pohang University face databases show that D-LDA is less sensitive to illumination.
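The paper's exact procedure for generating the virtual images is not given in the abstract; as a hedged sketch of the idea, the function below applies a linear brightness ramp along a chosen direction to simulate a directional light source at a given strength. Both the ramp model and the parameter names are assumptions for illustration.

```python
import numpy as np

def add_directional_light(img, direction_deg, strength):
    """Return a virtual training image with a linear brightness ramp
    along the given direction (0 degrees = brighter toward the right)."""
    H, W = img.shape
    yy, xx = np.mgrid[0:H, 0:W]
    t = np.deg2rad(direction_deg)
    # signed distance along the light direction, in [-0.5, 0.5]
    ramp = np.cos(t) * (xx / (W - 1) - 0.5) + np.sin(t) * (yy / (H - 1) - 0.5)
    return np.clip(img * (1.0 + strength * ramp), 0, 255)
```

Sweeping `direction_deg` and `strength` over a few values per training face would give the kind of illumination-augmented training set the abstract describes feeding into D-LDA.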

Emotional Interface Technologies for Service Robot (서비스 로봇을 위한 감성인터페이스 기술)

  • Yang, Hyun-Seung;Seo, Yong-Ho;Jeong, Il-Woong;Han, Tae-Woo;Rho, Dong-Hyun
    • The Journal of Korea Robotics Society
    • /
    • v.1 no.1
    • /
    • pp.58-65
    • /
    • 2006
  • An emotional interface is essential for a robot to provide proper service to its user. In this research, we developed emotional components for service robots: a neural-network-based facial expression recognizer; emotion expression technologies based on 3D graphical facial expression and joint movements; and behavior selection technology for emotion expression that takes the user's reaction into account. We used our humanoid robots, AMI and AMIET, as test-beds for the emotional interface, and studied the emotional interaction between a service robot and a user by integrating the developed technologies. Emotional interface technology can enhance the friendliness of interaction with a service robot, increase the diversity of services and the added value of robots for humans, accelerate market growth, and contribute to the popularization of robots.


Song Player by Distance Measurement from Face (얼굴에서 거리 측정에 의한 노래 플레이어)

  • Shin, Seong-Yoon;Lee, Min-Hye;Shin, Kwang-Seong;Lee, Hyun-Chang
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.667-669
    • /
    • 2022
  • In this paper, we present Face Song Player, a system that recognizes an individual's facial expression and plays music appropriate for that person. It learns information on the facial contour lines, extracts an average, and thereby acquires facial shape information. The MUCT DB was used as the training DB. For facial expression recognition, an algorithm was designed using the differences in the characteristics of each expression relative to expressionless images.


Face Detection based on Matched Filtering with Mobile Device (모바일 기기를 이용한 정합필터 기반의 얼굴 검출)

  • Yeom, Seok-Won;Lee, Dong-Su
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.15 no.3
    • /
    • pp.76-79
    • /
    • 2014
  • Face recognition is very challenging because of unexpected changes in pose, expression, and illumination. Face detection in mobile environments is additionally difficult because computational resources are very limited. This paper discusses face detection based on frequency-domain matched filtering in mobile environments. Face detection is performed by a linear or phase-only matched filter followed by sequential verification stages. Candidate window regions are selected from the peaks of the matched-filtering output. The sequential stages comprise a skin-color test and an edge-mask filtering test, which aim to remove false alarms among the selected candidate windows. The algorithms are built in the Java language on a mobile device running the Android platform. Simulation and experimental results show that real-time face detection can be performed successfully in mobile environments.
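The phase-only matched filtering step can be sketched as follows; this is a minimal numpy illustration of the standard POF formulation (the paper's Java implementation and its exact peak-selection rule are not given in the abstract).

```python
import numpy as np

def pof_correlate(scene, template):
    """Correlate a scene with a phase-only matched filter (POF) built
    from the template; correlation peaks mark candidate positions."""
    H, W = scene.shape
    T = np.fft.fft2(template, s=(H, W))       # zero-padded template spectrum
    pof = np.conj(T) / (np.abs(T) + 1e-12)    # keep phase, discard magnitude
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * pof))

def top_peaks(corr, n=5):
    """(row, col) indices of the n largest correlation values, i.e. the
    candidate windows passed on to the verification stages."""
    flat = np.argsort(corr.ravel())[-n:][::-1]
    return [tuple(int(v) for v in np.unravel_index(i, corr.shape))
            for i in flat]
```

Each candidate window returned by `top_peaks` would then go through the skin-color and edge-mask verification tests described in the abstract to discard false alarms.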

Local Similarity based Discriminant Analysis for Face Recognition

  • Xiang, Xinguang;Liu, Fan;Bi, Ye;Wang, Yanfang;Tang, Jinhui
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.11
    • /
    • pp.4502-4518
    • /
    • 2015
  • Fisher linear discriminant analysis (LDA) is one of the most popular projection techniques for feature extraction and has been widely applied to face recognition. However, it cannot be used in the single sample per person (SSPP) setting, because the intra-class variations cannot be evaluated. In this paper, we propose a novel method called local similarity based linear discriminant analysis (LS_LDA) to solve this problem. Motivated by the "divide-and-conquer" strategy, we first divide the face into local blocks, classify each local block, and then integrate all the classification results to make the final decision. To make LDA feasible for the SSPP problem, we further divide each block into overlapping patches and assume that these patches come from the same class. To improve the robustness of LS_LDA to outliers, we also propose local similarity based median discriminant analysis (LS_MDA), which uses the class median vector to estimate the class population mean in LDA modeling. Experimental results on three popular databases show that our methods not only generalize well to the SSPP problem but are also strongly robust to expression, illumination, occlusion, and time variation.
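The patch-division and fusion machinery around the LDA step can be sketched as follows. This is an illustrative reconstruction only: the patch size, stride, and the use of majority voting to integrate per-block decisions are assumptions, and the per-block LDA itself is omitted.

```python
import numpy as np
from collections import Counter

def overlapping_patches(block, psize, step):
    """Divide a local block into overlapping patches; under SSPP all
    patches of a block are treated as samples of its owner's class,
    which makes intra-class scatter estimable."""
    H, W = block.shape
    return np.array([block[r:r + psize, c:c + psize].ravel()
                     for r in range(0, H - psize + 1, step)
                     for c in range(0, W - psize + 1, step)])

def class_center(samples, use_median=True):
    """LS_MDA replaces the class mean with the class median vector,
    which is less sensitive to outlier patches."""
    return np.median(samples, axis=0) if use_median else samples.mean(axis=0)

def fuse_block_decisions(labels):
    """Integrate per-block classification results (majority vote assumed)."""
    return Counter(labels).most_common(1)[0][0]
```

For example, an 8x8 block with 4x4 patches at stride 2 yields 9 overlapping patch samples per block, and the median center is unaffected by a single grossly mislit patch where the mean would be pulled far off.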