• Title/Summary/Keyword: facial robot

Development of Facial Expression Recognition System based on Bayesian Network using FACS and AAM (FACS와 AAM을 이용한 Bayesian Network 기반 얼굴 표정 인식 시스템 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.4
    • /
    • pp.562-567
    • /
    • 2009
  • As a key mechanism of human emotional interaction, facial expression is a powerful tool in HRI (Human-Robot Interaction) and HCI (Human-Computer Interaction). By reading facial expressions, a system can produce reactions that correspond to the user's emotional state, and it can infer which services a service agent such as an intelligent robot should supply to the user. This article addresses the issue of expressive face modeling using an advanced Active Appearance Model (AAM) for facial emotion recognition. We consider the six universal emotional categories defined by Ekman. In the human face, emotions are expressed most strongly by the eyes and mouth, so recognizing a person's emotion from a facial image requires extracting feature points such as Ekman's Action Units (AUs). The AAM is one of the most commonly used methods for facial feature extraction and can be applied to construct AUs. Because the traditional AAM depends on the setting of the model's initial parameters, this paper introduces a facial emotion recognition method that combines an advanced AAM with a Bayesian network. First, we obtain reconstruction parameters for a new gray-scale image by sample-based learning, use them to reconstruct the shape and texture of the image, and compute the initial AAM parameters from the reconstructed facial model. We then reduce the distance error between the model and the target contour by adjusting the model parameters. After several iterations the model matches the facial feature outline, and the fitted parameters are used to recognize the facial emotion with a Bayesian network.
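
As a rough illustration of the final step, the sketch below trains and queries a star-shaped (naive) Bayesian network over binarized Action Unit activations, assuming the AUs have already been extracted by the fitted AAM. The AU count, the toy data, and the network structure are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]  # Ekman's six
N_AUS = 8  # hypothetical number of binarized Action Units from the fitted AAM

def fit_bn(au_data, labels, alpha=1.0):
    """Fit a star-shaped Bayesian network: P(emotion) and P(AU_i | emotion).
    au_data: (n_samples, N_AUS) binary AU activations; labels: emotion indices."""
    priors = np.zeros(len(EMOTIONS))
    cond = np.zeros((len(EMOTIONS), N_AUS))
    for e in range(len(EMOTIONS)):
        rows = au_data[labels == e]
        priors[e] = (len(rows) + alpha) / (len(au_data) + alpha * len(EMOTIONS))
        cond[e] = (rows.sum(axis=0) + alpha) / (len(rows) + 2 * alpha)  # Laplace smoothing
    return priors, cond

def infer(aus, priors, cond):
    """Posterior over emotions given one binary AU vector."""
    log_post = np.log(priors) + (np.log(cond) * aus + np.log(1 - cond) * (1 - aus)).sum(axis=1)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# Toy usage with random stand-in data
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(300, N_AUS))
y = rng.integers(0, len(EMOTIONS), size=300)
priors, cond = fit_bn(X, y)
print(EMOTIONS[int(np.argmax(infer(X[0], priors, cond)))])
```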

Facial Expression Explorer for Realistic Character Animation

  • Ko, Hee-Dong;Park, Moon-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1998.06b
    • /
    • pp.16.1-164
    • /
    • 1998
  • This paper describes the Facial Expression Explorer, a tool to search for the components of a facial expression and to map the expression onto other expressionless figures such as a robot, frog, teapot, or rabbit. In general, creating a facial expression manually is a time-consuming and laborious job, especially when the expression must personify a well-known public figure or an actor. To extract a blending ratio from facial images automatically, the Facial Expression Explorer uses a Networked Genetic Algorithm (NGA), a GA variant with fast convergence. Animators often use such blending ratios to create facial expressions through shape-blending methods. With the Facial Expression Explorer, a realistic facial expression can be modeled more efficiently.
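
A minimal sketch of the underlying idea follows: a plain genetic algorithm searching for blending weights that make a convex combination of key facial shapes match a target landmark set. The fitness function and GA operators here are generic stand-ins; the paper's networked variant (NGA) is not reproduced.

```python
import numpy as np

def blend(weights, key_shapes):
    """Shape blending: convex combination of key facial shapes (n_keys, n_points, 2)."""
    w = np.clip(weights, 0, None)
    w = w / (w.sum() + 1e-12)
    return np.tensordot(w, key_shapes, axes=1)

def ga_blend_ratio(key_shapes, target, pop=60, gens=200, mut=0.1, rng=None):
    """Search for the blending ratio minimizing landmark distance to the target."""
    rng = rng or np.random.default_rng(0)
    P = rng.random((pop, len(key_shapes)))
    for _ in range(gens):
        err = np.array([np.linalg.norm(blend(w, key_shapes) - target) for w in P])
        elite = P[np.argsort(err)[: pop // 2]]                   # selection
        kids = (elite + elite[rng.permutation(len(elite))]) / 2  # crossover (average)
        kids += rng.normal(0, mut, kids.shape)                   # mutation
        P = np.vstack([elite, kids])
    err = np.array([np.linalg.norm(blend(w, key_shapes) - target) for w in P])
    best = np.clip(P[np.argmin(err)], 0, None)
    return best / (best.sum() + 1e-12)

# Toy usage: recover a known blending ratio from synthetic key shapes
rng = np.random.default_rng(1)
keys = rng.random((4, 30, 2))
true_w = np.array([0.5, 0.2, 0.2, 0.1])
print(ga_blend_ratio(keys, blend(true_w, keys)).round(2))
```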

Artificial Intelligence for Assistance of Facial Expression Practice Using Emotion Classification (감정 분류를 이용한 표정 연습 보조 인공지능)

  • Kim, Dong-Kyu;Lee, So Hwa;Bong, Jae Hwan
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.6
    • /
    • pp.1137-1144
    • /
    • 2022
  • In this study, an artificial intelligence (AI) was developed to help users practice facial expressions for conveying emotion. The developed AI feeds multimodal inputs, consisting of sentences and facial images, into deep neural networks (DNNs) and computes the similarity between the emotion predicted from the sentence and the emotion predicted from the facial image. The user practices facial expressions for the situation given by a sentence, and the AI provides numerical feedback based on that similarity. A ResNet34 network was trained on the public FER2013 dataset to predict emotions from facial images. To predict emotions from sentences, a KoBERT model was fine-tuned by transfer learning on the public conversational speech dataset for emotion classification released by AIHub. The DNN that predicts emotions from facial images achieved 65% accuracy, which is comparable to human emotion classification ability, and the DNN that predicts emotions from sentences achieved 90% accuracy. The performance of the developed AI was evaluated through experiments in which an ordinary person participated by changing facial expressions.
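
The feedback mechanism can be sketched as follows, assuming both pretrained models expose per-emotion logits. Cosine similarity between the two predicted distributions, scaled to a 0-100 score, is an illustrative choice; the paper's exact similarity metric is not stated here.

```python
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def similarity_feedback(sentence_logits, face_logits):
    """Cosine similarity between the two predicted emotion distributions,
    scaled to a 0-100 practice score shown to the user."""
    p_text = softmax(sentence_logits)   # from the sentence model (e.g., a KoBERT head)
    p_face = softmax(face_logits)       # from the image model (e.g., a ResNet34 head)
    cos = p_text @ p_face / (np.linalg.norm(p_text) * np.linalg.norm(p_face))
    return 100.0 * cos

# Toy usage: both models lean toward "happiness" -> high score
text_logits = np.array([0, 0, 0, 3.0, 0, 0, 0])
face_logits = np.array([0, 0, 0, 2.5, 0, 0, 0.5])
print(round(similarity_feedback(text_logits, face_logits), 1))
```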

A Study on Non-Contact Care Robot System through Deep Learning

  • Hyun-Sik Ham;Sae Jun Ko
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.12
    • /
    • pp.33-40
    • /
    • 2023
  • As South Korea becomes a super-aged society, the demand for elderly welfare services has been steadily rising, yet the shortage of welfare personnel has emerged as a social issue. To address this challenge, there is active research on elderly care robots designed to mitigate the social isolation of the elderly and provide emergency contact capabilities in critical situations. Nonetheless, these functions require direct user contact, which is a limitation of conventional elderly care robots. In this paper, we propose a care robot system that can interact with users without direct physical contact, building on commercialized elderly care robots and cameras. We equipped the care robot with an edge device that runs facial expression recognition and action recognition models, trained and validated on publicly available data. Experimental results demonstrate high accuracy, with facial expression recognition achieving 96.5% and action recognition 90.9%, at inference times of 50 ms and 350 ms, respectively. These findings confirm that the proposed system offers efficient and accurate facial and action recognition, enabling seamless interaction even in non-contact situations.
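
A rough sketch of such a non-contact interaction loop is shown below: the fast (~50 ms) expression model runs on every frame, while the slower (~350 ms) action model is amortized over a frame stride. All model calls, the stride, and the emergency trigger are placeholders, not the authors' implementation.

```python
import time

FACE_PERIOD_S = 0.05   # ~50 ms expression inference reported in the paper
ACTION_STRIDE = 7      # run the ~350 ms action model every Nth frame (placeholder)

def run_face_model(frame):        # placeholder for the edge expression classifier
    return "neutral"

def run_action_model(frames):     # placeholder for the edge action classifier
    return "sitting"

def notify_caregiver(expression, action):   # hypothetical emergency hook
    print(f"ALERT: action={action}, expression={expression}")

class DummyCamera:                # stand-in frame source
    def read(self):
        return object()

def care_loop(camera, max_frames=30):
    window = []
    for i in range(max_frames):
        frame = camera.read()
        window = (window + [frame])[-16:]     # short clip buffer for action recognition
        expression = run_face_model(frame)    # fast path: every frame
        if i % ACTION_STRIDE == 0:            # slow path: amortized over the stride
            action = run_action_model(window)
            if action == "falling":
                notify_caregiver(expression, action)
        time.sleep(FACE_PERIOD_S)             # crude pacing stand-in

care_loop(DummyCamera())
```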

Face Detection and Recognition with Multiple Appearance Models for Mobile Robot Application

  • Lee, Taigun;Park, Sung-Kee;Kim, Munsang
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2002.10a
    • /
    • pp.100.4-100
    • /
    • 2002
  • For visual navigation, a mobile robot can use a stereo camera with a large field of view. In this paper, we propose an algorithm to detect and recognize human faces on the basis of such a camera system, using a new coarse-to-fine detection scheme. For coarse detection, roughly face-like areas are found in the entire image using dual ellipse templates. Then, detailed alignment of the facial outline and features is performed on the basis of a view-based multiple appearance model. Because it is hard to finely align facial features in this setting, the most closely resembling face image area is selected from the multiple face appearances using the most distinguishing facial features: the two eye...
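
The coarse stage might look like the sketch below, which slides a single elliptical ring template over an edge map and keeps the best-scoring candidates for the fine, appearance-based stage. The single-ring template, thresholds, and scoring are simplified assumptions standing in for the paper's dual ellipse templates.

```python
import cv2
import numpy as np

def make_ellipse_template(w=40, h=52, thickness=3):
    """Elliptical ring approximating a head outline (stand-in for dual ellipses)."""
    t = np.zeros((h, w), np.uint8)
    cv2.ellipse(t, (w // 2, h // 2), (w // 2 - 2, h // 2 - 2), 0, 0, 360, 255, thickness)
    return t

def coarse_face_candidates(gray, template, top_k=3):
    """Coarse stage: correlate the ring template with the edge map, keep best matches."""
    edges = cv2.Canny(gray, 80, 160)
    score = cv2.matchTemplate(edges, template, cv2.TM_CCORR_NORMED)
    score = np.nan_to_num(score, nan=-1.0)   # guard against zero-norm windows
    flat = np.argsort(score, axis=None)[::-1][:top_k]
    ys, xs = np.unravel_index(flat, score.shape)
    h, w = template.shape
    return [(int(x), int(y), w, h) for x, y in zip(xs, ys)]

# Usage (assumes 'face.png' exists); candidates feed the fine, appearance-based stage
gray = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
if gray is not None:
    print(coarse_face_candidates(gray, make_ellipse_template()))
```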

Hybrid Facial Representations for Emotion Recognition

  • Yun, Woo-Han;Kim, DoHyung;Park, Chankyu;Kim, Jaehong
    • ETRI Journal
    • /
    • v.35 no.6
    • /
    • pp.1021-1028
    • /
    • 2013
  • Automatic facial expression recognition is a widely studied problem in computer vision and human-robot interaction. There has been a range of studies on facial descriptors for facial expression recognition, and some prominent descriptors were presented in the first facial expression recognition and analysis challenge (FERA2011). In that competition, the Local Gabor Binary Pattern Histogram Sequence descriptor showed the most powerful description capability. In this paper, we introduce hybrid facial representations for facial expression recognition that have more powerful description capability with lower dimensionality. Our descriptors consist of a block-based descriptor, which captures micro-orientation and micro-geometric structure information, and a pixel-based descriptor, which captures texture information. We validate our descriptors on two public databases, and the results show that they perform well at a relatively low dimensionality.
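
A hybrid descriptor in this spirit can be sketched as below: a block-based histogram of gradient orientations (micro-orientation) concatenated with a whole-face uniform-LBP histogram (pixel-level texture). The specific component descriptors, grid size, and bin counts are illustrative, not the paper's exact design.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def block_orientation_descriptor(img, grid=4, bins=8):
    """Block-based part: per-block histogram of gradient orientations."""
    gy, gx = np.gradient(img.astype(float))
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation in [0, pi)
    H, W = img.shape
    feats = []
    for by in range(grid):
        for bx in range(grid):
            blk = ang[by * H // grid:(by + 1) * H // grid,
                      bx * W // grid:(bx + 1) * W // grid]
            hist, _ = np.histogram(blk, bins=bins, range=(0, np.pi))
            feats.append(hist / (hist.sum() + 1e-9))
    return np.concatenate(feats)

def pixel_texture_descriptor(img, P=8, R=1):
    """Pixel-based part: uniform LBP histogram over the whole face (texture)."""
    lbp = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2))
    return hist / (hist.sum() + 1e-9)

def hybrid_descriptor(img):
    return np.concatenate([block_orientation_descriptor(img), pixel_texture_descriptor(img)])

# Toy usage on a random "face" patch: 4x4 blocks x 8 bins + 10 LBP bins = 138 dims
print(hybrid_descriptor(np.random.default_rng(0).random((64, 64))).shape)
```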

Improving the Processing Speed and Robustness of Face Detection for a Psychological Robot Application (심리로봇적용을 위한 얼굴 영역 처리 속도 향상 및 강인한 얼굴 검출 방법)

  • Ryu, Jeong Tak;Yang, Jeen Mo;Choi, Young Sook;Park, Se Hyun
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.20 no.2
    • /
    • pp.57-63
    • /
    • 2015
  • Compared to other emotion recognition technologies, facial expression recognition has the merits of being contact-free, non-coercive, and convenient. To be applied in a psychological robot, the vision pipeline must quickly and accurately extract the face region as a preliminary step to facial expression recognition. In this paper, we remove the background from the input image using YCbCr skin-color segmentation and use Haar-like features for robust face detection. Removing the background from the input image improved processing speed and yielded robust face detection.
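
A minimal OpenCV sketch of this pipeline follows: suppress non-skin pixels in YCrCb space, then run a Haar-cascade face detector on the masked image. The skin thresholds are commonly cited values, and the stock OpenCV cascade stands in for the paper's detector; both are assumptions.

```python
import cv2
import numpy as np

# Commonly cited YCrCb skin range; the paper's exact thresholds are not given here
SKIN_LO = np.array([0, 133, 77], np.uint8)
SKIN_HI = np.array([255, 173, 127], np.uint8)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces_skin_first(bgr):
    """Suppress non-skin background in YCrCb, then run the Haar-like detector.
    Restricting the search to skin regions is what speeds up the face stage."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, SKIN_LO, SKIN_HI)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    skin_only = cv2.bitwise_and(bgr, bgr, mask=mask)
    gray = cv2.cvtColor(skin_only, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

img = cv2.imread("input.jpg")   # assumed input image path
if img is not None:
    print(detect_faces_skin_first(img))
```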

Energy-Efficient DNN Processor on Embedded Systems for Spontaneous Human-Robot Interaction

  • Kim, Changhyeon;Yoo, Hoi-Jun
    • Journal of Semiconductor Engineering
    • /
    • v.2 no.2
    • /
    • pp.130-135
    • /
    • 2021
  • Recently, deep neural networks (DNNs) have been actively used for action control so that an autonomous system such as a robot can perform human-like behaviors and operations. Unlike recognition tasks, action control is essentially real-time, and remote learning on a server reached through a network is too slow; new learning techniques, such as reinforcement learning (RL), are needed to determine and select the correct robot behavior locally. In this paper, we propose an energy-efficient DNN processor with a LUT-based processing engine and a near-zero skipper. A CNN-based facial emotion recognition model and an RNN-based emotional dialogue generation model are integrated for a natural HRI system and tested on the proposed processor. It supports variable weight precision from 1 b to 16 b, with 57.6% and 28.5% lower energy consumption than conventional MAC arithmetic units at 1 b and 16 b weight precision, respectively. The near-zero skipper additionally eliminates 36% of MAC operations and reduces energy consumption by 28% on facial emotion recognition tasks. Implemented in a 65 nm CMOS process, the proposed processor occupies a 1784×1784 µm² area and dissipates 0.28 mW and 34.4 mW at 1 fps and 30 fps facial emotion recognition, respectively.
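
The near-zero skipping idea can be modeled in software as below: multiplications whose activation magnitude falls under a threshold are simply skipped, and the skipped fraction approximates the MAC-operation saving. This only illustrates the operation-count effect; the actual design is a hardware LUT-based processing engine, which this sketch does not model.

```python
import numpy as np

def near_zero_skip_mac(acts, weights, eps=1e-2):
    """Software model of the near-zero skipper: multiplications whose activation
    magnitude is below eps are skipped (treated as zero), saving MAC energy."""
    keep = np.abs(acts) > eps
    result = float(acts[keep] @ weights[keep])
    skipped_ratio = 1.0 - keep.mean()
    return result, skipped_ratio

# Toy usage: sparse, near-zero activations, typical after ReLU layers
rng = np.random.default_rng(0)
acts = rng.normal(0, 0.05, 1024) * (rng.random(1024) < 0.5)
w = rng.normal(0, 1, 1024)
out, ratio = near_zero_skip_mac(acts, w)
print(f"skipped {ratio:.0%} of MACs, output {out:.3f}")
```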

A Portable Mediate Interface 'Handybot' for the Rich Human-Robot Interaction (인간과 로봇의 다양한 상호작용을 위한 휴대 매개인터페이스 ‘핸디밧’)

  • Hwang, Jung-Hoon;Kwon, Dong-Soo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.8
    • /
    • pp.735-742
    • /
    • 2007
  • The importance of a robot's interaction capability increases as robot applications extend into humans' daily lives. In this paper, a portable mediate interface, Handybot, is developed with various interaction channels for use with an intelligent home service robot. The Handybot has a task-oriented channel based on an icon language as well as a verbal interface. It also has an emotional interaction channel that recognizes the user's emotional state from facial expression and speech, transmits that state to the robot, and expresses the robot's emotional state to the user. It is expected that the Handybot will reduce spatial problems that may exist in human-robot interactions, propose a new interaction method, and help create rich and continuous interactions between human users and robots.
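
The mediating role can be sketched as a simple message relay, as below. The message fields, class names, and the fake robot link are illustrative assumptions about the architecture, not the Handybot's actual protocol.

```python
from dataclasses import dataclass, field
import time

@dataclass
class MediateMessage:
    """One message relayed by the handheld mediate interface (fields are illustrative)."""
    channel: str                      # "icon", "speech", or "emotion"
    payload: str                      # icon-language token, utterance text, or emotion label
    timestamp: float = field(default_factory=time.time)

class Handybot:
    """Sketch of the mediator: forwards user input to the robot and
    renders the robot's emotional state back to the user."""
    def __init__(self, robot_link):
        self.robot_link = robot_link

    def send_user_input(self, channel: str, payload: str) -> None:
        self.robot_link.send(MediateMessage(channel, payload))

    def show_robot_emotion(self, emotion: str) -> None:
        print(f"[Handybot display] robot feels: {emotion}")

class FakeRobotLink:                  # stand-in for the wireless link to the service robot
    def send(self, msg: MediateMessage) -> None:
        print(f"[robot] received {msg.channel}: {msg.payload}")

hb = Handybot(FakeRobotLink())
hb.send_user_input("icon", "FETCH cup")   # icon-language task command
hb.send_user_input("emotion", "happy")    # user's recognized emotional state
hb.show_robot_emotion("glad")
```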

Study on Facial Expression Factors as Emotional Interaction Design Factors (감성적 인터랙션 디자인 요소로서의 표정 요소에 관한 연구)

  • Heo, Seong-Cheol
    • Science of Emotion and Sensibility
    • /
    • v.17 no.4
    • /
    • pp.61-70
    • /
    • 2014
  • Verbal communication has limits in the interaction between robots and humans, so nonverbal communication is required for smoother and more efficient communication and even for the robot's emotional expression. This study derived seven items of nonverbal information from shopping behavior with a robot designed to support shopping, selected facial expression as the channel for the derived nonverbal information, and coded the face components through 2D analysis. It then analyzed how significantly the nonverbal information was conveyed, using 3D animation that combines the codes of the face components. The analysis showed that the proposed expression method conveyed the nonverbal information with a high level of significance, suggesting that this study can serve as baseline data for research on nonverbal information. However, the case of 'embarrassment' showed limits in mapping the coded face components to shapes and requires more systematic study.
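
The coding-and-combination step can be illustrated with a toy mapping from emotions to coded face components, as below; the component names and codes are invented placeholders, since the study's actual codes are not reproduced here.

```python
# Illustrative coding of face components; the study's actual codes are not public here.
FACE_CODES = {
    "joy":      {"brow": "raised",   "eye": "curved", "mouth": "open_smile"},
    "surprise": {"brow": "raised",   "eye": "wide",   "mouth": "open_o"},
    "sadness":  {"brow": "inner_up", "eye": "droop",  "mouth": "down_corners"},
}

def compose_expression(emotion: str) -> str:
    """Combine the coded face components into one expression spec for a 3D rig."""
    parts = FACE_CODES[emotion]
    return "+".join(f"{comp}:{code}" for comp, code in sorted(parts.items()))

print(compose_expression("surprise"))   # -> brow:raised+eye:wide+mouth:open_o
```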