• Title/Summary/Keyword: robot facial expression (로봇표정)

54 search results

Deep Reinforcement Learning-Based Cooperative Robot Using Facial Feedback (표정 피드백을 이용한 딥강화학습 기반 협력로봇 개발)

  • Jeon, Haein; Kang, Jeonghun; Kang, Bo-Yeong
    • The Journal of Korea Robotics Society, v.17 no.3, pp.264-272, 2022
  • Human-robot cooperative tasks are increasingly required in daily life with the development of robotics and artificial intelligence technology. Interactive reinforcement learning strategies let a robot learn a task by receiving feedback from an experienced human trainer during training. However, most previous studies on interactive reinforcement learning have required an extra feedback input device, such as a mouse or keyboard, in addition to the robot itself, and the scenarios in which a robot can interactively learn a task with a human have been limited to virtual environments. To address these limitations, this paper studies training strategies for a robot that learns a table-balancing task interactively through deep reinforcement learning with human facial expression feedback. In the proposed system, the robot learns the cooperative table-balancing task using a Deep Q-Network (DQN), a deep reinforcement learning technique, with human facial emotion expression as feedback. In experiments, the proposed system achieved an optimal-policy convergence rate of up to 83.3% in training and a task success rate of up to 91.6% in testing, an improvement over the model without facial expression feedback.
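The feedback-shaping idea in this abstract can be sketched as follows. This is a minimal illustration only, not the authors' code: it assumes a facial-emotion classifier already outputs a label, substitutes a tabular Q-learning backup for the paper's DQN for brevity, and all emotion labels, reward weights, and state/action names are invented.

```python
# Hypothetical mapping from a recognized facial emotion to a scalar reward.
FACIAL_FEEDBACK = {"happy": +1.0, "neutral": 0.0, "angry": -1.0}

def shaped_reward(env_reward, emotion, weight=0.5):
    """Combine the environment's task reward with human facial feedback."""
    return env_reward + weight * FACIAL_FEEDBACK.get(emotion, 0.0)

def q_update(q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One Q-learning backup; q maps (state, action) pairs to values."""
    best_next = max(q.get((s_next, b), 0.0) for b in actions)
    old = q.get((s, a), 0.0)
    q[(s, a)] = old + alpha * (r + gamma * best_next - old)

# One interaction step: the trainer smiles, so the action taken in the
# "tilted" table state is reinforced even though the task reward is zero.
q = {}
r = shaped_reward(0.0, "happy")
q_update(q, "tilted", "lower_left", r, "level", ["lower_left", "raise_left"])
```

The same shaped reward would feed the replay buffer of a DQN; only the function approximator differs.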

Analysis of Customer Evaluations on the Ethical Response to Service Failures of Foodtech Serving Robots (푸드테크 서빙로봇의 서비스 실패에 대한 직업윤리적 대응에 대한 고객 평가 분석)

  • Han, Jeonghye; Choi, Younglim; Jeong, Sanghyun; Kim, Jong-Wook
    • Journal of Service Research and Studies, v.14 no.1, pp.1-12, 2024
  • As the service robot market grows within the food technology industry, the quality of robot service, which affects consumer behavioral intentions in the restaurant industry, has become important. Serving robots, now common in restaurants, reduce employee workload by handling ordering and delivery, but because they do not respond to service failures, they increase both customer dissatisfaction and employee workload. To improve service quality beyond the simple functions of taking and serving orders, serving robots also need the capabilities human servers show after a service failure: recovery effort, fairness, empathy, responsiveness, and assurance. Accordingly, we classified restaurant serving-service failures into two types, internal and external, and developed a serving robot with a professional ethics module that responds with a professional ethical attitude when serving fails. The serving robot's expressions and actions were developed by adding a failure mode, reflecting failure-recovery effort and empathy, to the normal service mode. We then recruited college students and tested whether the robot's responses to the two types of service failure significantly affected their evaluation of the robot. Participants reported being more uncomfortable with service failures caused by other customers' mistakes than with those caused by robot mistakes, and found the serving robot's professional ethical empathy and response appropriate. In addition, unlike the robot's likability, evaluations of the robot's safety differed significantly depending on whether the professional ethical empathy module was installed. A professional ethical empathy response module for natural service-failure recovery using generative artificial intelligence should be developed and deployed, and the domestic serving robot industry and market are expected to grow more rapidly if a Korean serving robot certification system is introduced.

A Study on Emotion Recognition Systems based on the Probabilistic Relational Model Between Facial Expressions and Physiological Responses (생리적 내재반응 및 얼굴표정 간 확률 관계 모델 기반의 감정인식 시스템에 관한 연구)

  • Ko, Kwang-Eun; Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems, v.19 no.6, pp.513-519, 2013
  • Current vision-based approaches to emotion recognition, such as facial expression analysis, have many technical limitations in real circumstances and are not suitable for applications that rely on them alone in practical environments. In this paper, we propose an approach to emotion recognition that combines extrinsic representations and intrinsic activities among the natural responses of humans given specific stimuli for inducing emotional states. The intrinsic activities can compensate for the uncertainty of the extrinsic representations of emotional states. The combination is performed using PRMs (Probabilistic Relational Models), an extended version of Bayesian networks, learned by greedy-search and expectation-maximization algorithms. Facial-expression-based extrinsic emotion features and physiological-signal-based intrinsic emotion features from previous research are combined into the attributes of the PRMs in the emotion recognition domain. Maximum likelihood estimation with the given dependency structure and estimated parameter set is used to classify the label of the target emotional state.
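The final classification step described above can be illustrated with a heavily simplified stand-in: choosing the emotion label that maximizes the likelihood of the observed facial (extrinsic) and physiological (intrinsic) features. This sketch assumes naive conditional independence instead of a learned PRM structure, and every label, feature name, and probability here is invented for illustration.

```python
import math

# Invented conditional probabilities P(feature | emotion); a real PRM would
# learn both the dependency structure and these parameters from data.
LIKELIHOODS = {
    "joy":   {"smile": 0.8, "frown": 0.1, "low_hr": 0.3, "high_hr": 0.7},
    "anger": {"smile": 0.1, "frown": 0.8, "low_hr": 0.2, "high_hr": 0.8},
    "calm":  {"smile": 0.4, "frown": 0.2, "low_hr": 0.9, "high_hr": 0.1},
}

def classify(observations):
    """Return the label with maximum log-likelihood for the observations."""
    def loglik(label):
        return sum(math.log(LIKELIHOODS[label][o]) for o in observations)
    return max(LIKELIHOODS, key=loglik)
```

A smiling face alone is ambiguous; adding the physiological channel (heart rate here) is what lets the intrinsic signal compensate for extrinsic uncertainty, as the abstract argues.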

Intelligent Countenance Robot, Humanoid ICHR (지능형 표정로봇, 휴머노이드 ICHR)

  • Byun, Sang-Zoon
    • Proceedings of the KIEE Conference, 2006.10b, pp.175-180, 2006
  • In this paper, we develop a humanoid robot that can express emotion in response to human actions. To interact with humans, the robot has several abilities for expressing emotion: verbal communication through voice/image recognition, motion tracking, and facial expression using fourteen servo motors. The proposed humanoid robot system consists of a control board designed with an AVR90S8535 to control the servo motors, a framework equipped with the fourteen servo motors and two CCD cameras, and a personal computer to monitor its operation. The results of this research illustrate that the intelligent emotional humanoid robot is intuitive and friendly, so humans can interact with it easily.


Implementation of Facial Robot 3D Simulator For Dynamic Facial Expression (동적 표정 구현이 가능한 얼굴 로봇 3D 시뮬레이터 구현)

  • Kang, Byung-Kon; Kang, Hyo-Seok; Kim, Eun-Tai; Park, Mig-Non
    • Proceedings of the IEEK Conference, 2008.06a, pp.1121-1122, 2008
  • Using FACS (Facial Action Coding System) and linear interpolation, a 3D facial robot simulator is developed in this paper. The simulator is based on a real facial robot and synchronizes with it through a unified protocol. Using the AUs (Action Units) of each of the five basic expressions together with linear interpolation produces a wider variety of dynamic facial expressions.
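The AU-based blending described here reduces to interpolating between vectors of Action Unit intensities. A minimal sketch, with illustrative AU sets and values that are not the paper's actual mapping:

```python
# Hypothetical AU intensity poses (0.0 = relaxed, 1.0 = fully activated).
NEUTRAL = {"AU1": 0.0, "AU4": 0.0, "AU12": 0.0}
HAPPY   = {"AU1": 0.2, "AU4": 0.0, "AU12": 1.0}  # AU12: lip-corner puller

def interpolate(src, dst, t):
    """Linearly blend two AU poses; t in [0, 1] moves from src to dst."""
    return {au: (1 - t) * src[au] + t * dst[au] for au in src}

# Sampling several t values yields the in-between frames of a dynamic
# expression, which the simulator can forward to the robot's actuators.
frames = [interpolate(NEUTRAL, HAPPY, t / 10) for t in range(11)]
```

Chaining interpolations between different basic expressions gives transitions (e.g. happy to sad) without authoring every intermediate pose.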


A Preliminary Study for Emotional Expression of Software Robot -Development of Hangul Processing Technique for Inference of Emotional Words- (소프트웨어 로봇의 감성 표현을 위한 기반연구 - 감성어 추론을 위한 한글 처리 기술 개발 -)

  • Song, Bok-Hee; Yun, Han-Kyung
    • Proceedings of the Korea Contents Association Conference, 2012.05a, pp.3-4, 2012
  • The development of user-centered man-machine interface technology has advanced considerably through the combination of user-interface technology and ergonomics, and continues to progress. Information today is delivered through sound, text, and images, but research on information delivery from an emotional perspective remains limited. In particular, emotion research on conveying voice and facial expression in Human Computer Interaction is still at an early stage: emoticons and flashcons are used to convey emotion, but remain unnatural and mechanical. As foundational research enabling a computer or application software to provide human-friendly interaction through its own virtual agent (Software Robot, Sobot), this study develops techniques for extracting, classifying, and processing emotional words in Hangul, so that artificial emotion can be infused into the information a computer delivers and users' emotional satisfaction can be improved.


A Study on The Expression of Digital Eye Contents for Emotional Communication (감성 커뮤니케이션을 위한 디지털 눈 콘텐츠 표현 연구)

  • Lim, Yoon-Ah; Lee, Eun-Ah; Kwon, Jieun
    • Journal of Digital Convergence, v.15 no.12, pp.563-571, 2017
  • The purpose of this paper is to establish the emotional expression factors of digital eye contents that can be applied to digital environments. Emotions applicable to smart dolls are derived, and we suggest guidelines for the expressive factors of each emotion. First, we investigate the concepts and characteristics of emotional expression shown in eyes, drawing on publications, animation, and live video. Second, we identify six emotions (Happy, Angry, Sad, Relaxed, Sexy, Pure) and extract their emotional expression factors. Third, we analyze the extracted factors to establish guidelines for the emotional expression of digital eyes. As a result, this study finds that the factors that distinguish and represent each emotion fall into four categories: eye shape, gaze, iris size, and effect. These can be used to enhance emotional communication in digital contents such as animations, robots, and smart toys.

Sliding Active Camera-based Face Pose Compensation for Enhanced Face Recognition (얼굴 인식률 개선을 위한 선형이동 능동카메라 시스템기반 얼굴포즈 보정 기술)

  • 장승호; 김영욱; 박창우; 박장한; 남궁재찬; 백준기
    • Journal of the Institute of Electronics Engineers of Korea SP, v.41 no.6, pp.155-164, 2004
  • Recently, there have been remarkable developments in intelligent robot systems. Notable features of an intelligent robot are that it can track a user and perform face recognition, which is vital for many surveillance-based systems. The advantage of face recognition over other biometrics is that the coerciveness and contact usually involved in acquiring biometric characteristics are absent. However, the accuracy of face recognition is lower than that of other biometrics, owing to the dimensionality reduction at the image acquisition step and the various changes associated with face pose and background. Many factors deteriorate face recognition performance, such as the distance from the camera to the face, changes in lighting, pose change, and changes of facial expression. In this paper, we implement a new sliding active camera system to prevent the pose variations that degrade face recognition performance, and we use the acquired frontal face images with PCA and HMM methods to improve recognition. The proposed face recognition algorithm can be used for intelligent surveillance systems and mobile robot systems.

The Development of Robot and Augmented Reality Based Contents and Instructional Model Supporting Childrens' Dramatic Play (로봇과 증강현실 기반의 유아 극놀이 콘텐츠 및 교수.학습 모형 개발)

  • Jo, Miheon; Han, Jeonghye; Hyun, Eunja
    • Journal of The Korean Association of Information Education, v.17 no.4, pp.421-432, 2013
  • The purpose of this study is to develop contents and an instructional model that support children's dramatic play by integrating robot and augmented reality technology. To support dramatic play, the robot shows various facial expressions and actions; serves as a narrator and sound manager; supports simultaneous interaction by using its camera to recognize markers and children's motions; and records children's activities as photos and videos that can be used in further activities. The robot also uses a projector so that children can directly interact with the video object. Augmented reality, in turn, offers a variety of character changes and props, and allows various background and foreground effects. It enables natural interaction between the contents and children through a realistic interface, and provides opportunities for interaction between actors and audience. Along with these, augmented reality provides an experience-based learning environment that induces sensory immersion by allowing children to manipulate or choose the learning situation and experience the results. The instructional model supporting dramatic play consists of four stages: teachers' preparation; introducing and understanding a story; action planning and play; and evaluation and wrap-up. Detailed activities to decide on or carry out are suggested at each stage.

The GEO-Localization of a Mobile Mapping System (모바일 매핑 시스템의 GEO 로컬라이제이션)

  • Chon, Jae-Choon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.27 no.5, pp.555-563, 2009
  • When a mobile mapping system or a robot is equipped with only a GPS (Global Positioning System) and a multi-stereo camera system, a transformation from the local camera coordinate system to the GPS coordinate system is required to link the camera poses and 3D data produced by V-SLAM (Vision-based Simultaneous Localization And Mapping) to GIS data, or to remove the accumulated error of those camera poses. To satisfy these requirements, this paper proposes a novel method that calculates the camera rotation in the GPS coordinate system using three pairs of camera positions obtained from GPS and V-SLAM, respectively. The proposed method consists of four simple steps: 1) calculate a quaternion that makes the normal vectors of the two planes, each defined by its three camera positions, parallel; 2) transform the three V-SLAM camera positions with the calculated quaternion; 3) calculate an additional quaternion that maps the second or third of the transformed positions onto the corresponding GPS camera position; and 4) determine the final quaternion by multiplying the two quaternions. The final quaternion directly transforms the local camera coordinate system to the GPS coordinate system. Additionally, an update of the 3D data of captured objects based on the view angles from object to cameras is proposed. The proposed method is demonstrated through a simulation and an experiment.
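Step 1 of the method above amounts to finding the shortest-arc quaternion that rotates one plane normal onto the other. A sketch of that single step, written with plain tuples so it is self-contained (the paper's actual implementation is not shown; in practice a linear-algebra library would be used):

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def shortest_arc_quaternion(n1, n2):
    """Unit quaternion (w, x, y, z) rotating unit vector n1 onto n2.
    Assumes n1 and n2 are not exactly opposite (axis would be undefined)."""
    n1, n2 = normalize(n1), normalize(n2)
    axis = cross(n1, n2)
    w = 1.0 + dot(n1, n2)
    return normalize((w,) + axis)

def rotate(q, v):
    """Rotate vector v by quaternion q: v + 2w(u x v) + 2u x (u x v)."""
    w, x, y, z = q
    u = (x, y, z)
    t = tuple(2 * c for c in cross(u, v))
    ut = cross(u, t)
    return tuple(v[i] + w * t[i] + ut[i] for i in range(3))

# Example: the plane normals happen to be the x- and y-axes; the resulting
# quaternion is the 90-degree rotation about z.
q = shortest_arc_quaternion((1, 0, 0), (0, 1, 0))
```

The remaining in-plane rotation of step 3 is another such quaternion, and step 4's final result is simply the quaternion product of the two.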