• Title/Summary/Keyword: Human-Robot Interaction (HRI)


Generation of Robot Facial Gestures based on Facial Actions and Animation Principles (Facial Actions 과 애니메이션 원리에 기반한 로봇의 얼굴 제스처 생성)

  • Park, Jeong Woo;Kim, Woo Hyun;Lee, Won Hyong;Lee, Hui Sung;Chung, Myung Jin
    • Journal of Institute of Control, Robotics and Systems / v.20 no.5 / pp.495-502 / 2014
  • This paper proposes a method to generate diverse robot facial expressions and facial gestures in order to support long-term HRI. First, nine basic dynamics for diverse robot facial expressions are determined based on the dynamics of human facial expressions and the principles of animation, so that even identical emotions can be expressed in varied ways. In the second stage, facial actions are added to express facial gestures, such as sniffling or wailing loudly for sadness, or laughing aloud or smiling for happiness. To evaluate the effectiveness of our approach, we compared the facial expressions of the developed robot with and without the proposed method. The results of the survey showed that the proposed method helps robots generate more realistic facial expressions.

A study on the increase of user gesture recognition rate using data preprocessing (데이터 전처리를 통한 사용자 제스처 인식률 증가 방안)

  • Kim, Jun Heon;Song, Byung Hoo;Shin, Dong Ryoul
    • Proceedings of the Korean Society of Computer Information Conference / 2017.07a / pp.13-16 / 2017
  • Gesture recognition is an actively studied technology in the fields of HCI (Human-Computer Interaction) and HRI (Human-Robot Interaction), and accurately identifying a user's gesture by extracting features from gesture data and classifying them accordingly has become an important task. This paper describes a method for analyzing a user's hand-gesture data measured with an EMG (electromyography) sensor. To remove noise from the collected data and to maximize its distinguishing features, the data went through a preprocessing step that converted it into continuous data, and the result was classified with a machine-learning algorithm. The performance on the original raw data and on the preprocessed data was then compared using a decision-tree algorithm.
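The denoise-then-classify idea in this abstract can be sketched in a few lines: smooth noisy EMG-like samples with a moving average so that isolated spikes do not flip the classifier's decision. The signal values, window size, and threshold "classifier" below are illustrative assumptions, not the paper's actual pipeline (which extracts features and uses a decision tree).

```python
def moving_average(signal, window=3):
    """Simple moving-average filter; shrinks the window at the edges."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window // 2)
        hi = min(len(signal), i + window // 2 + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# A noisy step: the underlying gesture switches from "rest" to "active" at index 4.
raw = [0.0, 0.8, -0.1, 0.0, 1.2, 1.0, 0.2, 1.1, 1.0]
smooth = moving_average(raw)

# A trivial threshold "classifier" standing in for the decision tree.
labels_raw = [1 if x > 0.5 else 0 for x in raw]
labels_smooth = [1 if x > 0.5 else 0 for x in smooth]
```

On the raw samples the spike at index 1 and the dip at index 6 are misclassified; after smoothing, the labels match the underlying gesture, which is the kind of gain the abstract attributes to preprocessing.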


Vision Based Sensor Fusion System of Biped Walking Robot for Environment Recognition (영상 기반 센서 융합을 이용한 이족로봇에서의 환경 인식 시스템의 개발)

  • Song, Hee-Jun;Lee, Seon-Gu;Kang, Tae-Gu;Kim, Dong-Won;Seo, Sam-Jun;Park, Gwi-Tae
    • Proceedings of the KIEE Conference / 2006.04a / pp.123-125 / 2006
  • This paper discusses a vision-based sensor fusion system for biped robot walking. Most research on biped walking robots has focused on the walking algorithm itself. However, developing vision systems for biped walking robots is an important and urgent issue, since biped walking robots are ultimately developed not only for research but for use in real life. In this research, systems for environment recognition and tele-operation have been developed for task assignment and execution by the biped robot, as well as for a human-robot interaction (HRI) system. To carry out specific tasks, an object tracking system using a modified optical flow algorithm and an obstacle recognition system using enhanced template matching and a hierarchical support vector machine algorithm, fed by a wireless vision camera, are implemented together with a sensor fusion system using the other sensors installed in the biped walking robot. Systems for robot manipulation and communication with the user have also been developed.
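The obstacle recognition step rests on template matching; a minimal sum-of-squared-differences (SSD) matcher conveys the core idea. The paper's "enhanced" variant and the hierarchical SVM stage are not reproduced here, and the tiny image and template below are made-up examples.

```python
def ssd_match(image, template):
    """Return (row, col) of the best match by sum of squared differences."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(ih - th + 1):          # slide the template over every offset
        for c in range(iw - tw + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

image = [[0, 0, 0, 0, 0],
         [0, 9, 8, 0, 0],
         [0, 7, 9, 0, 0],
         [0, 0, 0, 0, 0]]
template = [[9, 8],
            [7, 9]]
print(ssd_match(image, template))  # (1, 1): the patch sits at row 1, col 1
```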


A Study on the Environment Recognition System of Biped Robot for Stable Walking (안정적 보행을 위한 이족 로봇의 환경 인식 시스템 연구)

  • Song, Hee-Jun;Lee, Seon-Gu;Kang, Tae-Gu;Kim, Dong-Won;Park, Gwi-Tae
    • Proceedings of the KIEE Conference / 2006.07d / pp.1977-1978 / 2006
  • This paper discusses a vision-based sensor fusion system for biped robot walking. Most research on biped walking robots has focused on the walking algorithm itself. However, developing vision systems for biped walking robots is an important and urgent issue, since biped walking robots are ultimately developed not only for research but for use in real life. In this research, systems for environment recognition and tele-operation have been developed for task assignment and execution by the biped robot, as well as for a human-robot interaction (HRI) system. To carry out specific tasks, an object tracking system using a modified optical flow algorithm and an obstacle recognition system using enhanced template matching and a hierarchical support vector machine algorithm, fed by a wireless vision camera, are implemented together with a sensor fusion system using the other sensors installed in the biped walking robot. Systems for robot manipulation and communication with the user have also been developed.


Speech Emotion Recognition Using Confidence Level for Emotional Interaction Robot (감정 상호작용 로봇을 위한 신뢰도 평가를 이용한 화자독립 감정인식)

  • Kim, Eun-Ho
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.6 / pp.755-759 / 2009
  • The ability to recognize human emotion is one of the hallmarks of human-robot interaction. In particular, speaker-independent emotion recognition is a challenging issue for the commercial use of speech emotion recognition systems. In general, speaker-independent systems show a lower accuracy rate than speaker-dependent systems, as emotional feature values depend on the speaker and his/her gender. Hence, this paper describes the realization of speaker-independent emotion recognition by rejecting low-confidence results using a confidence measure, making the emotion recognition system consistent and accurate. Comparison of the proposed methods with the conventional method clearly confirmed their improvement and effectiveness.
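Rejection by confidence measure can be sketched as: take the classifier's per-emotion scores, and return the top label only when its share of the total clears a threshold; otherwise refuse to answer. The score values and the 0.6 threshold below are hypothetical, and real systems derive the confidence from the model itself rather than from raw score ratios.

```python
def classify_with_rejection(scores, threshold=0.6):
    """Return the top emotion label, or None (reject) when confidence is too low."""
    total = sum(scores.values())
    label, best = max(scores.items(), key=lambda kv: kv[1])
    return label if best / total >= threshold else None

clear = {"happy": 8.0, "sad": 1.0, "angry": 1.0}      # confidence 0.8 -> accept
ambiguous = {"happy": 4.0, "sad": 3.5, "angry": 2.5}  # confidence 0.4 -> reject
print(classify_with_rejection(clear))      # happy
print(classify_with_rejection(ambiguous))  # None
```

Rejected utterances can then be dropped or handed to a fallback strategy, which is how rejection keeps the accepted answers consistent across unseen speakers.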

Comparison of EEG Topography Labeling and Annotation Labeling Techniques for EEG-based Emotion Recognition (EEG 기반 감정인식을 위한 주석 레이블링과 EEG Topography 레이블링 기법의 비교 고찰)

  • Ryu, Je-Woo;Hwang, Woo-Hyun;Kim, Deok-Hwan
    • The Journal of Korean Institute of Next Generation Computing / v.15 no.3 / pp.16-24 / 2019
  • Recently, research on emotion recognition based on EEG has attracted great interest from the human-robot interaction field. In this paper, we propose a labeling method that uses image-based EEG topography instead of evaluating emotions through the self-assessment and annotation labeling methods used in MAHNOB-HCI. The proposed method evaluates emotion with a machine-learning model trained on EEG signals transformed into topographical images. In experiments using the MAHNOB-HCI database, we compared the performance of SVM and kNN models trained with EEG topography labeling. The accuracy of the proposed method was 54.2% with SVM and 57.7% with kNN.
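The kNN side of the comparison can be sketched with a hand-rolled k-nearest-neighbour vote over feature vectors. The three-element "topography" vectors and labels below are hypothetical stand-ins for the real topographical images, and the SVM baseline is omitted since it needs a full optimizer.

```python
def knn_predict(train, query, k=3):
    """k-nearest-neighbour vote using squared Euclidean distance."""
    ranked = sorted(train, key=lambda xy: sum((a - b) ** 2 for a, b in zip(xy[0], query)))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

# Toy (feature_vector, emotion_label) pairs standing in for EEG topographies.
train = [([0.9, 0.1, 0.2], "positive"), ([0.8, 0.2, 0.1], "positive"),
         ([0.7, 0.3, 0.2], "positive"),
         ([0.1, 0.8, 0.9], "negative"), ([0.2, 0.9, 0.8], "negative"),
         ([0.2, 0.7, 0.9], "negative")]
test = [([0.85, 0.15, 0.15], "positive"), ([0.15, 0.85, 0.85], "negative")]

correct = sum(knn_predict(train, x) == y for x, y in test)
accuracy = correct / len(test)
print(accuracy)  # 1.0 on this separable toy set
```

The paper's 54.2% vs 57.7% figures come from exactly this kind of held-out accuracy computation, only on the MAHNOB-HCI data with image-derived features.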

Hybrid Silhouette Extraction Using Color and Gradient Informations (색상 및 기울기 정보를 이용한 인간 실루엣 추출)

  • Joo, Young-Hoon;So, Jea-Yun
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.7 / pp.913-918 / 2007
  • Human motion analysis is an important research subject in human-robot interaction (HRI). However, before analyzing human motion, the silhouette of the human body must be extracted from sequential images obtained by a CCD camera. An intelligent robot system requires a more robust silhouette extraction method because of its internal vibration and low resolution. In this paper, we discuss a hybrid silhouette extraction method for detecting and tracking human motion. The proposed method combines and optimizes temporal and spatial gradient information. We also propose some compensation methods so that silhouette information is not lost due to poor-quality images. Finally, we show the effectiveness and feasibility of the proposed method through experiments.
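Combining temporal and spatial gradients can be sketched as follows: a pixel belongs to the silhouette when it both changed between frames (temporal gradient) and sits on an intensity edge (spatial gradient). The forward-difference gradients, the AND-style combination, and the thresholds below are simplifying assumptions; the paper's actual optimization and compensation steps are more involved.

```python
def silhouette_mask(prev, curr, t_thresh=0.3, s_thresh=0.3):
    """Mark pixels with both strong temporal change and strong spatial gradient."""
    h, w = len(curr), len(curr[0])
    mask = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            temporal = abs(curr[r][c] - prev[r][c])              # frame difference
            right = curr[r][c + 1] if c + 1 < w else curr[r][c]  # clamp at borders
            down = curr[r + 1][c] if r + 1 < h else curr[r][c]
            spatial = abs(right - curr[r][c]) + abs(down - curr[r][c])
            if temporal > t_thresh and spatial > s_thresh:
                mask[r][c] = 1
    return mask

prev = [[0] * 4 for _ in range(4)]      # empty background frame
curr = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]                   # a moving blob appears
mask = silhouette_mask(prev, curr)
```

Note that the blob's interior fails the spatial test while its edges pass both, so the mask traces an outline rather than a filled region, which is the silhouette behaviour the hybrid combination is after.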

A Study on Face Recognition Performance Comparison of Real Images with Images from LED Monitor (LED 모니터 출력 영상과 실물 영상의 얼굴인식 성능 비교)

  • Cho, Mi-Young;Jeong, Young-Sook;Chun, Byung-Tae
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.5 / pp.144-149 / 2013
  • With the increasing number of service robots, human-robot interaction for natural communication between users and robots is becoming more and more important. In particular, face recognition is a key issue in HRI. Even though robots mainly use face detection and recognition to provide various services, it is still difficult to guarantee performance due to insufficient test methods in real service environments. At present, the face recognition performance of most robots is evaluated at the engine level, without consideration of the robot itself. In this paper, we show the validity of a test method using an LED monitor through a performance comparison of real images with images from an LED monitor.

Mixed-Initiative Interaction between Human and Service Robot using Hierarchical Bayesian Networks (계층적 베이지안 네트워크를 사용한 서비스 로봇과 인간의 상호 주도방식 의사소통)

  • Song Youn-Suk;Hong Jin-Hyuk;Cho Sung-Bae
    • Journal of KIISE: Software and Applications / v.33 no.3 / pp.344-355 / 2006
  • In daily activities, the interaction between humans and robots is very important for supporting the user's tasks effectively. Dialogue can increase the flexibility and ease of interaction between them. Traditional studies of robots have dealt only with simple queries such as commands, but real conversation is more complex and varied, using many forms of expression, so people often omit some words, relying on background knowledge or the context of the discourse. Since the same query can therefore have various meanings, this ambiguity needs to be managed. In this paper, we propose a method that uses hierarchical Bayesian networks to implement mixed-initiative interaction for managing the vagueness of conversation with a service robot. We have verified the usefulness of the proposed method through a simulation of the service robot and a usability test.
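The core inference behind resolving an ambiguous utterance can be illustrated with a single-evidence Bayesian update: combine a prior over user intents with the likelihood of the observed words under each intent, and normalize. The intent names and all probability values below are made-up placeholders; the paper's hierarchical networks chain many such nodes together.

```python
# Which intent best explains an ambiguous utterance such as "bring it"?
priors = {"bring_object": 0.5, "report_status": 0.5}      # P(intent), assumed uniform
likelihood = {"bring_object": 0.8, "report_status": 0.2}  # P(utterance | intent), made up

unnorm = {i: priors[i] * likelihood[i] for i in priors}   # Bayes' rule, numerator
z = sum(unnorm.values())                                  # normalizing constant
posterior = {i: p / z for i, p in unnorm.items()}         # P(intent | utterance)

best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 2))  # bring_object 0.8
```

In a mixed-initiative setting, a posterior that stays too flat would instead trigger a clarifying question back to the user rather than committing to an intent.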

Energy-Efficient DNN Processor on Embedded Systems for Spontaneous Human-Robot Interaction

  • Kim, Changhyeon;Yoo, Hoi-Jun
    • Journal of Semiconductor Engineering / v.2 no.2 / pp.130-135 / 2021
  • Recently, deep neural networks (DNNs) have been actively used for action control so that an autonomous system, such as a robot, can perform human-like behaviors and operations. Unlike recognition tasks, real-time operation is essential in action control, and it is too slow to rely on remote learning on a server communicating through a network. New learning techniques, such as reinforcement learning (RL), are needed to determine and select the correct robot behavior locally. In this paper, we propose an energy-efficient DNN processor with a LUT-based processing engine and a near-zero skipper. A CNN-based facial emotion recognition model and an RNN-based emotional dialogue generation model are integrated for a natural HRI system and tested with the proposed processor. It supports variable weight bit precision from 1b to 16b, with 57.6% and 28.5% lower energy consumption than conventional MAC arithmetic units at 1b and 16b weight precision, respectively. The near-zero skipper also eliminates 36% of MAC operations and achieves 28% lower energy consumption for facial emotion recognition tasks. Implemented in a 65nm CMOS process, the proposed processor occupies a 1784×1784 um² area and dissipates 0.28 mW and 34.4 mW at 1 fps and 30 fps facial emotion recognition, respectively.
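The near-zero skipper's effect can be modeled in software: during a multiply-accumulate pass, skip every multiply whose activation magnitude falls below a small epsilon, since it contributes almost nothing to the sum. The epsilon and the example values below are illustrative; in the actual chip this is a hardware gating decision, not a branch.

```python
def mac_with_near_zero_skip(weights, activations, eps=0.01):
    """Multiply-accumulate that gates off multiplies whose activation is near zero."""
    acc, skipped = 0.0, 0
    for w, a in zip(weights, activations):
        if abs(a) < eps:
            skipped += 1     # the hardware would clock-gate this multiplier
            continue
        acc += w * a
    return acc, skipped

weights = [0.5, -1.0, 2.0, 0.25]
activations = [0.0, 0.004, 1.0, -2.0]   # ReLU-style layers produce many near-zeros
acc, skipped = mac_with_near_zero_skip(weights, activations)
print(acc, skipped)  # 1.5 2
```

Half the multiplies are gated off in this toy example; the paper reports a 36% reduction on real facial-emotion workloads, where activation sparsity is what makes the technique pay off.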