• Title/Summary/Keyword: Robot Interaction

481 results

Qualitative Exploration on Children's Interactions in Telepresence Robot Assisted Language Learning (원격로봇 보조 언어교육의 아동 상호작용 질적 탐색)

  • Shin, Kyoung Wan Cathy;Han, Jeong-Hye
    • Journal of the Korea Convergence Society / v.8 no.3 / pp.177-184 / 2017
  • The purpose of this study was to explore child-robot interaction in distant language learning environments using three different video-conferencing technologies: two traditional screen-based video-conferencing technologies and a telepresence robot. One American and six Korean elementary school students participated in our case study. We relied on narratives from one-on-one interviews and observation of nonverbal cues in robot-assisted language learning. Our findings suggest that participants responded more positively to interactions via the telepresence robot than to the two screen-based video conferences, with many citing a stronger sense of immediacy during robot-mediated communication.

Design and implement of the Educational Humanoid Robot D2 for Emotional Interaction System (감성 상호작용을 갖는 교육용 휴머노이드 로봇 D2 개발)

  • Kim, Do-Woo;Chung, Ki-Chull;Park, Won-Sung
    • Proceedings of the KIEE Conference / 2007.07a / pp.1777-1778 / 2007
  • In this paper, we design and implement a humanoid robot for educational purposes that can collaborate and communicate with humans. We present an affective human-robot communication system for the humanoid robot D2, which we designed to communicate with a human through dialogue. D2 communicates with humans by understanding and expressing emotion using facial expressions, voice, gestures, and posture. Interaction between a human and the robot is made possible through our affective communication framework, which enables the robot to catch the emotional status of the user and to respond appropriately. As a result, the robot can engage in a natural dialogue with a human. To support interaction with humans through voice, gestures, and posture, the developed educational humanoid robot consists of an upper body, two arms, a wheeled mobile platform, and control hardware including vision and speech capability and various control boards, such as motion control boards and a signal processing board handling several types of sensors. Using the educational humanoid robot D2, we have presented successful demonstrations consisting of manipulation tasks with the two arms, tracking objects using the vision system, and communication with humans through the emotional interface, synthesized speech, and recognition of speech commands.


Effects of LED on Emotion-Like Feedback of a Single-Eyed Spherical Robot

  • Onchi, Eiji;Cornet, Natanya;Lee, SeungHee
    • Science of Emotion and Sensibility / v.24 no.3 / pp.115-124 / 2021
  • Non-verbal communication is important in human interaction. It provides a layer of information that complements the message being transmitted. This type of information is not limited to human speakers. In human-robot communication, increasing the animacy of the robotic agent, by using non-verbal cues, can aid the expression of abstract concepts such as emotions. Considering the physical limitations of artificial agents, robots can use light and movement to express equivalent emotional feedback. This study analyzes the effects of the LED and motion animation of a spherical robot on the emotion being expressed by the robot. A within-subjects experiment was conducted at the University of Tsukuba, where participants were asked to rate 28 video samples of a robot interacting with a person. The robot displayed different motions with and without light animations. The results indicated that adding LED animations changes the emotional impression of the robot along the valence, arousal, and dominance dimensions. Furthermore, people associated various situations with the robot's behavior. These stimuli can be used to modulate the intensity of the emotion being expressed and enhance the interaction experience. This paper facilitates the possibility of designing more affective robots in the future using simple feedback.

Performance Evaluation of Human Robot Interaction Components in Real Environments (실 환경에서의 인간로봇상호작용 컴포넌트의 성능평가)

  • Kim, Do-Hyung;Kim, Hye-Jin;Bae, Kyung-Sook;Yun, Woo-Han;Ban, Kyu-Dae;Park, Beom-Chul;Yoon, Ho-Sub
    • The Journal of Korea Robotics Society / v.3 no.3 / pp.165-175 / 2008
  • For advanced intelligent services, the need for HRI technology has recently been increasing, and the technology has also improved. However, HRI components have been evaluated under stable and controlled laboratory environments, and there are no evaluation results of their performance in real environments. Therefore, robot service providers and users have not received sufficient information on the level of current HRI technology. In this paper, we provide evaluation results of the performance of HRI components on robot platforms providing actual services at pilot service sites. For the evaluation, we select the face detection, speaker gender classification, and sound localization components as representative HRI components closest to commercialization. The goal of this paper is to provide valuable information and reference performance for applying HRI components to real robot environments.


Spatiotemporal Grounding for a Language Based Cognitive System (언어기반의 인지시스템을 위한 시공간적 기초화)

  • Ahn, Hyun-Sik
    • Journal of Institute of Control, Robotics and Systems / v.15 no.1 / pp.111-119 / 2009
  • For daily-life interaction with humans, robots need the capability of encoding and storing cognitive information and retrieving it contextually. In this paper, spatiotemporal grounding of cognitive information for a language-based cognitive system is presented. The cognitive information of an event occurring at a robot is described with a sentence, stored in a memory, and retrieved contextually. Each sentence is parsed, discriminated by its functional type, and analyzed for argument structure to connect it to cognitive information. With the proposed grounding, the cognitive information is encoded into sentence form and stored in a sentence memory with an object descriptor. Sentences are retrieved to answer questions from humans by searching temporal information in the sentence memory and performing spatial reasoning in schematic imagery. An experiment shows the feasibility and efficiency of the spatiotemporal grounding for an advanced service robot.

Robot-Human Task Sharing System for Assembly Process (조립 공정을 위한 로봇-사람 간 작업 공유 시스템)

  • Minwoo Na;Tae Hwa Hong;Junwan Yun;Jae-Bok Song
    • The Journal of Korea Robotics Society / v.18 no.4 / pp.419-426 / 2023
  • Assembly tasks are difficult to fully automate due to uncertain errors occurring in unstructured environments. When assembling parts such as electrical connectors, advances in grasping and assembly technology have made it possible for the robot to assemble the connectors without the aid of humans. However, some parts with tight assembly tolerances should be assembled by humans. Therefore, task sharing through human-robot interaction is emerging as an alternative. The goal of this concept is to achieve shared autonomy, which reduces the effort of humans when carrying out repetitive tasks. In this study, a task-sharing robotic system for the assembly process has been proposed to achieve shared autonomy. This system consists of two parts: one for robotic grasping and assembly, and the other for monitoring the process for robot-human task sharing. Experimental results show that robots and humans share tasks efficiently while performing assembly tasks successfully.

Is Robot Alive? : Young Children's Perception of a Teacher Assistant Robot in a Classroom (로봇은 살아 있을까? : 우리 반 교사보조로봇에 대한 유아의 인식)

  • Hyun, Eun-Ja;Son, Soo-Ryun
    • Korean Journal of Child Studies / v.32 no.4 / pp.1-14 / 2011
  • The purpose of this study was to investigate young children's perceptions of a teacher assistant robot, IrobiQ, in a kindergarten classroom. The subjects of this study were 23 six-year-olds attending G kindergarten, located in E city, Korea, where the teacher assistant robot had been in operation since October 2008. Each child responded to questions assessing the child's perceptions of IrobiQ's identity in four domains: its biological, intellectual, emotional, and social identity. Some questions asked the child to affirm or deny characteristics pertaining to the robot, and the other questions asked for the reasons behind the answers given. The results indicated that while the majority of children considered IrobiQ not a biological entity but a machine, they thought it could have emotions and be their playmate. The implications of these results are twofold: first, they force us to reconsider the traditional ontological categories regarding intelligent service robots in order to understand human-robot interaction, and second, they open up an ecological perspective on the design of teacher assistant robots for use with young children in early childhood education settings.

Sound-based Emotion Estimation and Growing HRI System for an Edutainment Robot (에듀테인먼트 로봇을 위한 소리기반 사용자 감성추정과 성장형 감성 HRI시스템)

  • Kim, Jong-Cheol;Park, Kui-Hong
    • The Journal of Korea Robotics Society / v.5 no.1 / pp.7-13 / 2010
  • This paper presents a sound-based emotion estimation method and a growing HRI (human-robot interaction) system for the Mon-E robot. The emotion estimation method uses musical elements based on the laws of harmony and counterpoint. The emotion is estimated from sound using information about musical elements including chord, tempo, volume, harmonics, and compass. In this paper, the estimated emotions cover the standard 12 emotions, comprising Ekman's 6 emotions (anger, disgust, fear, happiness, sadness, surprise) and their 6 opposites (calmness, love, confidence, unhappiness, gladness, comfortableness). The growing HRI system analyzes sensing information, the estimated emotion, and the service log in an edutainment robot, and commands the behavior of the robot accordingly. The growing HRI system consists of an emotion client and an emotion server. The emotion client estimates the emotion from sound; it not only transmits the estimated emotion and sensing information to the emotion server but also delivers responses coming from the emotion server to the main program of the robot. The emotion server not only updates the HRI rule table using information transmitted from the emotion client but also transmits the HRI response to the emotion client. The proposed system was applied to a Mon-E robot and can supply friendly HRI services to users.