• Title/Abstract/Keyword: human-robot-interaction

343 search results

지능형 로봇을 위한 인간-컴퓨터 상호작용(HCI) 연구동향 (Human-Computer Interaction Survey for Intelligent Robot)

  • 홍석주;이칠우
    • Proceedings of the Korea Contents Association Conference / Korea Contents Association 2006 Fall Comprehensive Conference / pp.507-511 / 2006
  • An intelligent robot is an autonomous system that, like a human, perceives the world through sensory channels such as vision and hearing and judges and acts on its own. Humans communicate not only through language but also through non-verbal means such as gestures; if robots could understand these non-verbal channels, they would become far more approachable to humans. Driven by this need, HCI (Human-Computer Interaction) technologies such as face recognition and gesture recognition are being actively studied, but many problems remain to be solved. This paper surveys gesture recognition, one of the most natural means of communicating with humans and a core enabling technology for intelligent robots, introducing the key elements of the technology and application cases centered on recent research results.

얼굴로봇 Buddy의 기능 및 구동 메커니즘 (Functions and Driving Mechanisms for Face Robot Buddy)

  • 오경균;장명수;김승종;박신석
    • The Journal of Korea Robotics Society / Vol. 3, No. 4 / pp.270-277 / 2008
  • The development of a face robot fundamentally targets natural human-robot interaction (HRI), especially emotional interaction, and so does the face robot introduced in this paper, named Buddy. Since Buddy was developed for a mobile service robot, it does not have a living-being-like face such as a human's or an animal's, but a typically robot-like face with hard skin, which may be suitable for mass production. Moreover, its structure and mechanism were kept simple and its production cost low. This paper introduces the mechanisms and functions of the mobile face robot Buddy, which can take on natural and precise facial expressions and make dynamic gestures, all driven by a single laptop PC. Buddy can also perform lip-syncing, eye contact, and face tracking for lifelike interaction. By adopting a customized emotional reaction decision model, Buddy can form its own personality, emotion, and motives from various sensor inputs. Based on this model, Buddy can interact properly with users and perform real-time learning using personality factors. The interaction performance of Buddy is successfully demonstrated by experiments and simulations.

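The abstract credits Buddy's behavior to a customized emotional reaction decision model with personality factors and real-time learning, but publishes no equations. Below is a minimal sketch of one way such a model could work; the sensor channels, weights, and update rule are all assumptions for illustration, not the paper's method.

```python
# Sketch of an emotional reaction decision model of the kind the
# abstract describes: sensor input is weighted by personality factors
# to produce an emotional reaction, and the factors are adjusted from
# user feedback as a stand-in for "real-time learning". All channel
# names, weights, and the update rule are assumptions.

# Personality factors: how strongly each sensor channel moves emotion.
personality = {"touch": 0.6, "voice_tone": 0.3, "face_distance": 0.1}

def react(sensor_input: dict[str, float]) -> float:
    """Combine sensor readings (each in [-1, 1]) into a reaction score."""
    score = sum(personality[k] * v for k, v in sensor_input.items())
    return max(-1.0, min(1.0, score))

def learn(sensor_input: dict[str, float], user_feedback: float, lr: float = 0.05) -> None:
    """Nudge personality factors toward channels the user responds to."""
    for k, v in sensor_input.items():
        personality[k] += lr * user_feedback * v

if __name__ == "__main__":
    reading = {"touch": 0.8, "voice_tone": 0.5, "face_distance": -0.2}
    print("reaction:", react(reading))
    learn(reading, user_feedback=1.0)  # user responded positively
    print("updated factors:", personality)
```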

조립 공정을 위한 로봇-사람 간 작업 공유 시스템 (Robot-Human Task Sharing System for Assembly Process)

  • 나민우;홍태화;윤준완;송재복
    • The Journal of Korea Robotics Society / Vol. 18, No. 4 / pp.419-426 / 2023
  • Assembly tasks are difficult to fully automate due to the uncertain errors that occur in unstructured environments. For parts such as electrical connectors, advances in grasping and assembly technology have made it possible for a robot to assemble the connectors without human aid. However, parts with tight assembly tolerances still need to be assembled by humans, so task sharing through human-robot interaction is emerging as an alternative. The goal of this concept is shared autonomy, which reduces the human effort spent on repetitive tasks. In this study, a task-sharing robotic system for the assembly process is proposed to achieve shared autonomy. The system consists of two parts: one for robotic grasping and assembly, and the other for monitoring the process for robot-human task sharing. Experimental results show that the robot and the human share tasks efficiently while performing assembly tasks successfully. A minimal sketch of the underlying allocation idea follows the abstract.
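
The paper does not publish its allocation logic, but the core idea of routing tight-tolerance parts to the human can be sketched as follows; the threshold value and part attributes are hypothetical.

```python
# Minimal sketch of tolerance-based robot-human task allocation for
# shared autonomy. The threshold and part attributes are hypothetical;
# the abstract only states that tight-tolerance parts go to humans.
from dataclasses import dataclass

# Tolerances tighter than this (in mm) are assumed to exceed the
# robot's insertion accuracy and are routed to the human worker.
ROBOT_TOLERANCE_LIMIT_MM = 0.5

@dataclass
class AssemblyTask:
    part_name: str
    tolerance_mm: float

def assign(task: AssemblyTask) -> str:
    """Route a task to the robot when its tolerance is within reach."""
    if task.tolerance_mm >= ROBOT_TOLERANCE_LIMIT_MM:
        return "robot"
    return "human"

if __name__ == "__main__":
    tasks = [
        AssemblyTask("electrical connector", 1.0),
        AssemblyTask("precision bearing", 0.05),
    ]
    for t in tasks:
        print(f"{t.part_name}: {assign(t)}")
```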

A Human Robot Interactive System 'RoJi'

  • Yoon, Joongsun
    • Journal of Mechanical Science and Technology / Vol. 18, No. 11 / pp.1900-1908 / 2004
  • A human-friendly interactive system, based on the harmonious symbiotic coexistence of humans and robots, is explored. Based on the interactive technology paradigm, a robotic cane is proposed for blind or visually impaired travelers, helping them navigate safely and quickly through the obstacles and other hazards faced by blind pedestrians. Robotic aids such as robotic canes require cooperation between humans and robots. Various methods for implementing appropriate cooperative recognition, planning, and acting have been investigated. The issues discussed include the interaction between humans and robots, design issues of an interactive robotic cane, and behavior arbitration methodologies for navigation planning; one common arbitration scheme is sketched below.
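
The abstract names behavior arbitration only as a topic. As a rough illustration, one widely used scheme is fixed-priority (subsumption-style) arbitration; the behavior names and sensor fields below are assumptions, not the paper's design.

```python
# Rough illustration of fixed-priority (subsumption-style) behavior
# arbitration for a robotic cane. Behavior names and sensor fields are
# assumptions; the paper does not specify its arbitration method.
from typing import Callable, Optional

# Each behavior inspects the sensor state and either claims control by
# returning a motion command, or defers by returning None.
Behavior = Callable[[dict], Optional[str]]

def avoid_obstacle(s: dict) -> Optional[str]:
    return "steer_away" if s["obstacle_distance_m"] < 0.5 else None

def follow_path(s: dict) -> Optional[str]:
    return "continue_heading" if s["path_clear"] else None

def stop(_: dict) -> Optional[str]:
    return "halt"  # lowest-priority fallback always claims control

def arbitrate(behaviors: list[Behavior], sensors: dict) -> str:
    """Return the command of the highest-priority active behavior."""
    for behavior in behaviors:  # ordered from highest to lowest priority
        command = behavior(sensors)
        if command is not None:
            return command
    raise RuntimeError("no behavior claimed control")

if __name__ == "__main__":
    stack = [avoid_obstacle, follow_path, stop]
    print(arbitrate(stack, {"obstacle_distance_m": 0.3, "path_clear": True}))
```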

서비스 로봇을 위한 감성인터페이스 기술 (Emotional Interface Technologies for Service Robot)

  • 양현승;서용호;정일웅;한태우;노동현
    • The Journal of Korea Robotics Society / Vol. 1, No. 1 / pp.58-65 / 2006
  • An emotional interface is essential for a robot to provide proper service to its users. In this research, we developed emotional components for a service robot: a neural-network-based facial expression recognizer, emotion expression technologies based on 3D graphical facial expressions and joint movements, and behavior selection technology for emotion expression that takes the user's reaction into account. We used our humanoid robots AMI and AMIET as test-beds for the emotional interface, and investigated the emotional interaction between a service robot and a user by integrating the developed technologies. Emotional interface technology can enhance the friendliness of interaction with a service robot, increase the diversity of its services and its added value for humans, and thereby promote market growth and contribute to the popularization of robots.

HRI를 위한 사람의 내적 요소 기반의 인공 정서 표현 시스템 (Human emotional elements and external stimulus information-based Artificial Emotion Expression System for HRI)

  • 오승원;한민수
    • Proceedings of the HCI Society of Korea Conference / HCI Society of Korea 2008 Conference, Part 1 / pp.7-12 / 2008
  • Emotion plays an important role in interaction between humans and robots, so robots need an emotion mechanism similar to that of humans. In this paper, we propose a new artificial emotion expression system based on human internal elements, building on psychological research into human emotion. The proposed system uses six information elements: external stimulus, emotion, mood, personality, tendency, and machine rhythm; according to its characteristics, each element ultimately influences how the emotion expression pattern changes. As a result, identical external stimuli produce different changes in emotion depending on the internal state. The proposed system should help robots sustain natural interaction with humans and form close relationships with them.

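The six elements are named in the abstract, but the update rule is not. A minimal sketch of how internal state could modulate the response to an identical stimulus, with all coefficients assumed for illustration:

```python
# Sketch of the six-element structure described in the abstract
# (external stimulus, emotion, mood, personality, tendency, machine
# rhythm). The combination rule and coefficients are assumptions;
# the paper does not publish its equations.
from dataclasses import dataclass

@dataclass
class InternalState:
    mood: float            # slow-moving affective baseline, in [-1, 1]
    personality: float     # fixed trait scaling stimulus sensitivity
    tendency: float        # bias toward positive/negative appraisal
    machine_rhythm: float  # periodic internal modulation, in [-1, 1]

def emotion_response(stimulus: float, s: InternalState) -> float:
    """Map the same external stimulus to different emotion intensities
    depending on internal state, as the abstract describes."""
    appraisal = stimulus * s.personality + s.tendency
    emotion = appraisal + 0.5 * s.mood + 0.1 * s.machine_rhythm
    return max(-1.0, min(1.0, emotion))

if __name__ == "__main__":
    stimulus = 0.6  # identical external stimulus...
    cheerful = InternalState(mood=0.4, personality=1.2, tendency=0.1, machine_rhythm=0.0)
    gloomy = InternalState(mood=-0.5, personality=0.8, tendency=-0.2, machine_rhythm=0.0)
    # ...yields different emotion changes for different internal states
    print(emotion_response(stimulus, cheerful), emotion_response(stimulus, gloomy))
```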

Effects of LED on Emotion-Like Feedback of a Single-Eyed Spherical Robot

  • Onchi, Eiji;Cornet, Natanya;Lee, SeungHee
    • Science of Emotion and Sensibility / Vol. 24, No. 3 / pp.115-124 / 2021
  • Non-verbal communication is important in human interaction. It provides a layer of information that complements the message being transmitted, and this type of information is not limited to human speakers. In human-robot communication, increasing the animacy of the robotic agent by using non-verbal cues can aid the expression of abstract concepts such as emotions. Given the physical limitations of artificial agents, robots can use light and movement to express equivalent emotional feedback. This study analyzes the effects of the LED and motion animations of a spherical robot on the emotion the robot expresses. A within-subjects experiment was conducted at the University of Tsukuba in which participants rated 28 video samples of a robot interacting with a person while displaying different motions, with and without light animations. The results indicated that adding LED animations changes the emotional impression of the robot along the valence, arousal, and dominance dimensions. Furthermore, people associated various situations with the robot's behavior. These stimuli can be used to modulate the intensity of the expressed emotion and enhance the interaction experience. This paper opens up the possibility of designing more affective robots in the future using simple feedback.
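
The study evaluates emotion along valence, arousal, and dominance (VAD). As a hedged illustration of how a VAD estimate might drive simple LED feedback, here is a hypothetical parameter mapping; it is not the mapping used in the study.

```python
# Sketch of mapping a valence-arousal-dominance (VAD) emotion estimate
# to LED animation parameters for a single-eyed spherical robot. The
# specific mapping below is a hypothetical illustration.

def led_animation(valence: float, arousal: float, dominance: float) -> dict:
    """Translate a VAD triple (each in [-1, 1]) into LED parameters."""
    # Hue: warm (amber) for positive valence, cold (blue) for negative.
    hue_deg = 45.0 if valence >= 0 else 220.0
    # Blink rate rises with arousal: calm = slow pulse, excited = fast.
    blink_hz = 0.5 + 2.5 * (arousal + 1.0) / 2.0
    # Brightness grows with dominance: a submissive robot dims its eye.
    brightness = 0.2 + 0.8 * (dominance + 1.0) / 2.0
    return {"hue_deg": hue_deg, "blink_hz": blink_hz, "brightness": brightness}

if __name__ == "__main__":
    print(led_animation(valence=0.7, arousal=0.8, dominance=0.2))     # happy/excited
    print(led_animation(valence=-0.6, arousal=-0.4, dominance=-0.5))  # sad/subdued
```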

로봇 캐릭터와의 상호작용에서 사용자의 시선 배분 분석 (Analysis of User's Eye Gaze Distribution while Interacting with a Robotic Character)

  • 장세윤;조혜경
    • The Journal of Korea Robotics Society / Vol. 14, No. 1 / pp.74-79 / 2019
  • In this paper, we develop a virtual experimental environment to investigate users' eye gaze in human-robot social interaction and verify its potential for further studies. The system consists of a 3D robot character capable of hosting simple interactions with a user, and a gaze processing module that records which body part of the robot character the user is looking at, such as the eyes, mouth, or arms, regardless of whether the robot is stationary or moving. To verify that the results acquired in this virtual environment align with those of physically existing robots, we performed robot-guided quiz sessions with 120 participants and compared the participants' gaze patterns with those reported in previous works. The results were as follows. First, when interacting with the robot character, the users' gaze patterns showed statistics similar to those of conversations between humans. Second, an animated mouth on the robot character received longer attention than a stationary one. Third, nonverbal interactions such as leakage cues were also effective in the interaction with the robot character, and the correct-answer ratios of the cued groups were higher. Finally, gender differences in the users' gaze were observed, especially in the frequency of mutual gaze.
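
The abstract describes a gaze processing module that records which body part is fixated per frame. A minimal sketch of the kind of aggregation such a module might perform, with region names and the per-frame label format assumed for illustration:

```python
# Sketch of gaze aggregation: per-frame labels of which body part of
# the robot character the user fixates are accumulated into dwell-time
# proportions. Region names and the frame format are assumptions.
from collections import Counter

def gaze_distribution(frames: list[str]) -> dict[str, float]:
    """Return the fraction of frames spent on each region of interest."""
    counts = Counter(frames)
    total = sum(counts.values())
    return {region: n / total for region, n in counts.items()}

if __name__ == "__main__":
    # One label per video frame: which part of the robot was fixated.
    frames = ["eyes", "eyes", "mouth", "eyes", "arms", "mouth", "eyes"]
    for region, share in gaze_distribution(frames).items():
        print(f"{region}: {share:.0%}")
```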

로봇은 살아 있을까? : 우리 반 교사보조로봇에 대한 유아의 인식 (Is Robot Alive? : Young Children's Perception of a Teacher Assistant Robot in a Classroom)

  • 현은자;손수련
    • Korean Journal of Child Studies / Vol. 32, No. 4 / pp.1-14 / 2011
  • The purpose of this study was to investigate young children's perceptions of a teacher assistant robot, IrobiQ, in a kindergarten classroom. The subjects were 23 six-year-olds attending G kindergarten in E city, Korea, where the teacher assistant robot had been in operation since October 2008. Each child responded to questions assessing their perception of IrobiQ's identity in four domains: its biological, intellectual, emotional, and social identity. Some questions asked the child to affirm or deny certain characteristics of the robot, and the others asked for the reasons behind the answers given. The results indicated that while the majority of children considered IrobiQ not a biological entity but a machine, they thought it could have emotions and be their playmate. The implications of these results are twofold: first, they force us to reconsider the traditional ontological categories applied to intelligent service robots in order to understand human-robot interaction; second, they open up an ecological perspective on the design of teacher assistant robots for young children in early childhood education settings.

실외에서 로봇의 인간 탐지 및 행위 학습을 위한 멀티모달센서 시스템 및 데이터베이스 구축 (Multi-modal Sensor System and Database for Human Detection and Activity Learning of Robot in Outdoor)

  • 엄태영;박정우;이종득;배기덕;최영호
    • Journal of Korea Multimedia Society / Vol. 21, No. 12 / pp.1459-1466 / 2018
  • Robots that detect humans and recognize their actions are important for human interaction, and much research has been conducted on them. Recently, deep learning technology has advanced, and learning-based robot technology has become a major research area. Such studies require a database for training and evaluating intelligent human perception. In this paper, we propose specifications for a multi-modal sensor-based image database oriented toward security tasks, derived by analyzing image databases for detecting people and recognizing their behavior in outdoor environments while the robot is in motion.
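
The abstract proposes database specifications without listing the modalities. As a hedged sketch of what one record in such a multi-modal database might contain, the fields below (RGB, thermal, lidar, GPS, activity label) are assumptions for illustration:

```python
# Sketch of a record schema for a multi-modal human-detection database.
# The modalities and field names are assumptions, not the paper's spec.
from dataclasses import dataclass

@dataclass
class MultiModalRecord:
    timestamp_s: float   # capture time, seconds since sequence start
    rgb_path: str        # path to the RGB frame
    thermal_path: str    # path to the aligned thermal frame
    lidar_path: str      # path to the point-cloud scan
    gps_lat: float       # robot position at capture time
    gps_lon: float
    person_boxes: list[tuple[int, int, int, int]]  # (x, y, w, h) per person
    activity_label: str  # e.g. "walking", "loitering" (assumed label set)

if __name__ == "__main__":
    record = MultiModalRecord(
        timestamp_s=12.4,
        rgb_path="seq01/rgb/000310.png",
        thermal_path="seq01/thermal/000310.png",
        lidar_path="seq01/lidar/000310.pcd",
        gps_lat=35.1796, gps_lon=129.0756,
        person_boxes=[(412, 180, 64, 150)],
        activity_label="walking",
    )
    print(record)
```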