• Title/Abstract/Keyword: Human-Robot Interactions

Search results: 38 items

A Portable Mediate Interface 'Handybot' for the Rich Human-Robot Interaction

  • 황정훈;권동수
    • Journal of Institute of Control, Robotics and Systems / Vol. 13, No. 8 / pp.735-742 / 2007
  • The importance of a robot's interaction capability increases as robot applications extend into humans' daily lives. In this paper, a portable mediating interface, the Handybot, is developed with various interaction channels for use with an intelligent home service robot. The Handybot has a task-oriented channel based on an icon language as well as a verbal interface. It also has an emotional interaction channel that recognizes a user's emotional state from facial expression and speech, transmits that state to the robot, and expresses the robot's emotional state to the user. It is expected that the Handybot will reduce the spatial problems that can arise in human-robot interactions, offer a new interaction method, and help create rich and continuous interactions between human users and robots.
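
The abstract above describes a mediating interface that multiplexes several interaction channels (icon language, speech, emotion) between user and robot. Below is a minimal sketch of that channel-routing idea; the class, channel names, and handlers are hypothetical and are not taken from the Handybot implementation.

```python
# Hypothetical sketch of a mediating interface that routes several
# interaction channels between a user and a robot (not the Handybot code).
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class MediateInterface:
    robot_handlers: Dict[str, Callable[[str], None]] = field(default_factory=dict)
    log: List[str] = field(default_factory=list)

    def register_channel(self, name: str, handler: Callable[[str], None]) -> None:
        self.robot_handlers[name] = handler

    def send_to_robot(self, channel: str, payload: str) -> None:
        # e.g. channel="icon" for the icon language, "speech" for verbal
        # input, "emotion" for the user's recognized emotional state
        self.log.append(f"user->{channel}: {payload}")
        self.robot_handlers[channel](payload)

    def express_to_user(self, robot_emotion: str) -> None:
        # The interface also shows the robot's emotional state to the user.
        self.log.append(f"robot->user: {robot_emotion}")

handy = MediateInterface()
handy.register_channel("icon", lambda p: print("robot executes task:", p))
handy.register_channel("emotion", lambda p: print("robot receives emotion:", p))
handy.send_to_robot("icon", "CLEAN(living_room)")
handy.send_to_robot("emotion", "happy")
handy.express_to_user("pleased")
```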

Life-like Facial Expression of Mascot-Type Robot Based on Emotional Boundaries

  • 박정우;김우현;이원형;정명진
    • The Journal of Korea Robotics Society / Vol. 4, No. 4 / pp.281-288 / 2009
  • Nowadays, many robots have evolved to imitate human social skills so that sociable interaction with humans is possible. Socially interactive robots require abilities different from those of conventional robots. For instance, human-robot interactions are accompanied by emotion, much like human-human interactions. A robot's emotional expression is therefore very important to humans. This is particularly true of facial expressions, which, among the non-verbal channels, play an especially important role in communication. In this paper, we introduce a method for creating lifelike facial expressions in robots by varying the affect values that constitute the robot's emotions, based on emotional boundaries. The proposed method was examined in experiments with two facial robot simulators.
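
The method above varies affect values within per-emotion boundaries to keep the face lively. Below is a minimal sketch of that idea, assuming a 2-D valence/arousal affect space with invented boundary values; the paper's actual affect dimensions and numbers may differ.

```python
# Sketch of the emotional-boundary idea: each emotion occupies a bounded
# region in an affect space, and small random variation of the affect values
# inside that region keeps the face "alive" without changing the emotion.
import random

EMOTION_BOUNDARIES = {          # (min, max) per affect axis -- assumed values
    "happy":   {"valence": (0.4, 1.0), "arousal": (0.2, 0.8)},
    "sad":     {"valence": (-1.0, -0.3), "arousal": (-0.8, -0.1)},
    "neutral": {"valence": (-0.2, 0.2), "arousal": (-0.2, 0.2)},
}

def next_affect(emotion: str, affect: dict, step: float = 0.05) -> dict:
    """Random-walk the affect values, clamped to the emotion's boundary."""
    bounds = EMOTION_BOUNDARIES[emotion]
    out = {}
    for axis, (lo, hi) in bounds.items():
        value = affect.get(axis, (lo + hi) / 2) + random.uniform(-step, step)
        out[axis] = max(lo, min(hi, value))   # stay inside the boundary
    return out

affect = {"valence": 0.6, "arousal": 0.5}
for _ in range(5):
    affect = next_affect("happy", affect)
    print(affect)   # feed these values to the face actuators / simulator
```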

Mobile Robot Control for Human Following in Intelligent Space

  • Kazuyuki Morioka;Joo-Ho Lee;Zhimin Lin;Hideki Hashimoto
    • Institute of Control, Robotics and Systems: Conference Proceedings / ICCAS 2001 / pp.25.1-25 / 2001
  • Intelligent Space is a space in which many sensors and intelligent devices are distributed. Mobile robots exist in this space as physical agents that provide humans with services. To realize this, humans and mobile robots have to approach each other as closely as possible. Moreover, it is necessary for them to interact naturally. Thus, it is desirable for a mobile robot to carry out human-affinitive movement. In this research, a mobile robot is controlled by the Intelligent Space through its resources. The mobile robot is controlled to follow a walking human as stably and precisely as possible.
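
As a rough illustration of the Intelligent Space side, the sketch below fuses noisy human-position observations from several distributed sensors into one estimate for the robot to follow. The inverse-variance weighting is an assumption for illustration; the paper does not specify this estimator.

```python
# Sketch: distributed sensors each report a noisy 2-D observation of the
# walking human; the space fuses them with an inverse-variance weighted
# mean (an assumption) before handing the estimate to the mobile robot.
def fuse_observations(observations):
    """observations: list of ((x, y), variance) from distributed sensors."""
    wx = wy = wsum = 0.0
    for (x, y), var in observations:
        w = 1.0 / var            # trust low-variance sensors more
        wx, wy, wsum = wx + w * x, wy + w * y, wsum + w
    return (wx / wsum, wy / wsum)

readings = [((2.0, 1.1), 0.04), ((2.2, 0.9), 0.09), ((1.9, 1.0), 0.01)]
print(fuse_observations(readings))   # fused human position sent to the robot
```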

An Interactive Robotic Cane

  • Joongsun Yoon
    • International Journal of Precision Engineering and Manufacturing / Vol. 5, No. 1 / pp.5-12 / 2004
  • A human-friendly interactive system based on the harmonious, symbiotic coexistence of humans and robots is explored. Based on this interactive technology paradigm, a robotic cane is proposed for blind or visually impaired travelers to navigate safely and quickly among obstacles and other hazards faced by blind pedestrians. The proposed robotic cane, "RoJi," consists of a long handle with a button-operated interface and a sensor head unit attached at the distal end of the handle. A series of sensors mounted on the sensor head unit detect obstacles and steer the device around them. The user feels the steering command as a very noticeable physical force through the handle and is able to follow the path of the robotic cane easily, without any conscious effort. The issues discussed include methodologies for human-robot interactions, design issues of an interactive robotic cane, and hardware requirements for efficient human-robot interactions.
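
As a rough sketch of the obstacle-steering behavior described above, the code below picks a steering direction from a fan of range readings: keep heading if the path ahead is clear, otherwise turn toward the largest free range. The sensor layout and thresholds are invented, not the RoJi design.

```python
# Hypothetical steering rule for a sensor head unit with a fan of rangers.
def steering_command(ranges_m, angles_deg, safe_distance=1.0):
    """Return a steering angle in degrees; 0 means keep heading."""
    ahead = min(r for r, a in zip(ranges_m, angles_deg) if abs(a) <= 15)
    if ahead > safe_distance:
        return 0.0                       # path ahead is clear
    # otherwise head toward the sensor reporting the largest free range
    best_range, best_angle = max(zip(ranges_m, angles_deg))
    return best_angle

ranges = [0.6, 0.7, 0.8, 2.2, 2.5]       # meters
angles = [-30, -15, 0, 15, 30]           # degrees relative to heading
print(steering_command(ranges, angles))  # felt by the user as a handle force
```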

Human Centered Robot for Mutual Interaction in Intelligent Space

  • Tae-Seok Jin;Hideki Hashimoto
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 5, No. 3 / pp.246-252 / 2005
  • Intelligent Space is a space in which many sensors and intelligent devices are distributed. Mobile robots exist in this space as physical agents that provide humans with services. To realize this, humans and mobile robots have to approach each other as closely as possible, and it is necessary for them to interact naturally. It is therefore desirable for a mobile robot to carry out human-affinitive movement. In this research, a mobile robot is controlled by the Intelligent Space through its resources, and is made to follow a walking human as stably and precisely as possible. In order to follow a human, the control law is derived from the assumption that the human and the mobile robot are connected by a virtual spring: the input velocity to the mobile robot is generated on the basis of the elastic force from the virtual spring in this model. The performance of this approach is verified by computer simulation and experiment.
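
The virtual spring control law lends itself to a short worked example. In the sketch below, the robot's commanded velocity is proportional to the elastic force of a spring of natural length L0 stretched between robot and human; the gain and length values are assumptions, not the paper's parameters.

```python
# Virtual spring follower: velocity command proportional to Hooke's-law
# force along the robot-human axis. K and L0 are assumed values.
import math

K = 1.2        # spring constant (assumed)
L0 = 1.0       # natural length: desired following distance in meters

def follow_velocity(robot_xy, human_xy):
    """Velocity command (vx, vy) from the virtual spring's elastic force."""
    dx, dy = human_xy[0] - robot_xy[0], human_xy[1] - robot_xy[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-6:
        return (0.0, 0.0)
    stretch = dist - L0              # positive: spring pulls robot forward
    force = K * stretch              # Hooke's law along the spring axis
    return (force * dx / dist, force * dy / dist)

print(follow_velocity((0.0, 0.0), (3.0, 0.0)))   # pulled toward the human
print(follow_velocity((0.0, 0.0), (0.5, 0.0)))   # pushed back: too close
```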

Analysis of User's Eye Gaze Distribution while Interacting with a Robotic Character

  • 장세윤;조혜경
    • The Journal of Korea Robotics Society / Vol. 14, No. 1 / pp.74-79 / 2019
  • In this paper, we develop a virtual experimental environment to investigate users' eye gaze in human-robot social interaction and verify its potential for further studies. The system consists of a 3D robot character capable of hosting simple interactions with a user, and a gaze processing module that records which body part of the robot character, such as the eyes, mouth, or arms, the user is looking at, regardless of whether the robot is stationary or moving. To verify that the results acquired in this virtual environment are aligned with those from physically present robots, we performed robot-guided quiz sessions with 120 participants and compared the participants' gaze patterns with those reported in previous works. The results include the following. First, when interacting with the robot character, the user's gaze pattern showed statistics similar to those of conversations between humans. Second, an animated mouth on the robot character received longer attention than a stationary one. Third, nonverbal interactions such as leakage cues were also effective in interaction with the robot character, and the correct-answer ratios of the cued groups were higher. Finally, gender differences in users' gaze were observed, especially in the frequency of mutual gaze.
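
A core piece of such a system is labelling each gaze sample with the body part it lands on. The sketch below shows one plausible way to do this, mapping gaze points to per-part bounding boxes and accumulating dwell counts; the part names, boxes, and sampling scheme are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical gaze-labelling step: assign each gaze sample to the robot
# character's body part whose bounding box contains it, and count dwell.
from collections import Counter

def label_gaze(samples, boxes_at):
    """samples: [(t, x, y)]; boxes_at(t) -> {part: (x0, y0, x1, y1)}."""
    dwell = Counter()
    for t, x, y in samples:
        for part, (x0, y0, x1, y1) in boxes_at(t).items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell[part] += 1       # one sample of dwell on this part
                break
    return dwell

def boxes_at_time(t):
    # Static boxes here; in the virtual environment they track the animation.
    return {"eyes": (40, 10, 60, 20), "mouth": (42, 25, 58, 32)}

gaze = [(0, 50, 15), (1, 50, 28), (2, 50, 16), (3, 90, 90)]
print(label_gaze(gaze, boxes_at_time))   # Counter({'eyes': 2, 'mouth': 1})
```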

Intelligent Emotional Interface for Personal Robot and Its Application to a Humanoid Robot, AMIET

  • Yong-Ho Seo;Il-Woong Jeong;Hye-Won Jung;Hyun-S. Yang
    • Institute of Control, Robotics and Systems: Conference Proceedings / ICCAS 2004 / pp.1764-1768 / 2004
  • In the near future, robots will be used for personal purposes. To provide useful services to humans, it will be necessary for robots to understand human intentions. Consequently, the development of emotional interfaces for robots is an important extension of human-robot interaction. We designed and developed an intelligent emotional interface for a robot and applied it to our humanoid robot, AMIET. Subsequent human-robot interaction demonstrated that our intelligent emotional interface is very intuitive and friendly.

Recognizing User Engagement and Intentions based on the Annotations of an Interaction Video

  • 장민수;박천수;이대하;김재홍;조영조
    • Journal of Institute of Control, Robotics and Systems / Vol. 20, No. 6 / pp.612-618 / 2014
  • A pattern-classifier-based approach for recognizing the internal states of human participants in interactions is presented along with its experimental results. The approach includes a step for collecting video recordings of human-human or human-robot interactions and subsequently analyzing the videos based on human-coded annotations. The annotations include the social signals directly observed in the video recordings and the internal states of the human participants indirectly inferred from those observed social signals. A pattern classifier is then trained on the annotation data and tested. In our experiments on human-robot interaction, 7 video recordings were collected and annotated with 20 social signals and 7 internal states. Several experiments were performed, yielding an 84.83% recall rate for interaction engagement, 93% for concentration intention, and 81% for task comprehension level using a C4.5-based decision tree classifier.
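
The pipeline above reduces to: encode the coded social signals as feature vectors, label them with the annotated internal states, and train a decision tree. The sketch below illustrates this with invented features and data; the paper uses a C4.5-based tree, for which scikit-learn's CART-based DecisionTreeClassifier (with the entropy criterion) stands in here.

```python
# Toy annotation-to-classifier pipeline; feature names and data are invented.
from sklearn.tree import DecisionTreeClassifier

# Each row: [gaze_at_robot, smiling, nodding, speaking]  (coded 0/1)
X = [[1, 1, 1, 0], [1, 0, 1, 1], [0, 0, 0, 0], [0, 1, 0, 0],
     [1, 1, 0, 1], [0, 0, 1, 0], [1, 0, 0, 0], [0, 0, 0, 1]]
y = ["engaged", "engaged", "disengaged", "disengaged",
     "engaged", "disengaged", "engaged", "disengaged"]

# Entropy criterion approximates C4.5's information-gain splitting.
clf = DecisionTreeClassifier(criterion="entropy", max_depth=3).fit(X, y)
print(clf.predict([[1, 1, 1, 1], [0, 0, 0, 0]]))
```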

Probabilistic Neural Network Based Learning from Fuzzy Voice Commands for Controlling a Robot

  • Chandimal Jayawardena;Keigo Watanabe;Kiyotaka Izumi
    • Institute of Control, Robotics and Systems: Conference Proceedings / ICCAS 2004 / pp.2011-2016 / 2004
  • The study of human-robot communication is one of the most important research areas. Among the various communication media, any useful principle found in voice communication between humans is significant for human-robot interaction as well. The control strategy of most such systems available at present is on/off control: these robots activate a function if the particular word or phrase associated with that function can be recognized in the user's utterance. Recently, there has been some research on controlling robots using information-rich fuzzy commands such as "go little slowly". However, although those works consider the interpretation of such voice commands, they do not treat learning from them. In this paper, learning from such information-rich voice commands for controlling a robot is studied. The new concepts of a coach-player model and a sub-coach are proposed and demonstrated on a PA-10 redundant manipulator.
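
A probabilistic neural network is essentially a Parzen-window classifier: each training sample contributes a Gaussian kernel, and a query is assigned the class with the largest summed activation. The minimal sketch below applies this to invented fuzzy-command feature vectors; the encoding of commands like "go little slowly" is an assumption, not the paper's representation.

```python
# Minimal probabilistic neural network (Parzen-window classifier).
import math
from collections import defaultdict

def pnn_classify(train, x, sigma=0.3):
    """train: [(feature_vector, label)]; returns the most probable label."""
    score = defaultdict(float)
    for xi, label in train:
        d2 = sum((a - b) ** 2 for a, b in zip(xi, x))
        score[label] += math.exp(-d2 / (2 * sigma ** 2))   # Gaussian kernel
    return max(score, key=score.get)

# Features: (fuzzy speed degree, forward direction degree), both in [0, 1]
train = [((0.2, 1.0), "slow_forward"), ((0.3, 1.0), "slow_forward"),
         ((0.9, 1.0), "fast_forward"), ((0.8, 0.0), "fast_turn")]
print(pnn_classify(train, (0.25, 1.0)))   # -> "slow_forward"
```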

When Robots Meet the Elderly: The Contexts of Interaction and the Role of Mediators

  • 신희선;전치형
    • Journal of Science and Technology Studies / Vol. 18, No. 2 / pp.135-179 / 2018
  • In this paper, we examine several programs in which the elderly and robots meet, and analyze the conditions that make elderly-robot interaction possible as well as the role of the third parties who mediate between them. Rather than evaluating robot performance or measuring the effects of robots on the elderly, we attend to the positions, environments, and contexts in which robots are placed when they encounter the elderly. This paper takes the view that the modes of human-robot interaction depend greatly not only on the robot's performance but also on the pre-arranged environment and on contingently arising situations. The materials analyzed are television programs that conducted elderly-robot experiments, namely <할매네 로봇> (tvN), <미래일기> (MBC), and <공존실험, 로봇 인간 곁에 오다> (JTBC), and an educational program using the dementia-prevention robot '실벗' (Silbot). By analyzing the verbal and non-verbal communication between the elderly and robots in these programs, we point out that most elderly-robot interactions are not direct interactions between one elderly person and one robot, but mediated interactions in which a third actor, someone other than the elderly person, plays an important role. The mediator does not take on a temporary, secondary role and then disappear; rather, the mediator functions as a key element that initiates and maintains the relationship between the elderly person and the robot. Mediators arrange encounters between the elderly and robots or trigger their interactions, observe them and intervene verbally at appropriate moments to moderate the relationship, and sometimes physically assist the robot's actions to prevent the failure of elderly-robot interaction. We argue that attending to the existence and role of mediators allows us to better understand and evaluate not only elderly-robot relations but human-robot relations in general. By considering not a one-to-one human-robot relationship but the multi-party human-robot-human configuration and its surrounding context, existing engineering, medical, and social-scientific approaches can be complemented, and a useful perspective on the development, use, and evaluation of robots can be obtained.