• Title/Summary/Keyword: Human-Robot Interactions

Search Results: 38

A Portable Mediate Interface 'Handybot' for the Rich Human-Robot Interaction (인간과 로봇의 다양한 상호작용을 위한 휴대 매개인터페이스 '핸디봇')

  • Hwang, Jung-Hoon;Kwon, Dong-Soo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.8
    • /
    • pp.735-742
    • /
    • 2007
  • The importance of a robot's interaction capability increases as robot applications extend into humans' daily lives. In this paper, a portable mediating interface, the Handybot, is developed with various interaction channels for use with an intelligent home service robot. The Handybot has a task-oriented channel of an icon language as well as a verbal interface. It also has an emotional interaction channel that recognizes a user's emotional state from facial expression and speech, transmits that state to the robot, and expresses the robot's emotional state to the user. It is expected that the Handybot will reduce spatial problems that may exist in human-robot interactions, propose a new interaction method, and help create rich and continuous interactions between human users and robots.

Life-like Facial Expression of Mascot-Type Robot Based on Emotional Boundaries (감정 경계를 이용한 로봇의 생동감 있는 얼굴 표정 구현)

  • Park, Jeong-Woo;Kim, Woo-Hyun;Lee, Won-Hyong;Chung, Myung-Jin
    • The Journal of Korea Robotics Society
    • /
    • v.4 no.4
    • /
    • pp.281-288
    • /
    • 2009
  • Nowadays, many robots have evolved to imitate human social skills such that sociable interaction with humans is possible. Socially interactive robots require abilities different from those of conventional robots. For instance, human-robot interactions are accompanied by emotion, similar to human-human interactions. Robot emotional expression is thus very important for humans. This is particularly true for facial expressions, which play an important role in communication among other non-verbal channels. In this paper, we introduce a method of creating lifelike facial expressions in robots using variation of the affect values that constitute the robot's emotions, based on emotional boundaries. The proposed method was examined in experiments with two facial robot simulators.


Mobile Robot Control for Human Following in Intelligent Space

  • Kazuyuki Morioka;Lee, Joo-Ho;Zhimin Lin;Hideki Hashimoto
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.25.1-25
    • /
    • 2001
  • Intelligent Space is a space in which many sensors and intelligent devices are distributed. Mobile robots exist in this space as physical agents that provide humans with services. To realize this, humans and mobile robots have to approach each other as closely as possible. Moreover, it is necessary for them to interact naturally. Thus, it is desirable for a mobile robot to carry out human-affinitive movement. In this research, a mobile robot is controlled by the Intelligent Space through its resources. The mobile robot is controlled to follow a walking human as stably and precisely as possible.


An Interactive Robotic Cane

  • Yoon, Joongsun
    • International Journal of Precision Engineering and Manufacturing
    • /
    • v.5 no.1
    • /
    • pp.5-12
    • /
    • 2004
  • A human-friendly interactive system based on the harmonious, symbiotic coexistence of humans and robots is explored. Based on this interactive technology paradigm, a robotic cane is proposed for blind or visually impaired travelers to navigate safely and quickly around the obstacles and other hazards faced by blind pedestrians. The proposed robotic cane, "RoJi," consists of a long handle with a button-operated interface and a sensor head unit attached at the distal end of the handle. A series of sensors mounted on the sensor head unit detect obstacles and steer the device around them. The user feels the steering command as a very noticeable physical force through the handle and is able to follow the path of the robotic cane easily, without conscious effort. The issues discussed include methodologies for human-robot interactions, design issues of an interactive robotic cane, and hardware requirements for efficient human-robot interactions.

Human Centered Robot for Mutual Interaction in Intelligent Space

  • Jin Tae-Seok;Hashimoto Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.5 no.3
    • /
    • pp.246-252
    • /
    • 2005
  • Intelligent Space is a space in which many sensors and intelligent devices are distributed. Mobile robots exist in this space as physical agents that provide humans with services. To realize this, humans and mobile robots have to approach each other as closely as possible. Moreover, it is necessary for them to interact naturally, so it is desirable for a mobile robot to carry out human-affinitive movement. In this research, a mobile robot is controlled by the Intelligent Space through its resources. The mobile robot is controlled to follow a walking human as stably and precisely as possible. In order to follow a human, a control law is derived from the assumption that the human and the mobile robot are connected by a virtual spring. The input velocity to the mobile robot is generated from the elastic force of this virtual spring. The performance is verified by computer simulation and experiment.
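The virtual-spring control law summarized in this abstract can be sketched in a few lines: the robot's velocity command is taken as the elastic force of a spring connecting it to the human. The gain names and values below (`K_SPRING`, `REST_LENGTH`, `DAMPING`) are illustrative assumptions, not parameters from the paper.

```python
import math

# Sketch of a virtual-spring following law: the velocity command is the
# elastic force of a spring connecting the robot to the walking human,
# plus optional damping on the relative velocity.
K_SPRING = 1.2      # spring stiffness (assumed value)
REST_LENGTH = 0.8   # desired following distance in meters (assumed)
DAMPING = 0.4       # damping gain on relative velocity (assumed)

def spring_velocity(robot_pos, human_pos, rel_vel):
    """Velocity command (vx, vy) from the virtual spring's elastic force."""
    dx = human_pos[0] - robot_pos[0]
    dy = human_pos[1] - robot_pos[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-6:                      # human and robot coincide: stop
        return (0.0, 0.0)
    ux, uy = dx / dist, dy / dist        # unit vector toward the human
    stretch = dist - REST_LENGTH         # extension beyond the rest length
    # Hooke's law along the line of sight, minus damping of relative motion.
    return (K_SPRING * stretch * ux - DAMPING * rel_vel[0],
            K_SPRING * stretch * uy - DAMPING * rel_vel[1])
```

With the human 2 m ahead and no relative motion, the command points toward the human; inside the rest length the stretch is negative, so the robot backs away, which is the behavior the spring model is meant to produce.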

Analysis of User's Eye Gaze Distribution while Interacting with a Robotic Character (로봇 캐릭터와의 상호작용에서 사용자의 시선 배분 분석)

  • Jang, Seyun;Cho, Hye-Kyung
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.1
    • /
    • pp.74-79
    • /
    • 2019
  • In this paper, we develop a virtual experimental environment to investigate users' eye gaze in human-robot social interaction and verify its potential for further studies. The system consists of a 3D robot character capable of hosting simple interactions with a user, and a gaze processing module that records which body part of the robot character (such as the eyes, mouth, or arms) the user is looking at, regardless of whether the robot is stationary or moving. To verify that results acquired in this virtual environment align with those of physically existing robots, we performed robot-guided quiz sessions with 120 participants and compared the participants' gaze patterns with those in previous works. The results are as follows. First, when interacting with the robot character, the users' gaze patterns showed statistics similar to those of conversations between humans. Second, an animated mouth of the robot character received longer attention than a stationary one. Third, nonverbal interactions such as leakage cues were also effective in interaction with the robot character, and the correct-answer ratios of the cued groups were higher. Finally, gender differences in the users' gaze were observed, especially in the frequency of mutual gaze.

Intelligent Emotional Interface for Personal Robot and Its Application to a Humanoid Robot, AMIET

  • Seo, Yong-Ho;Jeong, Il-Woong;Jung, Hye-Won;Yang, Hyun-S.
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1764-1768
    • /
    • 2004
  • In the near future, robots will be used for personal purposes. To provide useful services to humans, robots will need to understand human intentions. Consequently, the development of emotional interfaces for robots is an important extension of human-robot interaction. We designed and developed an intelligent emotional interface for the robot and applied the interface to our humanoid robot, AMIET. Subsequent human-robot interaction demonstrated that our intelligent emotional interface is very intuitive and friendly.


Recognizing User Engagement and Intentions based on the Annotations of an Interaction Video (상호작용 영상 주석 기반 사용자 참여도 및 의도 인식)

  • Jang, Minsu;Park, Cheonshu;Lee, Dae-Ha;Kim, Jaehong;Cho, Young-Jo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.6
    • /
    • pp.612-618
    • /
    • 2014
  • A pattern classifier-based approach for recognizing the internal states of human participants in interactions is presented along with experimental results. The approach includes a step for collecting video recordings of human-human or human-robot interactions and subsequently analyzing the videos based on human-coded annotations. The annotations include social signals directly observed in the video recordings and the internal states of human participants indirectly inferred from those observed social signals. A pattern classifier is then trained on the annotation data and tested. In our experiments on human-robot interaction, 7 video recordings were collected and annotated with 20 social signals and 7 internal states. Several experiments were performed, obtaining a recall rate of 84.83% for interaction engagement, 93% for concentration intention, and 81% for task comprehension level using a C4.5-based decision tree classifier.
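The annotate-then-classify pipeline described in this abstract can be sketched as follows: annotated social signals form the feature vector, the classifier predicts an internal state, and per-state recall is computed against the human-coded labels. The signal names, the one-rule stand-in for the C4.5 tree, and the toy segments below are illustrative assumptions, not the paper's 20 signals or 7 internal states.

```python
# Sketch of the annotation-based recognition pipeline: each video segment
# carries human-coded social signals plus an internal-state label; a
# classifier maps signals to a state, and recall is computed per state.

def classify_engagement(segment):
    """Toy one-rule stand-in for the trained decision tree classifier."""
    if segment["gaze_at_robot"] and segment["nodding"]:
        return "engaged"
    return "not_engaged"

def recall(segments, positive="engaged"):
    """Fraction of truly positive segments the classifier recovers."""
    true_pos = sum(1 for s in segments
                   if s["label"] == positive
                   and classify_engagement(s) == positive)
    actual_pos = sum(1 for s in segments if s["label"] == positive)
    return true_pos / actual_pos if actual_pos else 0.0

# Hypothetical annotated segments (signal values and labels are made up).
segments = [
    {"gaze_at_robot": 1, "nodding": 1, "smiling": 0, "label": "engaged"},
    {"gaze_at_robot": 1, "nodding": 0, "smiling": 1, "label": "engaged"},
    {"gaze_at_robot": 0, "nodding": 0, "smiling": 0, "label": "not_engaged"},
]
```

In a real replication, the one-rule function would be replaced by a tree learner trained on the annotation data; the recall computation is the same.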

Probabilistic Neural Network Based Learning from Fuzzy Voice Commands for Controlling a Robot

  • Jayawardena, Chandimal;Watanabe, Keigo;Izumi, Kiyotaka
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.2011-2016
    • /
    • 2004
  • The study of human-robot communication is one of the most important research areas. Among the various communication media, any useful law found in voice communication between humans is significant for human-robot interaction as well. The control strategy of most such systems available at present is on/off control: these robots activate a function if a particular word or phrase associated with that function is recognized in the user's utterance. Recently, there has been some research on controlling robots using information-rich fuzzy commands such as "go little slowly". However, although voice command interpretation has been considered in those works, learning from such commands has not been treated. In this paper, learning from such information-rich voice commands for controlling a robot is studied. New concepts of the coach-player model and the sub-coach are proposed, and these concepts are demonstrated on a PA-10 redundant manipulator.


When Robots Meet the Elderly: The Contexts of Interaction and the Role of Mediators (노인과 로봇은 어떻게 만나는가: 상호작용의 조건과 매개자의 역할)

  • Shin, Heesun;Jeon, Chihyung
    • Journal of Science and Technology Studies
    • /
    • v.18 no.2
    • /
    • pp.135-179
    • /
    • 2018
  • How do robots interact with the elderly? In this paper, we analyze the contexts of interaction between robots and the elderly and the role of mediators in initiating, facilitating, and maintaining the interaction. We do not attempt to evaluate the robot's performance or measure the impact of robots on the elderly. Instead, we focus on the circumstances and contexts within which a robot is situated as it interacts with the elderly. Our premise is that the success of human-robot interaction does not depend solely on the robot's technical capability, but also on the pre-arranged settings and local contingencies at the site of interaction. We select three television shows that feature robots for the elderly and one "dementia-prevention" robot in a regional healthcare center as our sites for observing robot-elderly interaction: "Grandma's Robot" (tvN), "Co-existence Experiment" (JTBC), "Future Diary" (MBC), and the Silbot class in Suwon. By analyzing verbal and non-verbal interactions between the elderly and the robots in these programs, we point out that in most cases the robots and the elderly do not meet one-to-one; the interaction is usually mediated by an actor who is not an old person. These mediators are not temporary or secondary components in the robot-elderly interaction; they play a key role in the relationship by arranging the first meeting, triggering initial interactions, and carefully observing unfolding interactions. At critical moments, the mediators prevent the interaction from falling apart by intervening verbally or physically. Based on our observation of the robot-elderly interaction, we argue that we can better understand and evaluate human-robot interaction in general by paying attention to the existence and role of the mediators. We suggest that researchers in human-robot interaction should expand their analytical focus from one-to-one interactions between humans and robots to human-robot-human interactions in diverse real-world situations.