• Title/Abstract/Keywords: Human-robot interaction


지능형 로봇 구동을 위한 제스처 인식 기술 동향 (Survey: Gesture Recognition Techniques for Intelligent Robot)

  • 오재용;이칠우
    • 제어로봇시스템학회논문지 / Vol. 10, No. 9 / pp.771-778 / 2004
  • Recently, various applications of robot systems have become more popular, in accordance with the rapid development of computer hardware and software, artificial intelligence, and automatic control technology. Robots were formerly used mainly in industrial fields, but nowadays they are expected to play an important role in home service applications. To make robots more useful, further research is required on natural communication methods between humans and robot systems and on autonomous behavior generation. Gesture recognition is one of the most convenient methods for natural human-robot interaction, so it must be solved for the implementation of intelligent robot systems. In this paper, we describe the state of the art in gesture recognition technologies for intelligent robots according to four approaches: sensor-based, feature-based, appearance-based, and 3D model-based methods. We also discuss open problems and real applications in this research field.

사람과 로봇의 상호작용을 통한 청소 로봇 알고리즘 (Cleaning Robot Algorithm through Human-Robot Interaction)

  • 김승용;김태형
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 35, No. 5 / pp.297-305 / 2008
  • Cleaning robots can be classified by their cleaning method, based on map building and localization, into random and mapping approaches. The random approach does not build a map, so it is price-competitive but inefficient. The mapping approach builds a map, so its cleaning efficiency is high but its price competitiveness is relatively low. To compensate for the drawbacks of both approaches, this paper proposes a cleaning robot that does not rely on expensive sensor data; instead, a person gives the cleaning robot information about the cleaning space, and the robot uses this information to clean more efficiently and cheaply than existing cleaning robots. The efficiency of the proposed method is demonstrated through a performance comparison between existing cleaning robots and the proposed one.
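The human-guided scheme this abstract describes could look roughly as follows: the person supplies a coarse description of the space (here a hypothetical occupancy grid, not the paper's actual representation), and the robot sweeps it systematically instead of moving randomly.

```python
# Illustrative sketch only: a human describes the cleaning space as a small
# occupancy grid (0 = free, 1 = obstacle), and the robot plans a
# boustrophedon (back-and-forth) coverage path over the free cells,
# avoiding random-walk inefficiency without expensive mapping sensors.

def coverage_path(grid):
    """Return a list of (row, col) cells to visit, sweeping row by row."""
    path = []
    for r, row in enumerate(grid):
        # Alternate sweep direction on each row, like a lawnmower pattern.
        cols = range(len(row)) if r % 2 == 0 else range(len(row) - 1, -1, -1)
        for c in cols:
            if row[c] == 0:  # skip obstacle cells
                path.append((r, c))
    return path
```

For a 2x2 room with one obstacle, `coverage_path([[0, 0], [0, 1]])` visits the three free cells in a single pass.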

Soar (State Operator and Result)와 ROS 연계를 통해 거절가능 HRI 태스크의 휴머노이드로봇 구현 (Implementation of a Refusable Human-Robot Interaction Task with Humanoid Robot by Connecting Soar and ROS)

  • 당반치엔;트란트렁틴;팜쑤언쭝;길기종;신용빈;김종욱
    • 로봇학회논문지 / Vol. 12, No. 1 / pp.55-64 / 2017
  • This paper proposes the combination of a cognitive agent architecture named Soar (State, Operator, And Result) and ROS (Robot Operating System), which can serve as a basic framework for a robot agent to interact with and cope with its environment more intelligently and appropriately. The proposed Soar-ROS human-robot interaction (HRI) agent, implemented on a humanoid robot, understands a set of human commands by voice recognition and chooses how to react properly to each command according to the symbol detected by image recognition. The robotic agent is allowed to refuse an inappropriate command such as "go" after it has seen the symbol 'X', which represents that an abnormal or immoral situation has occurred. This simple but meaningful HRI task was successfully demonstrated on the proposed Soar-ROS platform with a small humanoid robot, which implies that extending the present hybrid platform toward an artificial moral agent is possible.
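The refusal behavior the abstract describes can be sketched as a small decision rule; the function and symbol names below are illustrative assumptions, not code from the Soar-ROS agent.

```python
# Hypothetical sketch of the refusable-command logic: the agent follows a
# recognized voice command unless it has seen the 'X' symbol, which marks
# the situation as abnormal, in which case movement commands are refused.

def decide_action(command, seen_symbol=None):
    """Return the robot's reaction to a recognized voice command."""
    if seen_symbol == "X" and command in {"go", "move"}:
        # 'X' signals an abnormal/immoral situation: refuse to move.
        return "refuse"
    if command in {"go", "move", "stop"}:
        return command
    return "ignore"  # unrecognized command
```

So `decide_action("go", "X")` yields `"refuse"`, while `decide_action("go")` simply executes the command.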

감성 상호작용을 갖는 교육용 휴머노이드 로봇 D2 개발 (Design and implement of the Educational Humanoid Robot D2 for Emotional Interaction System)

  • 김도우;정기철;박원성
    • 대한전기학회:학술대회논문집 / 38th Summer Conference (2007) / pp.1777-1778 / 2007
  • In this paper, we design and implement a humanoid robot for educational purposes that can collaborate and communicate with humans. We present an affective human-robot communication system for the humanoid robot D2, which we designed to communicate with a human through dialogue. D2 communicates with humans by understanding and expressing emotion using facial expressions, voice, gestures, and posture. Interaction between a human and the robot is made possible through our affective communication framework, which enables the robot to capture the emotional state of the user and respond appropriately, so that the robot can engage in natural dialogue with a human. To support interaction through voice, gestures, and posture, the developed educational humanoid robot consists of an upper body, two arms, a wheeled mobile platform, and control hardware, including vision and speech capability and various control boards such as motion control boards and a signal processing board handling several types of sensors. Using the educational humanoid robot D2, we have presented successful demonstrations consisting of manipulation tasks with the two arms, object tracking using the vision system, and communication with humans through the emotional interface, synthesized speech, and recognition of speech commands.


Human-Robot Interaction in Real Environments by Audio-Visual Integration

  • Kim, Hyun-Don;Choi, Jong-Suk;Kim, Mun-Sang
    • International Journal of Control, Automation, and Systems / Vol. 5, No. 1 / pp.61-69 / 2007
  • In this paper, we developed both a reliable sound localization system, including a VAD (Voice Activity Detection) component using three microphones, and a face tracking system using a vision camera. Moreover, we propose a way to integrate these systems in human-robot interaction to compensate for errors in speaker localization and to effectively reject unnecessary speech or noise signals arriving from undesired directions. To verify our system's performance, we installed the proposed audio-visual system in a prototype robot called IROBAA (Intelligent ROBot for Active Audition) and demonstrated how the audio-visual system is integrated.
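The audio-visual fusion described here could be sketched as a simple agreement check between modalities; the function name, angle convention, and tolerance below are assumptions for illustration, not details from the paper.

```python
# Illustrative sketch: accept a speech event only when voice activity is
# detected AND the sound-source angle from the 3-microphone localizer
# roughly agrees with the face angle from the camera, so noise arriving
# from directions without a visible face is rejected.

def accept_speaker(vad_active, audio_angle, face_angle, tolerance=15.0):
    """Return True if a detected voice and a tracked face point the same way."""
    if not vad_active or face_angle is None:
        return False  # no voice activity, or no face currently tracked
    return abs(audio_angle - face_angle) <= tolerance
```

With this rule, a voice at 30 degrees matching a face at 40 degrees is accepted, while the same voice with a face at 80 degrees is rejected as coming from an undesired direction.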

인간의 비언어적 행동 특징을 이용한 다중 사용자의 상호작용 의도 분석 (Interaction Intent Analysis of Multiple Persons using Nonverbal Behavior Features)

  • 윤상석;김문상;최문택;송재복
    • 제어로봇시스템학회논문지 / Vol. 19, No. 8 / pp.738-744 / 2013
  • According to cognitive science research, the interaction intent of humans can be estimated through an analysis of their behaviors. This paper proposes a novel methodology for reliable intention analysis by applying this approach. To identify intention, eight behavioral features are extracted from four characteristics of human-human interaction, and we outline a set of core components of nonverbal human behavior. These nonverbal behaviors are associated with various recognition modules based on multimodal sensors: localizing the speaker's sound source in the audition part; recognizing the frontal face and facial expression in the vision part; and estimating human trajectories, body pose and leaning, and hand gestures in the spatial part. As a post-processing step, temporal confidence reasoning is used to improve recognition performance, and an integrated human model quantitatively classifies the intention from multi-dimensional cues by applying weight factors. Thus, interactive robots can make informed engagement decisions to interact effectively with multiple persons. Experimental results show that the proposed scheme works successfully between human users and a robot in human-robot interaction.
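The weighted multi-cue fusion step can be sketched as a normalized weighted sum over per-cue confidences; the cue names, weights, and threshold below are made-up illustrative values, not the paper's learned parameters.

```python
# Hypothetical sketch of weighted intention fusion: each nonverbal cue
# (frontal face, sound source, hand gesture, ...) contributes a confidence
# in [0, 1]; a weight factor per cue scales its contribution, and the
# normalized score is thresholded into an engagement decision.

def interaction_intent(cues, weights, threshold=0.5):
    """Fuse per-cue confidences into a binary engagement decision."""
    score = sum(weights[name] * conf for name, conf in cues.items())
    total = sum(weights[name] for name in cues)
    return (score / total) >= threshold if total else False
```

A person facing the robot while speaking and gesturing scores high and is engaged; a person looking away with no gesture falls below the threshold.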

Agent Mobility in Human Robot Interaction

  • Nguyen, To Dong;Oh, Sang-Rok;You, Bum-Jae
    • 대한전기학회:학술대회논문집 / 36th Summer Conference Proceedings D (2005) / pp.2771-2773 / 2005
  • In networked human-robot interaction, humans can access the services of a robot system through the network. Communication is done by interacting with the distributed sensors via voice or gestures, or by using a network access device such as a computer or PDA. Service organization and exploration are very important for this distributed system. In this paper we propose a new agent-based framework to integrate the partners of this distributed system and to help users explore the services effectively without complicated configuration. Our system consists of several robots, users, and distributed sensors. These partners are connected in a decentralized system with centralized control using agent-based technology. Several experiments were conducted successfully using our framework. The experiments show that the framework increases the availability of the system and reduces the time users and robots need to be connected to the network at the same time. The framework also provides coordination methods for the human-robot interaction system.


Brain-Computer Interface 기반 인간-로봇상호작용 플랫폼 (A Brain-Computer Interface Based Human-Robot Interaction Platform)

  • 윤중선
    • 한국산학기술학회논문지 / Vol. 16, No. 11 / pp.7508-7512 / 2015
  • We propose a Brain-Computer Interface (BCI) based Human-Robot Interaction (HRI) platform that operates machines by interfacing human intention through brainwaves. We introduce the design, operation, and implementation of a platform that performs capture, processing, and execution: capturing a person's intention as EEG signals, extracting or associating the intention from the captured signals, and operating a device according to the extracted intention. As implementation examples of the proposed platform, an interactive game implemented on the processing unit and the control of external devices through the processing unit are described. Various attempts to secure reliability between intention and sensing in the BCI-based platform are also introduced. The proposed platform and its implementation examples are expected to extend to the realization of new BCI-based methods of device control.
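The capture-process-execute structure described in this abstract can be sketched as a three-stage pipeline; the feature (mean signal power), threshold, and command names below are illustrative assumptions, not the platform's actual signal processing.

```python
# Hypothetical sketch of a BCI capture/process/execute pipeline: a raw EEG
# window is captured, a simple feature (mean power) maps it to an intention
# label, and the intention is executed as a device command.

def capture(samples):
    """Capture step: buffer the raw EEG window."""
    return list(samples)

def process(window, threshold=0.5):
    """Processing step: map mean signal power to an intention label."""
    power = sum(s * s for s in window) / len(window)
    return "activate" if power >= threshold else "idle"

def execute(intent):
    """Execution step: translate the intention into a device command."""
    return {"activate": "device_on", "idle": "device_off"}[intent]

def pipeline(samples):
    """Run the full capture -> process -> execute chain."""
    return execute(process(capture(samples)))
```

A strong window such as `[1.0, 1.0]` drives the device on; a weak one such as `[0.1, 0.1]` leaves it off.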