• Title/Abstract/Keyword: Human robot

Search results: 1,374 (processing time: 0.034 seconds)

Technical Trend of the Lower Limb Exoskeleton System for the Performance Enhancement

  • 이희돈;한창수
    • 제어로봇시스템학회논문지 / Vol. 20, No. 3 / pp. 364-371 / 2014
  • The purpose of this paper is to review recent developments in lower limb exoskeletons. The exoskeleton system is a human-robot cooperation system that enhances the performance of the wearer in various environments, with the human operator, rather than the robot's artificial intelligence, in charge of position control, contextual perception, and motion signal generation. The system takes the form of a mechanical structure attached to the exterior of the human body to augment the wearer's muscular power. The paper gives an overview of the development history of exoskeleton systems and of their three main application areas: the military/industrial, medical/rehabilitation, and social welfare fields. The key technologies of exoskeleton systems are then reviewed from several viewpoints: the exoskeleton mechanism, the human-robot interface, and human-robot cooperation control.

Psychomotorik-based Play Activities for Children by In-home Social Robot

  • 김다영;최지환;김주현;김민규;정재희;서갑호;이원형
    • 로봇학회논문지 / Vol. 17, No. 4 / pp. 447-454 / 2022
  • This paper presents psychomotorik-based play activities executed by an in-home social robot to support children's social and emotional development. Based on the theory and practice of psychomotorik therapy, the play activities were designed in close collaboration among psychomotorik experts, service designers, and robotics engineers, and are classified into four categories according to the main areas of child development. A robotic system capable of verbal and nonverbal expression was developed not only to play the games with children but also to keep them continuously interested during the play activities. Finally, the psychomotorik-based play service scenario and the interactive robot system were validated by an expert group from the domain of child psychotherapy. The evaluation showed that, from the experts' point of view, the play service and the robot system were appropriately developed for children.

Mobile Robot Control for Human Following in Intelligent Space

  • Kazuyuki Morioka;Lee, Joo-Ho;Zhimin Lin;Hideki Hashimoto
    • 제어로봇시스템학회 학술대회논문집 / ICCAS 2001 / pp. 25.1-25 / 2001
  • Intelligent Space is a space in which many sensors and intelligent devices are distributed. Mobile robots exist in this space as physical agents that provide humans with services. To realize this, humans and mobile robots have to approach each other as closely as possible, and they need to interact naturally. It is therefore desirable for a mobile robot to carry out human-affinitive movement. In this research, a mobile robot is controlled by the Intelligent Space through its distributed resources so that it follows a walking human as stably and precisely as possible.

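The abstract above does not spell out the control law, so the following is only a minimal sketch of a human-following behavior of that general kind. It assumes the Intelligent Space's distributed sensors already report the human's position and the robot's pose in a common frame; the function name follow_step, the gains, and the standoff distance are illustrative choices, not values from the paper.

```python
import math

# Minimal human-following sketch (illustrative only). Assumes the Intelligent
# Space's distributed sensors provide the human's position and the robot's
# pose in a common world frame.

FOLLOW_DIST = 1.0        # desired standoff distance behind the human [m]
K_LIN, K_ANG = 0.8, 2.0  # proportional gains (illustrative values)

def follow_step(robot_x, robot_y, robot_theta, human_x, human_y):
    """Return (linear_velocity, angular_velocity) that steers the robot
    toward the human while keeping FOLLOW_DIST of clearance."""
    dx, dy = human_x - robot_x, human_y - robot_y
    dist = math.hypot(dx, dy)
    heading_to_human = math.atan2(dy, dx)
    # Smallest signed heading error in (-pi, pi]
    err = (heading_to_human - robot_theta + math.pi) % (2 * math.pi) - math.pi
    v = K_LIN * max(0.0, dist - FOLLOW_DIST)  # stop once close enough
    w = K_ANG * err
    return v, w

# Example: robot at the origin facing +x, human 3 m ahead and 1 m to the left.
print(follow_step(0.0, 0.0, 0.0, 3.0, 1.0))
```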

An analysis of the component of Human-Robot Interaction for Intelligent room

  • Park, Jong-Chan;Kwon, Dong-Soo
    • 제어로봇시스템학회 학술대회논문집 / ICCAS 2005 / pp. 2143-2147 / 2005
  • Human-Robot Interaction (HRI) has recently become one of the most important issues in the field of robotics. Understanding and predicting the intentions of human users is a major difficulty for robotic programs. In this paper we suggest an interaction method that allows the robot to carry out the human user's desires in an intelligent-room domain, even when the user does not give a specific command for the action. To achieve this, we constructed a full system architecture for the intelligent room in which the following components are present and sequentially interconnected: decision-making based on a Bayesian belief network, responding to human commands, and generating queries to remove ambiguities. The robot obtains the necessary information by analyzing the user's condition and the environmental state of the room. This information is then used to evaluate the probabilities at the output nodes of the Bayesian belief network, which is composed of nodes with several states and the causal relationships between them. Our study shows that the suggested system and method improve the robot's ability to understand human commands, infer human desires, and predict human intentions, resulting in a comfortable intelligent room for the human user.

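As a toy illustration of the decision-making component described above, the sketch below evaluates a posterior over possible user desires from observed evidence about the user and the room. The network here is deliberately simple (a single desire node with conditionally independent observations), and every node name, state, and probability is invented for the example; the paper's actual Bayesian belief network is not given in the abstract.

```python
# Toy posterior over user desires, in the spirit of the Bayesian belief
# network described above. All node names, states, and probabilities are
# invented for this example.

PRIOR = {"turn_on_light": 0.2, "play_music": 0.3, "do_nothing": 0.5}

# P(observation is True | desire), assuming observations are conditionally
# independent given the desire.
LIKELIHOOD = {
    "turn_on_light": {"room_is_dark": 0.9, "user_is_reading": 0.8},
    "play_music":    {"room_is_dark": 0.3, "user_is_reading": 0.2},
    "do_nothing":    {"room_is_dark": 0.4, "user_is_reading": 0.3},
}

def posterior(evidence):
    """Posterior over desires given a dict of observed boolean evidence."""
    scores = {}
    for desire, prior in PRIOR.items():
        p = prior
        for obs, value in evidence.items():
            p_true = LIKELIHOOD[desire][obs]
            p *= p_true if value else 1.0 - p_true
        scores[desire] = p
    total = sum(scores.values())
    return {desire: p / total for desire, p in scores.items()}

# A dark room and a reading user make "turn_on_light" the most probable desire.
print(posterior({"room_is_dark": True, "user_is_reading": True}))
```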

Intelligent Emotional Interface for Personal Robot and Its Application to a Humanoid Robot, AMIET

  • Seo, Yong-Ho;Jeong, Il-Woong;Jung, Hye-Won;Yang, Hyun-S.
    • 제어로봇시스템학회 학술대회논문집 / ICCAS 2004 / pp. 1764-1768 / 2004
  • In the near future, robots will be used for personal purposes. To provide useful services to humans, robots will need to understand human intentions, so the development of emotional interfaces for robots is an important extension of human-robot interaction. We designed and developed an intelligent emotional interface for a personal robot and applied it to our humanoid robot, AMIET. Subsequent human-robot interaction demonstrated that the proposed intelligent emotional interface is intuitive and friendly.


Life-like Facial Expression of Mascot-Type Robot Based on Emotional Boundaries

  • 박정우;김우현;이원형;정명진
    • 로봇학회논문지 / Vol. 4, No. 4 / pp. 281-288 / 2009
  • Nowadays, many robots have evolved to imitate human social skills so that sociable interaction with humans is possible. Socially interactive robots require abilities different from those of conventional robots. For instance, human-robot interactions are accompanied by emotion, as human-human interactions are, so robot emotional expression is very important for humans. This is particularly true for facial expressions, which play an important role among the non-verbal forms of communication. In this paper, we introduce a method of creating lifelike facial expressions in robots using variations of the affect values that constitute the robot's emotions, based on emotional boundaries. The proposed method was examined through experiments with two facial robot simulators.

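The sketch below is a rough, hypothetical rendering of the emotional-boundary idea described above: a continuous affect state is mapped to a discrete facial expression only once it crosses out of a neutral region, with intensity growing past the boundary. The two-dimensional valence/arousal space, the boundary radius, and the expression labels are assumptions made for illustration, not the paper's actual affect model.

```python
import math

# Hypothetical mapping from a continuous affect state to a facial expression
# using a "neutral" boundary; the affect space and labels are assumptions.

NEUTRAL_RADIUS = 0.3  # inside this radius the face stays neutral

def facial_expression(valence, arousal):
    """Return (expression_label, intensity in [0, 1]) for an affect point."""
    magnitude = math.hypot(valence, arousal)
    if magnitude < NEUTRAL_RADIUS:
        return "neutral", 0.0
    # Quadrant of the valence/arousal plane picks the discrete expression.
    if valence >= 0:
        label = "happy" if arousal >= 0 else "relaxed"
    else:
        label = "angry" if arousal >= 0 else "sad"
    # Intensity grows with distance past the neutral boundary.
    intensity = min(1.0, (magnitude - NEUTRAL_RADIUS) / (1.0 - NEUTRAL_RADIUS))
    return label, intensity

print(facial_expression(0.7, 0.5))   # -> ('happy', ~0.8)
print(facial_expression(-0.2, 0.1))  # -> ('neutral', 0.0)
```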

Survey: Gesture Recognition Techniques for Intelligent Robot

  • 오재용;이칠우
    • 제어로봇시스템학회논문지 / Vol. 10, No. 9 / pp. 771-778 / 2004
  • Recently, various applications of robot systems have become more popular owing to the rapid development of computer hardware and software, artificial intelligence, and automatic control technology. Robots were formerly used mainly in industrial fields, but they are now expected to play an important role in home service applications. To make robots more useful, further research is needed on natural communication methods between humans and robot systems and on autonomous behavior generation. Gesture recognition is one of the most convenient methods for natural human-robot interaction, so it must be addressed in the implementation of intelligent robot systems. In this paper, we describe the state of the art of gesture recognition technologies for intelligent robots in four categories: sensor-based, feature-based, appearance-based, and 3D model-based methods. We also discuss open problems and real applications in this research field.

Control Gait Pattern of Biped Robot based on Human's Sagittal Plane Gait Energy

  • 하승석;한영준;한헌수
    • 제어로봇시스템학회논문지 / Vol. 14, No. 2 / pp. 148-155 / 2008
  • This paper proposes a method of adaptively generating a gait pattern for a biped robot. The gait synthesis is based on analysis of the human gait pattern, and the proposed method can easily be applied to generate a natural and stable gait pattern for any biped robot. To analyze the human gait, sequential images of the gait on the sagittal plane are acquired, from which the gait control values are extracted. The gait pattern of the biped robot on the sagittal plane is then adaptively generated by a genetic algorithm using the human gait control values. However, gait trajectories on the sagittal plane alone are not enough to construct the complete gait pattern, because the biped robot moves in three-dimensional space. Therefore, the gait pattern on the frontal plane, generated from the Zero Moment Point (ZMP), is added to the pattern acquired on the sagittal plane. Consequently, a natural and stable walking pattern for the biped robot is obtained, as demonstrated by the experiments.
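For reference, the frontal-plane gait in the abstract above is generated from the Zero Moment Point; the snippet below shows the standard point-mass ZMP formula (rotational inertia of the links neglected), which is one common way such a reference is computed. The masses, positions, and accelerations are placeholder values, not data from the paper.

```python
# Standard point-mass Zero Moment Point formula (rotational inertia of the
# links neglected); all numeric values below are placeholders.

G = 9.81  # gravitational acceleration [m/s^2]

def zmp_x(masses, xs, zs, ax, az):
    """x-coordinate of the ZMP for point masses m_i at (x_i, z_i) with
    accelerations (ax_i, az_i):
        x_zmp = sum(m_i*((az_i + g)*x_i - ax_i*z_i)) / sum(m_i*(az_i + g))
    """
    num = sum(m * ((a_z + G) * x - a_x * z)
              for m, x, z, a_x, a_z in zip(masses, xs, zs, ax, az))
    den = sum(m * (a_z + G) for m, a_z in zip(masses, az))
    return num / den

# Two-mass toy example: a torso and a swinging leg of a walking robot.
print(zmp_x(masses=[8.0, 2.0], xs=[0.02, 0.10], zs=[0.60, 0.30],
            ax=[0.5, 1.5], az=[0.0, 0.1]))
```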

Cooperative Robot for Table Balancing Using Q-learning

  • 김예원;강보영
    • 로봇학회논문지 / Vol. 15, No. 4 / pp. 404-412 / 2020
  • Everyday tasks often involve at least two people moving an object such as a table or a bed, and the balance of the object changes with each person's actions. However, much previous work performed such tasks with robots alone, without factoring in human cooperation. In this paper, we therefore propose a cooperative table-balancing robot based on Q-learning that enables cooperative work between a human and a robot. The proposed robot recognizes the human's action from camera images of the table's state and performs the corresponding table-balancing action without high-performance equipment. Human actions are classified with a deep learning model, specifically AlexNet, with an accuracy of 96.9% under 10-fold cross-validation. The Q-learning experiment was carried out over 2,000 episodes with 200 trials, and the results show that the Q function converged stably within this number of episodes; this stable convergence determined the Q-learning policies for the robot's actions. A video of the robot cooperating with a human on the table-balancing task using the proposed Q-learning can be found at http://ibot.knu.ac.kr/videocooperation.html.
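As a sketch of the kind of Q-learning loop the abstract describes, the snippet below runs a tabular update in which the state combines a recognized human action with a coarse table-tilt reading and the robot chooses a lift/hold/lower action. The state and action sets, the reward, and the toy environment are stand-ins invented for illustration; the paper's actual setup (AlexNet-based action recognition, 2,000 episodes with 200 trials) is only loosely mirrored here.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch in the spirit of the abstract above.
# States, actions, reward, and environment are illustrative stand-ins.

ACTIONS = ["lift", "hold", "lower"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(float)  # Q[(state, action)] -> value

def choose_action(state):
    """Epsilon-greedy action selection over the tabular Q function."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One Q-learning backup: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def step(state, action):
    """Toy environment: reward the robot when its action matches the human's."""
    human_action, _tilt = state
    matched = (human_action == "human_lifts" and action == "lift") or \
              (human_action == "human_lowers" and action == "lower") or \
              (human_action == "human_holds" and action == "hold")
    reward = 1.0 if matched else -1.0
    next_state = (random.choice(["human_lifts", "human_holds", "human_lowers"]),
                  "level" if matched else "tilted")
    return reward, next_state

state = ("human_holds", "level")
for _ in range(2000):  # episode count loosely mirrors the abstract
    action = choose_action(state)
    reward, next_state = step(state, action)
    update(state, action, reward, next_state)
    state = next_state

# After training, the greedy action when the human lifts should be 'lift'.
print(max(ACTIONS, key=lambda a: Q[(("human_lifts", "tilted"), a)]))
```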