• Title/Abstract/Keyword: Robot-based Learning

Search results: 479 items

Topological Map을 이용한 이동로봇의 행위기반 학습제어기 (Behavior-based Learning Controller for Mobile Robot using Topological Map)

  • 이석주;문정현;한신;조영조;김광배
    • 대한전기학회:학술대회논문집 / 대한전기학회 2000년도 하계학술대회 논문집 D / pp.2834-2836 / 2000
  • This paper introduces a behavior-based learning controller for a mobile robot using a topological map. When the mobile robot navigates to the goal position, it utilizes the given topological map and its own location. While navigating in an unknown environment, the robot classifies its situation using ultrasonic sensor data, computes each motor schema multiplied by its respective gain for every behavior, and then takes an action according to the vector sum of all the motor schemas. After each action, the robot's location in the given topological map is fed to the learning module, which adapts the weights of the neural network used for gain learning. Simulation results show that the robot navigates to the goal position successfully after iterative gain learning with topological information.
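
A minimal sketch of the gain-weighted motor-schema combination the abstract describes, assuming two hypothetical behaviors (goal seeking and ultrasonic obstacle avoidance) and a gain vector supplied by the gain-learning network; the paper's actual schema set and sensor layout are not given in the abstract.

```python
import numpy as np

# Hypothetical motor schemas: each maps sensing to a 2-D velocity vector.
def move_to_goal(goal_vec):
    n = np.linalg.norm(goal_vec)
    return goal_vec / n if n > 0 else np.zeros(2)

def avoid_obstacles(sonar):
    # Repulsive vector away from close ultrasonic readings (assumed ring layout).
    angles = np.linspace(0, 2 * np.pi, len(sonar), endpoint=False)
    weights = 1.0 / np.maximum(sonar, 1e-3)
    directions = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return -(directions * weights[:, None]).sum(axis=0) / len(sonar)

def combined_action(goal_vec, sonar, gains):
    """Vector sum of all motor schemas, each multiplied by its learned gain."""
    schemas = np.stack([move_to_goal(goal_vec), avoid_obstacles(sonar)])
    return gains @ schemas  # gains adapted by the neural network after each action
```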

휴머노이드 로봇 HUMIC 개발 및 Gazebo 시뮬레이터를 이용한 강화학습 기반 로봇 행동 지능 연구 (Development of Humanoid Robot HUMIC and Reinforcement Learning-based Robot Behavior Intelligence using Gazebo Simulator)

  • 김영기;한지형
    • 로봇학회논문지 / 제16권3호 / pp.260-269 / 2021
  • To verify performance or conduct experiments with actual robots, significant costs are required, such as robot hardware, experimental space, and time. A simulation environment is therefore an essential tool in robotics research. In this paper, we develop the HUMIC simulator using ROS and Gazebo. HUMIC is a humanoid robot developed by the HCIR Lab. for human-robot interaction; its upper body is similar to a human's, with a head, body, waist, arms, and hands. Gazebo is an open-source three-dimensional robot simulator that can simulate robots accurately and efficiently in simulated indoor and outdoor environments. We develop a GUI so that users can easily run and manipulate the HUMIC simulator, and we release both the simulator and the GUI for other robotics researchers to use. We successfully test the developed HUMIC simulator on object detection and reinforcement learning-based navigation tasks. As further work, we plan to develop robot behavior intelligence based on reinforcement learning algorithms using the developed simulator and then apply it to the real robot.
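
The abstract does not list the simulator's interfaces, so the following is only a sketch of commanding the simulated robot from a ROS node, with /humic/cmd_vel as an assumed topic name.

```python
import rospy
from geometry_msgs.msg import Twist

def drive_forward():
    """Publish a slow forward velocity to the simulated robot at 10 Hz."""
    rospy.init_node("humic_drive_demo")
    pub = rospy.Publisher("/humic/cmd_vel", Twist, queue_size=1)  # assumed topic name
    rate = rospy.Rate(10)
    cmd = Twist()
    cmd.linear.x = 0.2  # m/s
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    drive_forward()
```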

이족보행로봇의 걸음새 제어를 위한 지능형 학습 제어기의 구현 (Implementation of an Intelligent Learning Controller for Gait Control of Biped Walking Robot)

  • 임동철;국태용
    • 전기학회논문지P / 제59권1호 / pp.29-34 / 2010
  • This paper presents an intelligent learning controller for the repetitive walking motion of a biped walking robot. The proposed learning controller consists of an iterative learning controller and a direct learning controller. In the iterative learning controller, a PID feedback controller stabilizes the learning control system while a feedforward learning controller compensates for the nonlinearity of the uncertain biped walking robot. In the direct learning controller, the desired learning input for new joint trajectories with time scales different from the learned ones is generated directly from the input profiles obtained in the iterative learning process. The effectiveness and tracking performance of the proposed learning controller for biped robotic motion are shown by mathematical analysis and computer simulation with a 12-DOF biped walking robot.
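
For reference, a generic form of the iterative learning update the abstract outlines, with the PID term stabilizing each trial and the feedforward term refined between trials; the paper's exact learning gains and the direct time-scale transformation are not given in the abstract.

```latex
u_{k}(t) = u_{k}^{\mathrm{ff}}(t)
         + \underbrace{K_P e_k(t) + K_I \int_0^{t} e_k(\tau)\,d\tau + K_D \dot{e}_k(t)}_{\text{stabilizing PID feedback}},
\qquad
u_{k+1}^{\mathrm{ff}}(t) = u_{k}^{\mathrm{ff}}(t) + \Gamma\, e_k(t)
```

where $e_k$ is the joint tracking error at iteration $k$ and $\Gamma$ is a learning gain.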

효과적인 인간-로봇 상호작용을 위한 딥러닝 기반 로봇 비전 자연어 설명문 생성 및 발화 기술 (Robot Vision to Audio Description Based on Deep Learning for Effective Human-Robot Interaction)

  • 박동건;강경민;배진우;한지형
    • 로봇학회논문지 / 제14권1호 / pp.22-30 / 2019
  • For effective human-robot interaction, a robot not only needs to understand the current situational context well, but also needs to convey its understanding to the human participant in an efficient way. The most convenient way to deliver the robot's understanding is for the robot to express it using voice and natural language. Recently, artificial intelligence for video understanding and natural language processing has developed very rapidly, especially based on deep learning. Thus, this paper proposes a deep learning-based method for generating audio descriptions from robot vision. The applied model is a pipeline of two deep learning models: one generates a natural language sentence from robot vision, and the other generates voice from the generated sentence. We also conduct a real-robot experiment to show the effectiveness of our method in human-robot interaction.
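
A structural sketch of the two-stage pipeline described above; the caption and speech models below are stand-ins, since the abstract does not name the actual network architectures.

```python
class CaptionModel:
    """Stand-in for the vision-to-sentence network (e.g., a CNN encoder with a recurrent decoder)."""
    def describe(self, image) -> str:
        # The real model encodes the camera frame and decodes a natural-language sentence.
        return "a person is handing a cup to the robot"

class TextToSpeech:
    """Stand-in for the sentence-to-waveform synthesizer."""
    def synthesize(self, sentence: str) -> bytes:
        return sentence.encode("utf-8")  # placeholder for audio samples

def vision_to_audio(image, captioner: CaptionModel, tts: TextToSpeech) -> bytes:
    """Chain the two models: camera frame -> sentence -> speech audio."""
    return tts.synthesize(captioner.describe(image))
```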

A study on Indirect Adaptive Decentralized Learning Control of the Vertical Multiple Dynamic System

  • Lee, Soo-Cheol;Park, Seok-Sun;Lee, Jeh-Won
    • International Journal of Precision Engineering and Manufacturing / 제7권1호 / pp.62-66 / 2006
  • Learning control develops controllers that learn to improve their performance at executing a given task based on experience performing that specific task. In previous work, the authors presented the iterative precision of linear decentralized learning control based on a p-integrated learning method for vertical dynamic multiple systems. This paper develops an indirect decentralized learning control based on an adaptive control method. The original motivation of the learning control field was learning in robots doing repetitive tasks, such as assembly-line work. The paper starts with decentralized discrete-time systems and progresses to the robot application, modeling the robot as a time-varying linear system in the neighborhood of the nominal trajectory and using the usual decentralized robot controllers, treating each link as if it were independent of any coupling with the other links. Numerical simulations of a vertical dynamic robot illustrate the techniques, and the learning methods are shown to achieve iterative precision for each link.
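
A minimal sketch of the decentralized idea described above, in which each link refines its own feedforward input from its own error only; the scalar gains are placeholders, and the paper's indirect adaptive update law is not reproduced here.

```python
def decentralized_learning_step(u_ff, errors, gains):
    """One learning pass: every joint updates its feedforward input independently,
    ignoring coupling with the other links (placeholder scalar gains)."""
    return [u + g * e for u, g, e in zip(u_ff, gains, errors)]
```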

이족 휴머노이드 로봇의 유연한 보행을 위한 학습기반 뉴로-퍼지시스템의 응용 (Use of Learning Based Neuro-fuzzy System for Flexible Walking of Biped Humanoid Robot)

  • 김동원;강태구;황상현;박귀태
    • 대한전기학회:학술대회논문집 / 대한전기학회 2006년 학술대회 논문집 정보 및 제어부문 / pp.539-541 / 2006
  • Biped locomotion is a popular research area in robotics due to the high adaptability of a walking robot in an unstructured environment. When attempting to automate the motion-planning process for a biped walking robot, one of the main issues is assurance of the dynamic stability of motion, which can be categorized into three general groups: body stability, body path stability, and gait stability. The zero moment point (ZMP), a point where the total forces and moments acting on the robot are zero, is usually employed as a basic component for dynamically stable motion. In this paper, learning-based neuro-fuzzy systems have been developed and applied to model the ZMP trajectory of a biped walking robot. As a result, we can provide improved insight into physical walking mechanisms.
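
For reference, the standard sagittal-plane ZMP expression that such a model approximates, written in terms of the link masses $m_i$, positions $(x_i, z_i)$, accelerations, and gravity $g$:

```latex
x_{\mathrm{ZMP}}
  = \frac{\sum_i m_i\,(\ddot{z}_i + g)\,x_i - \sum_i m_i\,\ddot{x}_i\,z_i}
         {\sum_i m_i\,(\ddot{z}_i + g)}
```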

테이블 균형맞춤 작업이 가능한 Q-학습 기반 협력로봇 개발 (Cooperative Robot for Table Balancing Using Q-learning)

  • 김예원;강보영
    • 로봇학회논문지 / 제15권4호 / pp.404-412 / 2020
  • Everyday human tasks typically involve at least two people moving objects such as tables and beds, and the balance of such an object changes based on each person's action. However, many previous studies performed such tasks with robots alone, without factoring in human cooperation. Therefore, in this paper, we propose a cooperative robot for table balancing using Q-learning that enables cooperative work between a human and a robot. The proposed robot recognizes the human's action from camera images of the table's state and performs the corresponding table-balancing action, without requiring high-performance equipment. The classification of human actions uses a deep learning model, specifically AlexNet, and achieves an accuracy of 96.9% under 10-fold cross-validation. The Q-learning experiment was carried out over 2,000 episodes of 200 trials each, and the results show that the Q function converged stably within this number of episodes. This stable convergence determined the Q-learning policies for the robot's actions. A video of the robot cooperating with a human on the table-balancing task using the proposed Q-learning can be found at http://ibot.knu.ac.kr/videocooperation.html.
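
A minimal tabular Q-learning skeleton matching the scale reported above (2,000 episodes of 200 trials); the state/action sets, reward, and environment interface are placeholders rather than the paper's actual design.

```python
import numpy as np

def train(env, n_states, n_actions, episodes=2000, steps=200,
          alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning; env.step(a) -> (next_state, reward, done)."""
    Q = np.zeros((n_states, n_actions))
    rng = np.random.default_rng(seed)
    for _ in range(episodes):
        s = env.reset()
        for _ in range(steps):
            a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
            s_next, r, done = env.step(a)  # robot tilts or levels its side of the table
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
            if done:
                break
    return Q
```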

메타강화학습을 이용한 수중로봇 매니퓰레이터 제어 (Control for Manipulator of an Underwater Robot Using Meta Reinforcement Learning)

  • 문지윤;문장혁;배성훈
    • 한국전자통신학회논문지 / 제16권1호 / pp.95-100 / 2021
  • This paper proposes a model-based meta reinforcement learning method for controlling an underwater construction robot. Model-based meta reinforcement learning quickly updates the model using recent experience from the real application. The updated model is then passed to a model predictive controller, which computes the manipulator's control inputs to reach the target position. Simulation environments for model-based meta reinforcement learning were built using MuJoCo and Gazebo, and the proposed method was validated under the model uncertainty present in the real control environment of the underwater construction robot.
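
A sketch of the model-predictive-control step described above, using random-shooting planning over a learned dynamics model; the interface model.predict(state, action) is hypothetical, and in the paper's setting the model would first be meta-updated from recent experience.

```python
import numpy as np

def mpc_action(model, state, goal, act_dim, horizon=10, n_candidates=256, rng=None):
    """Pick the first action of the sampled sequence whose predicted rollout
    brings the manipulator closest to the target position."""
    rng = rng or np.random.default_rng()
    best_cost, best_first = np.inf, np.zeros(act_dim)
    for _ in range(n_candidates):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, act_dim))  # candidate sequence
        s, cost = state, 0.0
        for a in actions:
            s = model.predict(s, a)           # roll the learned model forward
            cost += np.linalg.norm(s - goal)  # distance to the target position
        if cost < best_cost:
            best_cost, best_first = cost, actions[0]
    return best_first  # apply only the first action, then re-plan (MPC)
```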

원격로봇 보조 언어교육의 아동 상호작용 질적 탐색 (Qualitative Exploration on Children's Interactions in Telepresence Robot Assisted Language Learning)

  • 신경완;한정혜
    • 한국융합학회논문지 / 제8권3호 / pp.177-184 / 2017
  • This paper studies child-robot interaction in remote language education, comparing two forms of video-based classes with a telepresence-robot class. Experimental classes were conducted with a remotely located American child and six Korean children, and the data were analyzed through narration from one-on-one interviews and observation. The results show that the telepresence-robot class produced more active interaction than either form of video-based class.

멀티모달 상호작용 중심의 로봇기반교육 콘텐츠를 활용한 r-러닝 시스템 사용의도 분석 (A Study on the Intention to Use a Robot-based Learning System with Multi-Modal Interaction)

  • 오준석;조혜경
    • 제어로봇시스템학회논문지 / 제20권6호 / pp.619-624 / 2014
  • This paper introduces a robot-based learning system designed to teach multiplication to children. In addition to a small humanoid and a smart device delivering educational content, we employ a type of mixed-initiative operation that provides enhanced multi-modal cognition to the r-learning system through human intervention. To investigate the major factors that influence people's intention to use the r-learning system, and to see how multi-modality affects these factors, we performed a user study based on the Technology Acceptance Model (TAM). The results support the conclusion that system quality and natural interaction are key factors in adoption of the r-learning system, and they also reveal interesting implications related to human behavior.