• Title/Summary/Keyword: Multi-Modular Robot

Intelligent Hybrid Modular Architecture for Multi Agent System

  • Lee, Dong-Hun;Baek, Seung-Min;Kuc, Tae-Yong;Chung, Chae-Wook
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2004.08a
    • /
    • pp.896-902
    • /
    • 2004
  • The purpose of multi-robot system research is to make the robot system easy to control when robots are deployed in complicated task environments. To make real-time control possible by effective use of the information recognized in such a dynamic environment, tasks should be distributed appropriately according to the function and role of each participating robot. This paper proposes IHMA (Intelligent Hybrid Modular Architecture), an intelligent combined control architecture that exploits the merits of both deliberative and reactive controllers, and evaluates its efficiency by applying the architecture to a representative multi-robot system.
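
The paper itself gives no code, but the core hybrid idea (a fast reactive layer that can override a slower deliberative planner) can be sketched minimally as below. All function names and the safety threshold are illustrative assumptions, not from the paper:

```python
# Minimal sketch of a hybrid deliberative/reactive controller in the
# spirit of IHMA: a deliberative planner proposes a goal-directed step,
# but a reactive module overrides it when sensor readings demand it.
# All names and thresholds here are illustrative.

def deliberative_plan(goal, position):
    """Slow, model-based layer: step one grid unit toward the goal."""
    dx = goal[0] - position[0]
    dy = goal[1] - position[1]
    step = (1 if dx > 0 else -1 if dx < 0 else 0,
            1 if dy > 0 else -1 if dy < 0 else 0)
    return ("move", step)

def reactive_override(sensor_distance, safety_margin=0.5):
    """Fast, sensor-driven layer: stop if an obstacle is too close."""
    if sensor_distance < safety_margin:
        return ("stop", (0, 0))
    return None  # no override needed

def hybrid_step(goal, position, sensor_distance):
    """The reactive layer takes priority over the deliberative plan."""
    override = reactive_override(sensor_distance)
    return override if override is not None else deliberative_plan(goal, position)
```

With a clear path, `hybrid_step((5, 5), (0, 0), 2.0)` follows the plan; with an obstacle at 0.2 units the reactive layer wins and the robot stops.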

Mobile Robot Control by MNN Using Optimal EN

  • Choi, Woo-Kyung;Kim, Seong-Joo;Seo, Jae-Yong;Jeon, Hong-Tae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.13 no.2
    • /
    • pp.186-191
    • /
    • 2003
  • Tracing skills of a mobile robot (MR) divide into following, approaching, avoiding, warning, and so on. It is difficult to have a single neural network learn all of these skills. To make up for this, each skill was built as a separate module, and the mobile robot was controlled by the output of the module best suited to the situation. The mobile robot was equipped with multiple ultrasonic sensors and a USB camera, which stand in for human senses, and the measured environment data were learned through a Modular Neural Network (MNN). The MNN uses an optimal combination of activation functions in its Expert Networks (EN), a structure that improves learning time and error. A Gating Network (GN) controls the output values of the MNN by switching between the angle and speed commands for the robot. In this paper, the EN of the MNN was designed with this optimal combination. Repeated driving runs with a real mobile robot verified the usefulness of the proposed MNN: the robot was properly controlled and driven by the resulting output values, and the experiments produced good results.
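
As a rough illustration of the expert-plus-gating structure described above, the sketch below blends the (angle, speed) proposals of several expert modules using softmax weights from a gating score. The function names and the (angle, speed) output format are assumptions for illustration; the paper's actual networks are learned, not hand-coded:

```python
import math

# Sketch of the MNN idea: several expert modules each propose an
# (angle, speed) command, and a gating network weights them by how
# well each expert fits the current situation. Illustrative only.

def softmax(scores):
    """Turn raw gating scores into weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # shift for stability
    total = sum(exps)
    return [e / total for e in exps]

def combine_experts(expert_outputs, gating_scores):
    """Blend each expert's (angle, speed) by its gating weight."""
    weights = softmax(gating_scores)
    angle = sum(w * a for w, (a, _) in zip(weights, expert_outputs))
    speed = sum(w * s for w, (_, s) in zip(weights, expert_outputs))
    return angle, speed
```

With equal gating scores the experts are averaged; as one score dominates, the blended command approaches that expert's proposal, which is the switching behavior the GN provides.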

Performance Evaluation of Multi-Hop Communication Based on a Mobile Multi-Robot System in a Subterranean Laneway

  • Liu, Qing-Ling;Oh, Duk-Hwan
    • Journal of Information Processing Systems
    • /
    • v.8 no.3
    • /
    • pp.471-482
    • /
    • 2012
  • For disaster exploration and surveillance applications, this paper presents a novel multi-robot agent application based on a WSN and evaluates the multi-hop communication the robots rely on in an uncertain, unknown subterranean tunnel. A Primary-Scout Multi-Robot System (PS-MRS) is proposed. A chain topology in the subterranean environment is implemented with a trimmed ZigBee2006 protocol stack to build the multi-hop communication network, and the ZigBee IC-CC2530 modular circuit is mounted on the PS-MRS. A physical experiment based on the PS-MRS strategy evaluates the efficiency of multi-hop communication and realizes the delivery of data packets in an unknown and uncertain underground laboratory environment.
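
The chain topology described above can be pictured as hop-by-hop forwarding: each scout node relays a packet to its immediate neighbor until the destination is reached. The sketch below is a toy model of that routing, not the ZigBee stack itself; the node names are invented:

```python
# Toy model of packet forwarding over a chain topology like the one
# PS-MRS builds underground: a packet travels node by node along the
# chain from source to destination. Node ids are illustrative.

def relay_path(chain, src, dst):
    """Return the ordered list of nodes a packet visits along the chain."""
    i, j = chain.index(src), chain.index(dst)
    step = 1 if j >= i else -1
    path = []
    k = i
    while k != j:
        path.append(chain[k])
        k += step  # hand off to the next neighbor in the chain
    path.append(chain[j])
    return path

def hop_count(chain, src, dst):
    """Number of radio hops the delivery needs."""
    return len(relay_path(chain, src, dst)) - 1
```

In a chain, latency and loss grow with hop count, which is why the paper evaluates multi-hop efficiency experimentally as the scouts spread down the laneway.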

Dynamic Positioning of Robot Soccer Simulation Game Agents using Reinforcement learning

  • Kwon, Ki-Duk;Cho, Soo-Sin;Kim, In-Cheol
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2001.01a
    • /
    • pp.59-64
    • /
    • 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that there is no presentation of input-output pairs as training examples. Furthermore, model-free reinforcement learning algorithms like Q-learning do not require defining or learning any model of the surrounding environment. Nevertheless, they can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not scale up to more complex environments because of the intractably large state space. To address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement on the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning a different weight to each module according to its contribution to rewards. In addition to handling the large state space effectively, AMMQL can therefore show higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents details of its application to the dynamic positioning of robot soccer agents.
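
The contrast the abstract draws (fixed combination in plain MQL versus adaptive weighting in AMMQL) can be sketched as below. The Q-table layout, weight-update rule, and learning rate are invented for illustration; they are the general flavor of weighted modular Q-learning, not the paper's exact mediation scheme:

```python
# Sketch of the AMMQL idea: each learning module keeps its own Q-values,
# and a mediator combines them with adaptive weights rather than the
# fixed sum of plain modular Q-learning. Details are illustrative.

def select_action(q_tables, weights, state, actions):
    """Pick the action maximizing the weighted sum of module Q-values."""
    def combined_q(a):
        return sum(w * q.get((state, a), 0.0)
                   for w, q in zip(weights, q_tables))
    return max(actions, key=combined_q)

def update_weights(weights, contributions, lr=0.1):
    """Shift weight toward modules that contributed more to the reward."""
    total = sum(contributions) or 1.0  # avoid dividing by zero
    new = [w + lr * (c / total - w) for w, c in zip(weights, contributions)]
    s = sum(new)
    return [w / s for w in new]  # keep weights normalized
```

With fixed equal weights this reduces to plain MQL; letting the weights track reward contributions is what gives the adaptive behavior the abstract claims.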

Reinforcement Learning Approach to Agents Dynamic Positioning in Robot Soccer Simulation Games

  • Kwon, Ki-Duk;Kim, In-Cheol
    • Proceedings of the Korea Society for Simulation Conference
    • /
    • 2001.10a
    • /
    • pp.321-324
    • /
    • 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that there is no presentation of input-output pairs as training examples. Furthermore, model-free reinforcement learning algorithms like Q-learning do not require defining or learning any model of the surrounding environment. Nevertheless, they can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not scale up to more complex environments because of the intractably large state space. To address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement on the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning a different weight to each module according to its contribution to rewards. In addition to handling the large state space effectively, AMMQL can therefore show higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents details of its application to the dynamic positioning of robot soccer agents.

Development of Multi-Link Mobile Robot for Rough Road Driving

  • Paek, Ryu-Gwang;Han, Kyong-Ho;Shin, In-Chul
    • Journal of IKEEE
    • /
    • v.14 no.2
    • /
    • pp.58-63
    • /
    • 2010
  • In this paper, a multi-modular robot with a structure similar to an arthropod was designed and implemented for driving on rocky paths. Each module corresponds to one arthropod joint and carries an independent power supply and controller, including a drive unit and short-range ZigBee wireless communication. Over various directions and paths every module shares the same driving direction, and wireless communication controls whether each module is active. Depending on path conditions each module calculates its speed and torque, and depending on the slope of a rough path the number of active modules can be changed for efficient driving over a variety of road conditions. The variable multi-module robot was implemented and tested by driving through a rough-road model.
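
The slope-dependent engagement described above can be sketched as a simple policy: engage more modules as the grade steepens, and split the gravity-load torque among the active ones. The per-module threshold, module count, and geometry below are invented numbers, not the paper's design values:

```python
import math

# Illustrative sketch of the scheme described above: as the slope of
# the path increases, more modules are switched on, and the required
# drive torque is shared among the active modules. All thresholds and
# dimensions here are invented for illustration.

def active_modules(slope_deg, total=4, per_module_deg=10.0):
    """Engage one extra module per `per_module_deg` of slope."""
    needed = 1 + int(slope_deg // per_module_deg)
    return min(total, max(1, needed))

def torque_per_module(mass_kg, slope_deg, wheel_radius_m, n_active, g=9.81):
    """Gravity-load torque (N*m) shared equally by the active modules."""
    load = mass_kg * g * math.sin(math.radians(slope_deg))
    return load * wheel_radius_m / n_active
```

Doubling the active modules halves the torque each drive must supply, which is the efficiency argument for varying the module count with slope.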

Tracing Algorithm for Intelligent Snake-like Robot System

  • Choi, Woo-Kyung;Kim, Seong-Joo;Jeon, Hong-Tae
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2005.06a
    • /
    • pp.486-491
    • /
    • 2005
  • Various types of robot have emerged from mobile-robot research. This paper introduces a multi-joint snake robot with 16 degrees of freedom composed of eight axes. A biological snake moves using forward friction; the proposed artificial snake robot instead uses unpowered wheels on its body. To determine the enabled angle of each joint, the controller takes color and distance inputs from a PC camera and an ultrasonic sensor module, respectively. The snake robot moves by propagating motion sequentially from head to tail through the body. The movement target is set by an object detected in the PC camera image, and while moving toward that target the snake robot can avoid any obstacle by itself. In this paper, the snake robot's target-tracing method is demonstrated by experiment.
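
The head-to-tail propagation described above is commonly realized as a traveling sinusoid: each joint follows the previous one with a fixed phase lag, and a steering bias bends the whole body toward the camera-detected target. The sketch below uses that standard serpentine parameterization; the amplitude, frequency, and phase-lag values are invented, not the paper's:

```python
import math

# Sketch of head-to-tail joint propagation for an 8-axis snake robot:
# a sinusoidal reference travels down the body (one fixed phase lag
# per joint), and a steering bias turns the robot toward the target.
# All parameter values are illustrative.

def joint_angles(t, n_joints=8, amplitude=30.0, phase_lag=math.pi / 4,
                 omega=2.0, steer_bias=0.0):
    """Commanded angle (degrees) for each joint at time t."""
    return [amplitude * math.sin(omega * t - i * phase_lag) + steer_bias
            for i in range(n_joints)]
```

A nonzero `steer_bias` offsets every joint the same way, curving the body's mean line toward the target while the wave keeps propagating, which is how tracing and locomotion combine.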

Adaptive Modular Q-Learning for Agents' Dynamic Positioning in Robot Soccer Simulation

  • Kwon, Ki-Duk;Kim, In-Cheol
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2001.10a
    • /
    • pp.149.5-149
    • /
    • 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that there is no presentation of input-output pairs as training examples. Furthermore, model-free reinforcement learning algorithms like Q-learning do not require defining or learning any model of the surrounding environment. Nevertheless ...

Research about Intelligent Snake Robot

  • Kim, Seong-Joo;Kim, Jong-Soo;Jeon, Hong-Tae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.13 no.1
    • /
    • pp.70-75
    • /
    • 2003
  • Various types of robot have emerged from mobile-robot research. This paper introduces a multi-joint snake robot with 16 degrees of freedom composed of eight axes. A biological snake moves using forward friction; the proposed artificial snake robot instead uses unpowered wheels on its body. To determine the enabled angle of each joint, the controller takes color and distance inputs from a PC camera and an ultrasonic sensor module, respectively. The snake robot moves by propagating motion sequentially from head to tail through the body. The movement target is set by an object detected in the PC camera image, and while moving toward that target the snake robot can avoid any obstacle by itself. In this paper, the snake robot's target-tracing method is demonstrated by experiment.