• Title/Summary/Keyword: robot soccer

Search results: 92

A Hierarchical Controller for Soccer Robots (축구로봇을 위한 계층적 제어기)

  • Lee, In-Jae;Baek, Seung-Min;Sohn, Kyung-Oh;Kuc, Tae-Yong
    • Journal of Institute of Control, Robotics and Systems / v.6 no.9 / pp.803-812 / 2000
  • In this paper we introduce a model-based, centralized hierarchical controller for a cooperative team of soccer-playing mobile robots. The hierarchical controller is composed of high-level and low-level controllers. Using the coordinate information of objects obtained from the vision system, the high-level controller maintains simple models of the multiple mobile robots on the playground. It then selects an action model corresponding to the perceived state transition model and generates a subgoal and goal velocity, from which the low-level controller generates the wheel-velocity trajectory of each robot. This two-layered structure keeps the overall controller simple. The feasibility of the control strategy has been demonstrated in an implementation for real soccer games in the MiroSot league. (A minimal two-layer control-loop sketch in Python follows this entry.)

  • PDF
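
A minimal Python sketch of the two-layer idea described above: a high-level routine that turns vision coordinates into a subgoal and goal velocity, and a low-level routine that turns these into left/right wheel velocities. The class names, gains, and wheel geometry are illustrative assumptions, not the authors' implementation.

```python
import math

# Sketch of a two-layer (high-level / low-level) soccer-robot controller.
# The action model, gains, and wheel geometry are illustrative assumptions.

WHEEL_BASE = 0.07  # distance between wheels [m], assumed


class HighLevelController:
    """Maps perceived positions (from vision) to a subgoal and goal velocity."""

    def decide(self, robot_pose, ball_pos, goal_pos):
        # Trivial "action model": approach the ball, then push toward the goal.
        rx, ry, _ = robot_pose
        bx, by = ball_pos
        if math.hypot(bx - rx, by - ry) > 0.10:
            subgoal = ball_pos          # go to the ball first
        else:
            subgoal = goal_pos          # then push toward the opponent goal
        goal_speed = 0.5                # m/s, assumed constant goal velocity
        return subgoal, goal_speed


class LowLevelController:
    """Turns a subgoal and goal velocity into left/right wheel velocities."""

    def wheel_velocities(self, robot_pose, subgoal, goal_speed, k_turn=2.0):
        rx, ry, heading = robot_pose
        sx, sy = subgoal
        desired = math.atan2(sy - ry, sx - rx)
        err = math.atan2(math.sin(desired - heading),
                         math.cos(desired - heading))   # wrap to [-pi, pi]
        omega = k_turn * err                             # P-control on heading
        v_left = goal_speed - omega * WHEEL_BASE / 2.0
        v_right = goal_speed + omega * WHEEL_BASE / 2.0
        return v_left, v_right


# One control cycle: vision coordinates in, wheel velocities out.
high, low = HighLevelController(), LowLevelController()
subgoal, speed = high.decide((0.0, 0.0, 0.0), (0.5, 0.2), (1.1, 0.0))
print(low.wheel_velocities((0.0, 0.0, 0.0), subgoal, speed))
```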

The improvement of MIRAGE I robot system (MIRAGE I 로봇 시스템의 개선)

  • 한국현;서보익;오세종
    • Institute of Control, Robotics and Systems Conference Proceedings / 1997.10a / pp.605-607 / 1997
  • According to the method of robot control, the robot systems of the teams participating in MIROSOT can be divided into three categories: the remote brainless system, the vision-based system, and the robot-based system. The MIRAGE I control system uses the last one, the robot-based system, in which the host computer with the vision system transmits only the locations of the ball and the robots. Based on this control method, we took part in MIROSOT '96 and MIROSOT '97. (A small sketch of this robot-based data flow follows this entry.)

  • PDF
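
A small sketch of the robot-based division of labor described above, in which the host transmits only object locations and decision making happens on the robot. The message layout and field names below are assumptions for illustration; the paper does not specify them.

```python
from dataclasses import dataclass

# Sketch of the "robot-based" control split: the host sends only locations,
# and each robot computes its own behavior on board. Field names/units are assumed.


@dataclass
class VisionPacket:
    ball_xy: tuple          # (x, y) of the ball in field coordinates
    robot_xy: dict          # robot id -> (x, y, heading)


def on_board_decision(my_id: int, packet: VisionPacket) -> str:
    """Runs on the robot itself: decide an action from locations alone."""
    bx, by = packet.ball_xy
    x, y, _ = packet.robot_xy[my_id]
    return "chase_ball" if abs(bx - x) + abs(by - y) > 0.1 else "shoot"


packet = VisionPacket(ball_xy=(0.4, -0.1),
                      robot_xy={1: (0.0, 0.0, 0.0), 2: (0.6, 0.3, 1.57)})
print(on_board_decision(1, packet))
```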

Design and implementation of Robot Soccer Agent Based on Reinforcement Learning (강화 학습에 기초한 로봇 축구 에이전트의 설계 및 구현)

  • Kim, In-Cheol
    • The KIPS Transactions: Part B / v.9B no.2 / pp.139-146 / 2002
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is a machine learning paradigm in which an agent learns, from indirect and delayed reward, an optimal policy for choosing the sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment; nevertheless, they can learn the optimal policy if the agent can visit every state-action pair infinitely often. The biggest problem of monolithic reinforcement learning, however, is that straightforward applications do not scale up to more complex environments because of the intractably large state space. To address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning a different weight to each module according to its contribution to the reward. In addition to handling the large state space effectively, AMMQL therefore shows higher adaptability to environmental changes than pure MQL. In this paper we use the AMMQL algorithm as the learning method for dynamic positioning of the robot soccer agent, and implement a robot soccer agent system called Cogitoniks.
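
The weighted, adaptive combination of learning modules that distinguishes AMMQL from plain modular Q-learning can be illustrated with a minimal sketch. The weight-update rule, action set, and module structure below are assumptions for illustration, not the published AMMQL algorithm.

```python
import random
from collections import defaultdict

# Illustrative sketch of combining modular Q-learners under adaptive mediation,
# in the spirit of AMMQL as described above. The weight-update rule and module
# structure are assumptions, not the published algorithm.

ACTIONS = ["up", "down", "left", "right", "stay"]


class QModule:
    """One learning module (e.g. a ball-related or opponent-related sub-state)."""

    def __init__(self, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)           # (state, action) -> value
        self.alpha, self.gamma = alpha, gamma

    def update(self, s, a, r, s_next):
        best_next = max(self.q[(s_next, a2)] for a2 in ACTIONS)
        target = r + self.gamma * best_next
        self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])

    def values(self, s):
        return {a: self.q[(s, a)] for a in ACTIONS}


class AdaptiveMediator:
    """Combines module Q-values with weights tracking each module's reward share."""

    def __init__(self, modules):
        self.modules = modules
        self.weights = [1.0 / len(modules)] * len(modules)

    def select_action(self, states, epsilon=0.1):
        # states: one (sub-)state per module
        if random.random() < epsilon:
            return random.choice(ACTIONS)
        combined = {a: sum(w * m.values(s)[a]
                           for w, m, s in zip(self.weights, self.modules, states))
                    for a in ACTIONS}
        return max(combined, key=combined.get)

    def update_weights(self, credited_rewards):
        # Shift weight toward modules credited with more reward (assumed rule).
        total = sum(credited_rewards) or 1.0
        self.weights = [0.9 * w + 0.1 * (r / total)
                        for w, r in zip(self.weights, credited_rewards)]


# Usage: two modules (e.g. ball position and nearest-opponent position).
mediator = AdaptiveMediator([QModule(), QModule()])
print(mediator.select_action(states=("near_ball", "opponent_far")))
```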

Design of Behavior-based Soccer Robot (행위 기반 제어에 의한 축구로봇 설계)

  • Kim, Jong-Woo;Sung, Young-Hwe;Choi, Han-Go
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2005.11a / pp.365-368 / 2005
  • This paper describes the implementation of autonomous motion in a small-sized robot. Traditional environment modeling and motion planning have limitations in adapting to environmental change and in real-time operation. To overcome these limitations, we designed a behavior-based control algorithm and applied it to robot soccer. Experiments verify that the behavior-based control algorithm works well under changing environments. (A minimal behavior-arbitration sketch follows this entry.)

  • PDF
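
A minimal sketch of priority-based behavior arbitration, the usual core of a behavior-based controller. The behavior set and trigger conditions are assumed for illustration; the paper does not list its behaviors here.

```python
# Minimal sketch of priority-based behavior arbitration for a soccer robot.
# The behavior set and trigger conditions are illustrative assumptions.

def avoid_collision(sensors):
    if sensors["obstacle_distance"] < 0.05:
        return "turn_away"
    return None                      # behavior not triggered


def chase_ball(sensors):
    if not sensors["has_ball"]:
        return "move_to_ball"
    return None


def shoot(sensors):
    return "kick_toward_goal"        # default behavior, always applicable


# Highest-priority behavior first; the first one that triggers wins.
BEHAVIORS = [avoid_collision, chase_ball, shoot]


def arbitrate(sensors):
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command


print(arbitrate({"obstacle_distance": 0.2, "has_ball": False}))  # move_to_ball
```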

Research of soccer robot system strategies

  • Sugisaka, Masanori;Kiyomatsu, Toshiro;Hara, Masayoshi
    • Institute of Control, Robotics and Systems Conference Proceedings / 2002.10a / pp.92.4-92 / 2002
  • In this paper, the multiple micro-robot soccer-playing system is first introduced as an ideal test bed for studies on multi-agent systems. The construction of such an experimental system involves many kinds of challenges, such as sensor fusion, robot design, vision processing, motion control, and especially the cooperation planning of the robots. We therefore emphasize how to evolve the system automatically based on a model of behavior-based learning in the multi-agent domain. We first present the model in general terms and then apply it to the real experimental system. Finally, we give some results showing that the proposed approach is feasible.

  • PDF

Multi-robot control using Petri-net

  • Park, Se-Woong;Kuc, Tae-Yong
    • Institute of Control, Robotics and Systems Conference Proceedings / 2001.10a / pp.59.5-59 / 2001
  • A multi-agent robot system is a system in which several robots are controlled and cooperate to execute a task. The capability and function of each robot must be considered for cooperative behavior. Furthermore, it is necessary to analyze the given environment and to decompose a complex task into simpler tasks. Analysis of the given environment and role assignment for the given tasks are composed of discrete events. In this paper, a hierarchical controller for a multi-agent robot system using a Petri-net state diagram is proposed. The proposed modeling method is implemented for a soccer robot system, and its effectiveness is shown through experiments. (A toy Petri-net sketch follows this entry.)

  • PDF
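
A toy sketch of role assignment modeled as a Petri net, i.e. as a discrete-event system in which transitions fire when their input places hold tokens. The specific places and transitions are assumptions for illustration, not the authors' model.

```python
# Toy Petri-net model of role assignment as a discrete-event system.
# Places hold tokens (current conditions); a transition fires when all of its
# input places are marked. The places/transitions chosen here are assumptions.

marking = {"ball_in_our_half": 1, "robot_idle": 1,
           "defending": 0, "attacking": 0}

TRANSITIONS = {
    "assign_defender": {"inputs": ["ball_in_our_half", "robot_idle"],
                        "outputs": ["defending"]},
    "assign_attacker": {"inputs": ["robot_idle"],
                        "outputs": ["attacking"]},
}


def enabled(name):
    return all(marking[p] > 0 for p in TRANSITIONS[name]["inputs"])


def fire(name):
    assert enabled(name), f"transition {name} is not enabled"
    for p in TRANSITIONS[name]["inputs"]:
        marking[p] -= 1
    for p in TRANSITIONS[name]["outputs"]:
        marking[p] += 1


if enabled("assign_defender"):        # ball is in our half and a robot is free
    fire("assign_defender")
print(marking)                        # 'defending' now holds a token
```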

Dynamic Positioning of Robot Soccer Simulation Game Agents using Reinforcement learning

  • Kwon, Ki-Duk;Cho, Soo-Sin;Kim, In-Cheol
    • Proceedings of the Korea Intelligent Information System Society Conference / 2001.01a / pp.59-64 / 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is a machine learning paradigm in which an agent learns, from indirect and delayed reward, an optimal policy for choosing the sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment; nevertheless, they can learn the optimal policy if the agent can visit every state-action pair infinitely often. The biggest problem of monolithic reinforcement learning, however, is that straightforward applications do not scale up to more complex environments because of the intractably large state space. To address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning a different weight to each module according to its contribution to the reward. In addition to handling the large state space effectively, AMMQL therefore shows higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents the details of its application to dynamic positioning of robot soccer agents.

  • PDF

Learning soccer robot using genetic programming

  • Wang, Xiaoshu;Sugisaka, Masanori
    • Institute of Control, Robotics and Systems Conference Proceedings / 1999.10a / pp.292-297 / 1999
  • Evolving artificial agents is an extremely difficult but challenging task. At present, studies have mainly centered on the single-agent learning problem. In our case, we use simulated soccer to investigate multi-agent cooperative learning. Considering the fundamental differences in learning mechanism, existing reinforcement learning algorithms can be roughly classified into two types: those based on evaluation functions and those that search the policy space directly. Genetic Programming, developed from Genetic Algorithms, is one of the best known approaches of the latter type. In this paper, we first give a detailed algorithm description as well as the data construction necessary for learning single-agent strategies. We then extend the developed methods to the multiple-robot domain. We investigate and contrast two different methods, simple team learning and sub-group learning, and conclude the paper with some experimental results. (A tiny genetic-programming sketch follows this entry.)

  • PDF
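
A tiny sketch of a genetic-programming loop of the kind described above, evolving strategy trees built from condition and action primitives. The primitive set, the placeholder fitness function, and all parameters are assumptions for illustration; in practice fitness would come from playing simulated games.

```python
import random

# Tiny sketch of a genetic-programming loop for evolving a soccer strategy.
# Primitive set, fitness, and parameters are illustrative assumptions.

PRIMITIVES = ["if_ball_near", "if_teammate_open"]   # condition nodes (2 children)
TERMINALS = ["shoot", "pass", "dribble", "defend"]  # action leaves


def random_tree(depth=2):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return [random.choice(PRIMITIVES), random_tree(depth - 1), random_tree(depth - 1)]


def mutate(tree):
    # Replace the whole tree with a fresh one 20% of the time (crude mutation).
    return random_tree() if random.random() < 0.2 else tree


def fitness(tree):
    # Placeholder: a real system would score the tree by simulated match results.
    return random.random()


population = [random_tree() for _ in range(20)]
for generation in range(10):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:10]                          # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]

print(scored[0])   # best strategy tree found (under the placeholder fitness)
```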

Intelligent Hybrid Modular Architecture for Multi Agent System

  • Lee, Dong-Hun;Baek, Seung-Min;Kuc, Tae-Yong;Chung, Chae-Wook
    • Institute of Control, Robotics and Systems Conference Proceedings / 2004.08a / pp.896-902 / 2004
  • The purpose of studying multi-robot systems is to make the control of the robot system easy when the robots must adapt to a complicated task environment. To make real-time control possible by making effective use of the recognized information in such a dynamic environment, tasks should be distributed suitably, taking into account the function and role of each participating robot. In this paper, IHMA (Intelligent Hybrid Modular Architecture), a combined control architecture that utilizes the merits of deliberative and reactive controllers, is suggested, and its efficiency is evaluated by applying the architecture to a representative multi-robot system. (A minimal hybrid deliberative/reactive sketch follows this entry.)

  • PDF
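
A minimal sketch of a hybrid control loop in the spirit of the deliberative/reactive combination described above: a slow planner chooses a role from the world model, and a fast reactive layer may override it. The module names and override rule are assumptions, not the authors' IHMA.

```python
# Sketch of a hybrid control loop combining a deliberative planner with a
# reactive layer. The module names and override rule are illustrative assumptions.

class DeliberativePlanner:
    """Slow layer: plans a role/target from the global world model."""

    def plan(self, world):
        return "attack" if world["ball_x"] > 0 else "defend"


class ReactiveLayer:
    """Fast layer: can override the plan when an urgent condition occurs."""

    def react(self, world, planned_role):
        if world["obstacle_distance"] < 0.05:
            return "evade"            # safety reflex overrides the plan
        return planned_role


planner, reflexes = DeliberativePlanner(), ReactiveLayer()
world = {"ball_x": 0.3, "obstacle_distance": 0.2}
role = reflexes.react(world, planner.plan(world))
print(role)                           # "attack" unless a reflex fires
```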