• Title/Summary/Keyword: Robot Soccer System

Search Results: 49

The improvement of MIRAGE I robot system (MIRAGE I 로봇 시스템의 개선)

  • 한국현;서보익;오세종
    • ICROS Conference Proceedings (제어로봇시스템학회 학술대회논문집) / 1997.10a / pp.605-607 / 1997
  • According to the robot control method, the robot systems of the teams participating in MIROSOT can be divided into three categories: the remote brainless system, the vision-based system, and the robot-based system. The MIRAGE I robot control system uses the last of these, the robot-based system, in which the host computer with the vision system transmits only the data on the locations of the ball and the robots. With this robot control method, we took part in MIROSOT '96 and MIROSOT '97.


A Robot Soccer Strategy and Tactic Using Fuzzy Logic (퍼지 로직을 적용한 로봇축구 전략 및 전술)

  • Lee, Jeong-Jun;Ji, Dong-Min;Lee, Won-Chang;Kang, Geun-Taek;Joo, Moon G.
    • Journal of the Korean Institute of Intelligent Systems / v.16 no.1 / pp.79-85 / 2006
  • This paper presents a strategy and tactic for robot soccer using a fuzzy logic mediator that determines a robot's action depending on the positions and roles of the two adjacent robots. The conventional Q-learning algorithm, where the number of states increases exponentially with the number of robots, is not suitable for a robot soccer system, because it requires so much computation that processing cannot be accomplished in real time. A modular Q-learning algorithm reduces the number of states by partitioning the concerned area, with a mediator algorithm used additionally for robot cooperation. The proposed scheme implements the mediator algorithm among robots with a fuzzy logic system, whose simple fuzzy rules keep the computation light and hence suitable for a robot soccer system. A MiroSot simulation shows the feasibility of the proposed scheme.
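
The mediation idea above can be sketched with ordinary fuzzy rules. In this minimal sketch, two nearby robots are assigned attack/support roles from their distances to the ball; the triangular membership functions, distance ranges, and role names are illustrative assumptions, not the paper's actual rule base:

```python
# Minimal fuzzy-mediator sketch: given two adjacent robots' distances to the
# ball (metres), decide which robot attacks and which supports. Membership
# functions and rules here are illustrative, not from the paper.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def near(d):   # degree to which a distance counts as "near"
    return tri(d, -0.5, 0.0, 1.0)

def far(d):    # degree to which a distance counts as "far"
    return tri(d, 0.5, 1.5, 2.5)

def mediate(d1, d2):
    """Return (role of robot 1, role of robot 2).

    Rule 1: if robot 1 is near and robot 2 is far -> robot 1 attacks.
    Rule 2: if robot 2 is near and robot 1 is far -> robot 2 attacks.
    Tie-break: the closer robot attacks.
    """
    r1 = min(near(d1), far(d2))   # firing strength of rule 1
    r2 = min(near(d2), far(d1))   # firing strength of rule 2
    if r1 == r2:                  # neither rule dominates
        return ('attack', 'support') if d1 <= d2 else ('support', 'attack')
    return ('attack', 'support') if r1 > r2 else ('support', 'attack')
```

Because each rule only inspects the two adjacent robots, the rule base stays constant-sized no matter how many robots are on the field, which is the point of mediation versus a monolithic state space.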

Cell-based motion control of mobile robots for soccer game

  • Baek, Seung-Min;Han, Woong-Gie;Kuc, Tae-Yong
    • ICROS Conference Proceedings (제어로봇시스템학회 학술대회논문집) / 1997.10a / pp.819-824 / 1997
  • This paper presents a cell-based motion control strategy for soccer-playing mobile robots. In the central robot motion planner, the planar ground is divided into rectangular cells of variable size, each carrying a motion index indicating the direction in which the mobile robot should move. Every time the objects of interest (the goal gate, the ball, and the robots) are detected, integer motion indices are assigned to the cells occupied by the mobile robots. Once the indices are calculated, the most desirable state-action pair is chosen from the state and action sets to achieve a successful soccer game strategy. The proposed strategy is computationally simple enough for a fast robotic soccer system.

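
The cell-index idea can be sketched as follows, assuming a fixed rectangular grid and an eight-direction integer encoding; the paper itself uses variable-sized cells, so this is only an illustration of the lookup-table flavor of the scheme:

```python
# Cell-based motion sketch: the field is split into a grid of cells and each
# cell gets an integer motion index (0..7, one of eight headings) pointing
# toward the cell holding the ball. A robot only needs to read the index of
# the cell it occupies, which keeps the per-step computation trivial.

def motion_index(cell, ball_cell):
    """Index of the heading from `cell` toward `ball_cell`.

    Headings, counter-clockwise from east:
    0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE.
    """
    dx = ball_cell[0] - cell[0]
    dy = ball_cell[1] - cell[1]
    sx = (dx > 0) - (dx < 0)      # sign of each offset: -1, 0, or +1
    sy = (dy > 0) - (dy < 0)
    table = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
             (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}
    return table.get((sx, sy))    # None when the robot is on the ball cell

def index_map(cols, rows, ball_cell):
    """Motion index for every cell of a cols x rows grid."""
    return {(x, y): motion_index((x, y), ball_cell)
            for x in range(cols) for y in range(rows)}
```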

Research of soccer robot system strategies

  • Sugisaka, Masanori;Kiyomatsu, Toshiro;Hara, Masayoshi
    • ICROS Conference Proceedings (제어로봇시스템학회 학술대회논문집) / 2002.10a / pp.92.4-92 / 2002
  • In this paper, the multiple micro-robot soccer playing system is first introduced as an ideal test bed for studies on multi-agent systems. The construction of such an experimental system involves many kinds of challenges, such as sensor fusion, robot design, vision processing, motion control, and especially the cooperative planning of the robots. In this paper we therefore place the emphasis on how to evolve the system automatically, based on a model of behavior-based learning in the multi-agent domain. We first present this model in general terms and then apply it to the real experimental system. Finally, we give some results showing that the proposed approach is feasible.


Multi-robot control using Petri-net

  • Park, Se-Woong;Kuc, Tae-Yong
    • ICROS Conference Proceedings (제어로봇시스템학회 학술대회논문집) / 2001.10a / pp.59.5-59 / 2001
  • A multi-agent robot system is a system in which several robots execute tasks by cooperating with one another under coordinated control. The capability and function of each robot must be considered for cooperative behavior. Furthermore, it is necessary to analyze the given environment and to decompose a complex task into simpler tasks. Analysis of the given environment and role assignment for the given tasks are composed of discrete events. In this paper, a hierarchical controller for a multi-agent robot system using a Petri-net state diagram is proposed. The proposed modeling method is implemented on a soccer robot system, and its effectiveness is shown through experiment.

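
The discrete-event view above maps naturally onto a Petri net: places hold tokens, and a transition fires when all of its input places are marked, consuming and producing tokens. The sketch below is a generic Petri-net interpreter with a hypothetical soccer-task net; the place and transition names are illustrative, not taken from the paper:

```python
# Minimal Petri-net sketch for discrete-event task sequencing. A transition
# fires only when every input place holds a token; firing consumes one token
# per input place and deposits one per output place.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)          # place -> token count
        self.transitions = {}                 # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            return False
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
        return True

# Hypothetical soccer-task net: a robot may start shooting only after it has
# the ball AND the planner has assigned it the striker role.
net = PetriNet({'has_ball': 1, 'role_striker': 1, 'shooting': 0})
net.add_transition('start_shot', ['has_ball', 'role_striker'], ['shooting'])
```

The same machinery stacks hierarchically: a high-level net can assign roles while per-robot nets sequence the resulting tasks.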

Intelligent Hybrid Modular Architecture for Multi Agent System

  • Lee, Dong-Hun;Baek, Seung-Min;Kuc, Tae-Yong;Chung, Chae-Wook
    • ICROS Conference Proceedings (제어로봇시스템학회 학술대회논문집) / 2004.08a / pp.896-902 / 2004
  • The purpose of the study of multi-robot systems is to make the control of a robot system easy in the case where the robots operate in an environment with a complicated task structure. To make real-time control possible through effective use of the recognized information in this dynamic environment, tasks should be suitably distributed in consideration of the function and role of each performing robot. In this paper, IHMA (Intelligent Hybrid Modular Architecture), a combined intelligent control architecture that exploits the merits of deliberative and reactive controllers, is suggested, and its efficiency is evaluated by applying the control architecture to a representative multi-robot system.

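
The deliberative/reactive split can be sketched in a few lines: a slow planner proposes a motion toward the goal, while a fast reactive layer overrides it whenever an obstacle is too close. The module boundary and safety threshold below are illustrative assumptions, not the IHMA design itself:

```python
# Hybrid deliberative/reactive sketch in the spirit of layered control
# architectures. The reactive layer wins whenever it has something to say;
# otherwise the deliberative plan is followed.
import math

def deliberative(pose, goal):
    """Slow layer: head straight for the planned goal (a motion vector)."""
    return (goal[0] - pose[0], goal[1] - pose[1])

def reactive(pose, obstacle, safe=0.5):
    """Fast layer: if an obstacle is within `safe` metres, steer away."""
    dx, dy = pose[0] - obstacle[0], pose[1] - obstacle[1]
    d = math.hypot(dx, dy)
    if d >= safe or d == 0.0:
        return None                          # no override needed
    return (dx / d, dy / d)                  # unit vector away from obstacle

def hybrid_step(pose, goal, obstacle):
    """Reactive override wins; otherwise follow the deliberative plan."""
    override = reactive(pose, obstacle)
    return override if override is not None else deliberative(pose, goal)
```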

Design and implementation of Robot Soccer Agent Based on Reinforcement Learning (강화 학습에 기초한 로봇 축구 에이전트의 설계 및 구현)

  • Kim, In-Cheol
    • The KIPS Transactions: Part B / v.9B no.2 / pp.139-146 / 2002
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is the machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. The reinforcement learning therefore differs from supervised learning in that there is no presentation of input-output pairs as training examples. Furthermore, model-free reinforcement learning algorithms like Q-learning do not require defining or learning any models of the surrounding environment. Nevertheless these algorithms can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not successfully scale up to more complex environments due to the intractably large space of states. In order to address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them in a more flexible way by assigning a different weight to each module according to its contribution to rewards. Therefore, in addition to resolving the problem of the large state space effectively, AMMQL can show higher adaptability to environmental changes than pure MQL. In this paper we use the AMMQL algorithm as a learning method for dynamic positioning of the robot soccer agent, and implement a robot soccer agent system called Cogitoniks.
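
The modular-with-mediation idea can be sketched as follows: each module keeps its own Q-table over a small local state, the mediator scores actions by a weighted sum of module Q-values, and weights drift toward modules whose greedy choice earned reward. The update rules and constants below are illustrative, not the paper's exact AMMQL formulation:

```python
# Sketch of adaptive mediation over Q-learning modules. Each module sees only
# a small local state, so no module's table blows up with the number of
# robots; the mediator's weights provide the adaptive combination.
from collections import defaultdict

class Module:
    def __init__(self, actions, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)           # (state, action) -> value
        self.actions, self.alpha, self.gamma = actions, alpha, gamma

    def update(self, s, a, r, s2):
        """Standard one-step Q-learning update on this module's local state."""
        best = max(self.q[(s2, b)] for b in self.actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best - self.q[(s, a)])

class Mediator:
    def __init__(self, modules, eta=0.05):
        self.modules = modules
        self.w = [1.0 / len(modules)] * len(modules)  # per-module weights
        self.eta = eta

    def act(self, states):
        """Pick the action with the highest weighted Q-sum.
        `states` holds one local state per module."""
        actions = self.modules[0].actions
        def score(a):
            return sum(w * m.q[(s, a)]
                       for w, m, s in zip(self.w, self.modules, states))
        return max(actions, key=score)

    def reward(self, states, a, r):
        """Credit modules whose own greedy choice matched the taken action."""
        for i, (m, s) in enumerate(zip(self.modules, states)):
            greedy = max(m.actions, key=lambda b: m.q[(s, b)])
            if greedy == a:
                self.w[i] += self.eta * r
        total = sum(self.w) or 1.0
        self.w = [w / total for w in self.w]  # keep weights normalized
```

Because the weights are renormalized after every reward, a module that consistently agrees with rewarded actions gains influence, which is the "adaptive" part distinguishing this from fixed-combination modular Q-learning.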

Dynamic Positioning of Robot Soccer Simulation Game Agents using Reinforcement learning

  • Kwon, Ki-Duk;Cho, Soo-Sin;Kim, In-Cheol
    • Proceedings of the Korea Intelligent Information System Society Conference / 2001.01a / pp.59-64 / 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is the machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. The reinforcement learning therefore differs from supervised learning in that there is no presentation of input-output pairs as training examples. Furthermore, model-free reinforcement learning algorithms like Q-learning do not require defining or learning any models of the surrounding environment. Nevertheless they can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not successfully scale up to more complex environments due to the intractably large space of states. In order to address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them in a more flexible way by assigning a different weight to each module according to its contribution to rewards. Therefore, in addition to resolving the problem of the large state space effectively, AMMQL can show higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents details of its application to dynamic positioning of robot soccer agents.


A Posture Control for Two Wheeled Mobile Robots

  • Shim, Hyun-Sik;Sung, Yoon-Gyeoung
    • Transactions on Control, Automation and Systems Engineering / v.2 no.3 / pp.201-206 / 2000
  • In this paper, a posture control for nonholonomic mobile robots is proposed on an empirical basis. In order to obtain fast, consecutive motions in realistic applications, the motion requirements of a mobile robot are defined. Assuming a velocity controller designed with guidance for the selection of control parameters, the posture control algorithm is presented and experimentally demonstrated to be practical and effective.

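
For context on the posture-control problem, the sketch below shows the classic polar-coordinate posture law for a unicycle-type (two-wheeled differential) robot, driving position and heading to a goal pose simultaneously. This is the standard textbook controller, not necessarily the control law proposed in the paper:

```python
# Classic kinematic posture controller for a unicycle robot: with the goal
# pose at the origin, express the error in polar coordinates (rho, alpha,
# beta) and command v = k_rho*rho, w = k_alpha*alpha + k_beta*beta.
import math

def posture_control(x, y, theta, k_rho=0.9, k_alpha=2.0, k_beta=-0.5):
    """One control step driving pose (x, y, theta) toward the origin pose.

    Returns (v, w): forward and angular velocity commands. Locally stable
    for k_rho > 0, k_beta < 0, k_alpha > k_rho.
    """
    rho = math.hypot(x, y)                    # distance to the goal
    alpha = math.atan2(-y, -x) - theta        # heading error toward the goal
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to (-pi, pi]
    beta = -theta - alpha                     # final-orientation error
    v = k_rho * rho
    w = k_alpha * alpha + k_beta * beta
    return v, w

def simulate(x, y, theta, dt=0.01, steps=3000):
    """Integrate the unicycle kinematics under the controller (Euler step)."""
    for _ in range(steps):
        v, w = posture_control(x, y, theta)
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += w * dt
    return x, y, theta
```

Starting with the goal roughly ahead of the robot (|alpha| < pi/2), the pose converges smoothly to the origin; handling rear approaches typically adds a sign switch on v.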

Learning soccer robot using genetic programming

  • Wang, Xiaoshu;Sugisaka, Masanori
    • ICROS Conference Proceedings (제어로봇시스템학회 학술대회논문집) / 1999.10a / pp.292-297 / 1999
  • Evolving an artificial agent is an extremely difficult, and hence challenging, task. At present, studies mainly center on the single-agent learning problem. In our case, we use simulated soccer to investigate multi-agent cooperative learning. Considering the fundamental differences in learning mechanism, existing reinforcement learning algorithms can be roughly classified into two types: those based on evaluation functions and those that search the policy space directly. Genetic Programming, developed from Genetic Algorithms, is one of the best-known approaches of the latter type. In this paper, we first give a detailed algorithm description as well as the data construction necessary for learning single-agent strategies. In a following step, we extend the developed methods to the multi-robot domain. We investigate and contrast two different methods, simple team learning and sub-group learning, and conclude the paper with some experimental results.

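
A direct policy-space search of the kind Genetic Programming performs can be sketched compactly: candidate policies are expression trees over the agent's inputs, and the population is improved by selection plus random subtree replacement. The toy task below (regressing a "kick strength" from ball distance), the primitive set, and the mutation-only evolutionary loop are illustrative stand-ins, not the paper's actual setup:

```python
# Tiny genetic-programming sketch: policies are expression trees built from
# arithmetic operators and terminals, scored by squared error against a toy
# target policy, and evolved by truncation selection + subtree mutation.
import random

OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}
TERMS = ['x', 1.0, 2.0]          # 'x' is the agent input (ball distance)

def rand_tree(depth=3):
    """Random expression tree of bounded depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    op = random.choice(list(OPS))
    return (op, rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(tree, x):
    """Recursively evaluate a tree at input x."""
    if tree == 'x':
        return x
    if not isinstance(tree, tuple):
        return tree               # numeric constant
    op, a, b = tree
    return OPS[op](evaluate(a, x), evaluate(b, x))

def fitness(tree, cases):
    """Lower is better: squared error against the target policy."""
    return sum((evaluate(tree, x) - y) ** 2 for x, y in cases)

def mutate(tree, depth=2):
    """Replace random subtrees with fresh random ones."""
    if random.random() < 0.2 or not isinstance(tree, tuple):
        return rand_tree(depth)
    return (tree[0], mutate(tree[1]), mutate(tree[2]))

def evolve(cases, pop_size=60, gens=40):
    pop = [rand_tree() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda t: fitness(t, cases))
        keep = pop[:pop_size // 2]            # truncation selection
        pop = keep + [mutate(random.choice(keep)) for _ in keep]
    return min(pop, key=lambda t: fitness(t, cases))

# Toy target policy: kick strength = 2*distance + 1.
cases = [(x, 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
```

A full GP system would add subtree crossover and a richer primitive set; the structure of the loop (evaluate, select, vary) is the same.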