• Title/Summary/Keyword: robot soccer


An Improvement of Navigation in Robot Soccer using Bezier Curve (베지어 곡선을 이용한 로봇 축구 항법의 개선)

  • Jung, Tae-Young;Lee, Gui-Hyung
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.24 no.6 / pp.696-702 / 2015
  • This paper suggests a new method for generating a navigation path using a Bezier curve, in order to improve the navigation performance used to avoid obstacles during a robot soccer game. We analyzed the advantages and disadvantages of both the vector-field and limit-cycle navigation methods, which are the most widely used navigation methods for avoiding obstacles. To overcome the disadvantages of these methods, we propose a new design technique for generating a more suitable path using a Bezier curve and describe its advantages. Using computer simulations and experiments, we compare the performance of vector-field navigation with that of Bezier-curve navigation. The results show that navigation using a Bezier curve is superior to the other method.
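
The cubic Bezier path generation the abstract describes can be sketched as below. The control-point placement (`c1`, `c2`) is an illustrative assumption for bending the path around a single obstacle, not the paper's actual design rule:

```python
def bezier_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    s = 1.0 - t
    x = s**3 * p0[0] + 3 * s**2 * t * p1[0] + 3 * s * t**2 * p2[0] + t**3 * p3[0]
    y = s**3 * p0[1] + 3 * s**2 * t * p1[1] + 3 * s * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# Start at the robot, end at the ball; the two inner control points are
# hypothetical offsets chosen to pull the curve clear of an obstacle at (1, 0).
start, goal = (0.0, 0.0), (2.0, 0.0)
c1, c2 = (0.5, 1.0), (1.5, 1.0)
path = [bezier_point(start, c1, c2, goal, i / 10) for i in range(11)]
```

Sampling at increasing t gives a smooth path that interpolates the start and goal exactly, which is the property the paper exploits for obstacle avoidance.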

Prediction of Ball Trajectory in Robot Soccer Using Kalman Filter (로봇축구에서의 칼만필터를 이용한 공의 경로 추정)

  • Lee, Jin-Hee;Park, Tae-Hyun;Kang, Geun-Taek;Lee, Won-Chang
    • Proceedings of the KIEE Conference / 1999.07g / pp.2998-3000 / 1999
  • Robot soccer is a challenging research area in which multiple robots collaborate in an adversarial environment to achieve specific objectives. We designed and built the robotic agents for robot soccer, especially MIROSOT. We have been developing an appropriate vision algorithm, an algorithm for ball tracking and prediction, and algorithms for collaboration between the robots in an uncertain dynamic environment. In this work we focus on the development of a ball tracking and prediction algorithm using a Kalman filter. The robustness and feasibility of the proposed algorithm are demonstrated by simulation.
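
A minimal constant-velocity Kalman filter on the ball's x-coordinate, as a sketch of the technique the abstract names; the paper does not state its state model or noise settings, so `dt`, `q` (process noise), and `r` (measurement noise) here are illustrative:

```python
def kf_predict(x, P, dt=1.0, q=0.01):
    """Predict step for state x = (position, velocity); P is a 2x2 covariance as a 4-tuple."""
    pos, vel = x
    p00, p01, p10, p11 = P
    x_pred = (pos + dt * vel, vel)                 # F = [[1, dt], [0, 1]]
    P_pred = (p00 + dt * (p01 + p10) + dt * dt * p11 + q,
              p01 + dt * p11,
              p10 + dt * p11,
              p11 + q)                             # F P F^T + q*I
    return x_pred, P_pred

def kf_update(x, P, z, r=0.1):
    """Update step with a position-only measurement z (H = [1, 0])."""
    pos, vel = x
    p00, p01, p10, p11 = P
    s = p00 + r                    # innovation covariance
    k0, k1 = p00 / s, p10 / s      # Kalman gain
    y = z - pos                    # innovation
    x_new = (pos + k0 * y, vel + k1 * y)
    P_new = ((1 - k0) * p00, (1 - k0) * p01,
             p10 - k1 * p00, p11 - k1 * p01)       # (I - K H) P
    return x_new, P_new

# Track a ball rolling at 1 unit per frame from noiseless position measurements.
x, P = (0.0, 0.0), (1.0, 0.0, 0.0, 1.0)
for frame in range(1, 11):
    x, P = kf_predict(x, P)
    x, P = kf_update(x, P, z=float(frame))
```

After a few frames the filter's velocity estimate converges near the true 1 unit/frame, which is what makes the predicted trajectory usable for interception.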


Cell-based motion control of mobile robots for soccer game

  • Baek, Seung-Min;Han, Woong-Gie;Kuc, Tae-Yong
    • Institute of Control, Robotics and Systems Conference Proceedings / 1997.10a / pp.819-824 / 1997
  • This paper presents a cell-based motion control strategy for soccer-playing mobile robots. In the central robot motion planner, the planar ground is divided into rectangular cells of variable size, with motion indices indicating the direction in which the mobile robot should move. Each time the multiple objects (the goal gate, ball, and robots) are detected, integer motion-index values are assigned to the cells occupied by the mobile robots. Once the indices are calculated, the most desirable state-action pair is chosen from the state and action sets to achieve a successful soccer game strategy. The proposed strategy is computationally simple enough to be used for a fast robotic soccer system.
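
The paper's exact cell sizing and index assignment are not reproduced here; this sketch assigns each grid cell an 8-direction motion index pointing toward the cell containing the ball, which a robot occupying that cell would follow:

```python
def sign(a):
    """-1, 0, or +1 depending on the sign of a."""
    return (a > 0) - (a < 0)

def motion_indices(rows, cols, ball):
    """Map every cell to a (drow, dcol) step toward the ball cell."""
    br, bc = ball
    return {(r, c): (sign(br - r), sign(bc - c))
            for r in range(rows) for c in range(cols)}

# A 4x6 field with the ball in cell (1, 4); indices are recomputed
# whenever the detected object positions change.
field = motion_indices(4, 6, ball=(1, 4))
```

The per-cell lookup is constant-time, matching the abstract's claim of computational simplicity.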


Development of soccer-playing robots using visual tracking

  • Park, Sung-Wook;Kim, Eun-Hee;Kim, Do-Hyun;Oh, Jun-Ho
    • Institute of Control, Robotics and Systems Conference Proceedings / 1997.10a / pp.617-620 / 1997
  • We have built a robot soccer system to participate in MIROSOT97. This paper presents the hardware specification of our system and our strategy. We selected a centralized on-line system for the soccer game. The paper explains the hardware specifications of our system for later development. It also explains our strategy from two viewpoints. From the viewpoint of cooperation, some heuristic ideas are implemented. From the viewpoint of path planning, a cubic spline is used with a cost function that minimizes time, radius of curvature (for smoothness), and an obstacle potential field. A direct comparison will be realized at MIROSOT97.
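
One common interpolating cubic, a Catmull-Rom spline segment, can stand in for the paper's cubic-spline path (the cost-function weighting over time, curvature, and obstacle potential is not reproduced here):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Point on the Catmull-Rom cubic segment between p1 and p2, t in [0, 1].
    p0 and p3 are neighboring waypoints that shape the tangents."""
    def axis(a, b, c, d):
        return 0.5 * ((2 * b) + (-a + c) * t
                      + (2 * a - 5 * b + 4 * c - d) * t * t
                      + (-a + 3 * b - 3 * c + d) * t ** 3)
    return (axis(p0[0], p1[0], p2[0], p3[0]),
            axis(p0[1], p1[1], p2[1], p3[1]))

# Smooth segment between waypoints (1, 0) and (2, 1) of a longer path.
mid = catmull_rom((0, 0), (1, 0), (2, 1), (3, 1), 0.5)
```

The segment interpolates its two interior waypoints exactly and is C1-continuous across segments, which is the smoothness property the cost function targets.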


A Robot Soccer Strategy and Tactic Using Fuzzy Logic (퍼지 로직을 적용한 로봇축구 전략 및 전술)

  • Lee, Jeong-Jun;Ji, Dong-Min;Lee, Won-Chang;Kang, Geun-Taek;Joo, Moon G.
    • Journal of the Korean Institute of Intelligent Systems / v.16 no.1 / pp.79-85 / 2006
  • This paper presents a strategy and tactics for robot soccer using a fuzzy logic mediator that determines a robot's action depending on the positions and roles of the two adjacent robots. The conventional Q-learning algorithm, where the number of states increases exponentially with the number of robots, is not suitable for a robot soccer system, because it requires so much calculation that processing cannot be accomplished in real time. A modular Q-learning algorithm reduces the number of states by partitioning the concerned area, with a mediator algorithm used additionally for cooperation between robots. The proposed scheme implements the mediator algorithm among robots with a fuzzy logic system, whose simple fuzzy rules make the calculation easy and hence suitable for a robot soccer system. A MiroSot simulation shows the feasibility of the proposed scheme.
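
The paper's actual rule base is not reproduced here; this sketch shows the mediator idea with one illustrative "near the ball" membership function and a two-robot role decision:

```python
def near(d, scale=100.0):
    """Fuzzy membership of 'near the ball': 1 at the ball, 0 beyond scale."""
    return max(0.0, 1.0 - d / scale)

def mediate(d1, d2):
    """Fuzzy mediator for two adjacent robots: the robot whose 'near'
    rule fires more strongly attacks; the other takes a support role."""
    w1, w2 = near(d1), near(d2)
    return ("attack", "support") if w1 >= w2 else ("support", "attack")

roles = mediate(20.0, 60.0)   # robot 1 is much closer to the ball
```

Evaluating two membership functions per decision is cheap enough for the real-time constraint the abstract emphasizes.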

An Automatic Setting Method of Control Parameters for Robot Soccer (로봇축구를 위한 제어변수의 자동설정 방법)

  • 박효근;이정환;박세훈;박세현
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2004.05b / pp.599-602 / 2004
  • In this paper, an automatic setting method of control parameters for robot soccer is proposed. First we acquired some local image regions including the robots and the ball patch, and sampled the regions to RGB values. An edge operator is applied to obtain the magnitude of the gradient at each pixel. Effective patch regions can be detected from the gradient magnitude, and the YUV value at each pixel in the patch regions can be obtained. We can determine the control parameters of robot soccer using the luminance component of YUV. The proposed method is applied to robot soccer images to detect initial patch values and obtain control parameters adaptively under light variation.
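
The two image steps the abstract names can be sketched as follows: RGB to the Y (luminance) component, and a per-pixel gradient magnitude. The 3x3 Sobel kernels below are a standard choice; the paper does not say which edge operator it used:

```python
def luminance(r, g, b):
    """BT.601 luma (the Y of YUV) for an RGB pixel."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def gradient_magnitude(img, x, y):
    """Sobel gradient magnitude at interior pixel (x, y) of a 2-D list img[row][col]."""
    gx = (img[y-1][x+1] + 2 * img[y][x+1] + img[y+1][x+1]
          - img[y-1][x-1] - 2 * img[y][x-1] - img[y+1][x-1])
    gy = (img[y+1][x-1] + 2 * img[y+1][x] + img[y+1][x+1]
          - img[y-1][x-1] - 2 * img[y-1][x] - img[y-1][x+1])
    return (gx * gx + gy * gy) ** 0.5

# A vertical step edge: large horizontal gradient at the boundary column.
img = [[0, 0, 10, 10],
       [0, 0, 10, 10],
       [0, 0, 10, 10]]
mag = gradient_magnitude(img, 1, 1)
```

Thresholding the gradient magnitude locates candidate patch boundaries; the Y values inside those regions then drive the parameter setting under changing light.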


Adaptive Modular Q-Learning for Agents' Dynamic Positioning in Robot Soccer Simulation

  • Kwon, Ki-Duk;Kim, In-Cheol
    • Institute of Control, Robotics and Systems Conference Proceedings / 2001.10a / pp.149.5-149 / 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is machine learning in which an agent learns, from indirect, delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that there is no presentation of input-output pairs as training examples. Furthermore, model-free reinforcement learning algorithms like Q-learning do not require defining or learning any model of the surrounding environment. Nevertheless ...
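
The model-free Q-learning the abstract describes reduces to one textbook update rule; the toy 1-D chain below is an illustration, not the paper's positioning task:

```python
import random

def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)) -- no environment model needed."""
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Chain of states 0..3; reward 1 on reaching the terminal state 3.
random.seed(0)
states, actions = range(4), ("left", "right")
Q = {s: {a: 0.0 for a in actions} for s in states}

for _ in range(200):
    s = 0
    while s != 3:
        a = random.choice(actions)                         # pure exploration
        s_next = min(3, s + 1) if a == "right" else max(0, s - 1)
        r = 1.0 if s_next == 3 else 0.0
        q_update(Q, s, a, r, s_next)
        s = s_next
```

Because every state-action pair is visited repeatedly, the learned values come to prefer moving toward the reward, which is exactly the "visit every state-action pair" condition the abstract's continuation alludes to.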


A Miniature Humanoid Robot That Can Play Soccer

  • Lim, Seon-Ho;Cho, Jeong-San;Sung, Young-Whee;Yi, Soo-Yeong
    • Institute of Control, Robotics and Systems Conference Proceedings / 2003.10a / pp.628-632 / 2003
  • An intelligent miniature humanoid robot system is designed and implemented as a platform for researching walking algorithms. The robot system consists of a mechanical robot body, a control system, a sensor system, and a human interface system. The robot has 6 dofs per leg, 3 dofs per arm, and 2 dofs for the neck, for a total of 20 dofs, giving it dexterous motion capability. For the control system, a supervisory controller runs on a remote host computer to plan high-level robot actions based on the vision sensor data, a main controller implemented with a DSP chip generates walking trajectories for the robot to perform the commanded action, and an auxiliary controller implemented with an FPGA chip controls the 20 actuators. The robot has three types of sensors: a two-axis acceleration sensor and eight force-sensing resistors for acquiring information on the walking status of the robot, and a color CCD camera for acquiring information on the surroundings. As an example of an intelligent robot action, some experiments on playing soccer are performed.


UBA-Sot : An Approach to Control and Team Strategy in Robot Soccer

  • Santos, Juan-Miguel;Scolnik, Hugo-Daniel;Ignacio Laplagne;Sergio Daicz;Flavio Scarpettini;Hector Fassi;Claudia Castelo
    • International Journal of Control, Automation, and Systems / v.1 no.1 / pp.149-155 / 2003
  • In this study, we introduce the main ideas on the control and strategy used by the robot soccer team of the Universidad de Buenos Aires, UBA-Sot. The basis of our approach is to obtain a cooperative behavior that emerges from homogeneous sets of individual behaviors. Except for the goalkeeper, the behavior set of each robot contains a small number of individual behaviors. Basically, the individual behaviors share the same core: to move from the initial toward the target coordinates. However, these individual behaviors differ because each one has a different precondition associated with it. Each precondition is a combination of a number of elementary ones. The aim of our approach is to answer the following questions: How can the robot compute the preconditions in time? How are the control actions defined that allow the robot to move from the initial toward the final coordinates? The way we cope with these issues is, on the one hand, to use ball and robot predictors and, on the other hand, to use very fast planning. Our proposal is to use planning in such a way that the behavior obtained is closer to a reactive than to a deliberative one. Simulations and experiments on real robots based on this approach have so far given encouraging results.
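
The behavior-set idea, a shared "drive toward target" core differentiated only by preconditions, can be sketched as below; the predicate and behavior names are illustrative, not UBA-Sot's:

```python
def make_behavior(name, precondition, target_of):
    """A behavior = a precondition plus a rule for choosing the target coordinates."""
    return {"name": name, "pre": precondition, "target": target_of}

def select_behavior(behaviors, world):
    """Run the first behavior whose precondition holds in the current world state;
    the shared core then drives the robot toward the returned target."""
    for b in behaviors:
        if b["pre"](world):
            return b["name"], b["target"](world)
    return "idle", world["robot"]

behaviors = [
    make_behavior("shoot", lambda w: w["has_ball"], lambda w: w["goal"]),
    make_behavior("fetch", lambda w: not w["has_ball"], lambda w: w["ball"]),
]

world = {"has_ball": False, "ball": (3, 2), "goal": (5, 0), "robot": (0, 0)}
action, target = select_behavior(behaviors, world)
```

Because selection is a short scan of cheap predicates, the resulting behavior stays closer to reactive than deliberative, as the abstract intends.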

Reinforcement Learning Approach to Agents Dynamic Positioning in Robot Soccer Simulation Games

  • Kwon, Ki-Duk;Kim, In-Cheol
    • Proceedings of the Korea Society for Simulation Conference / 2001.10a / pp.321-324 / 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is machine learning in which an agent learns, from indirect, delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that there is no presentation of input-output pairs as training examples. Furthermore, model-free reinforcement learning algorithms like Q-learning do not require defining or learning any model of the surrounding environment. Nevertheless, Q-learning can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not scale up to more complex environments, owing to the intractably large space of states. To address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement on the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning a different weight to each module according to its contribution to rewards. Therefore, in addition to resolving the problem of large state spaces effectively, AMMQL can show higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents details of its application to the dynamic positioning of robot soccer agents.
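
AMMQL's key difference from fixed modular Q-learning, per the abstract, is that module outputs are combined with weights tied to each module's reward contribution. The weighting scheme below (weights nudged toward each module's normalized reward credit) is an illustrative stand-in, not the authors' exact rule:

```python
def combine(module_qs, weights):
    """Weighted sum of per-module Q-values, giving one blended Q-value per action."""
    actions = module_qs[0].keys()
    return {a: sum(w * q[a] for w, q in zip(weights, module_qs))
            for a in actions}

def update_weights(weights, reward_credit, lr=0.1):
    """Shift each module's weight toward its share of recent reward credit."""
    total = sum(reward_credit) or 1.0
    return [w + lr * (c / total - w) for w, c in zip(weights, reward_credit)]

# Two modules disagree about which action is best; module 1 has earned
# more reward credit, so its weight grows.
m1 = {"pass": 1.0, "shoot": 0.0}
m2 = {"pass": 0.0, "shoot": 1.0}
blended = combine([m1, m2], [0.5, 0.5])
new_w = update_weights([0.5, 0.5], reward_credit=[3.0, 1.0])
```

In fixed MQL the weights would stay at their initial values; letting them track reward credit is what gives AMMQL its claimed adaptability to environmental change.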
