• Title/Abstract/Keyword: robot soccer

Search results: 92 items (processing time: 0.028 s)

베지어 곡선을 이용한 로봇 축구 항법의 개선 (An Improvement of Navigation in Robot Soccer using Bezier Curve)

  • 정태영;이귀형
    • 한국생산제조학회지 / Vol. 24, No. 6 / pp. 696-702 / 2015
  • This paper suggests a new method for generating a navigation path using a Bezier curve, in order to improve the obstacle-avoidance navigation performance during a robot soccer game. We analyzed the advantages and disadvantages of the vector-field and limit-cycle navigation methods, which are the most widely used methods for avoiding obstacles. To overcome their disadvantages, we propose a new design technique that generates a more suitable path using a Bezier curve and describe its advantages. Using computer simulations and experiments, we compare the performance of vector-field navigation with that of Bezier curve navigation. The results show that navigation using the Bezier curve is superior to the other method.
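The abstract above omits implementation details; the sketch below is only an illustration of how an obstacle-avoiding path can be built from a cubic Bezier curve. The control-point placement, the clearance value, and all coordinates are assumptions, not the authors' design.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Sample a cubic Bezier curve defined by four 2-D control points."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def avoidance_path(robot, ball, obstacle, clearance=0.25):
    """Bend the path around the obstacle by placing both inner control
    points a fixed clearance to one side of it (illustrative heuristic)."""
    robot, ball, obstacle = (np.asarray(p, dtype=float) for p in (robot, ball, obstacle))
    direction = ball - robot
    normal = np.array([-direction[1], direction[0]])
    normal /= np.linalg.norm(normal) + 1e-9
    side = obstacle + clearance * normal   # assumed: always pass on the same side
    return cubic_bezier(robot, side, side, ball)

path = avoidance_path(robot=(0.0, 0.0), ball=(1.8, 0.9), obstacle=(0.9, 0.45))
```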

로봇축구에서의 칼만필터를 이용한 공의 경로 추정 (Prediction of Ball Trajectory in Robot Soccer Using Kalman Filter)

  • 이진희;박태현;강근택;이원창
    • 대한전기학회:학술대회논문집 / 대한전기학회 1999년도 하계학술대회 논문집 G / pp. 2998-3000 / 1999
  • Robot soccer is a challenging research area in which multiple robots collaborate in an adversarial environment to achieve specific objectives. We designed and built the robotic agents for robot soccer, specifically for MIROSOT. We have been developing an appropriate vision algorithm, an algorithm for ball tracking and prediction, and algorithms for collaboration between the robots in an uncertain dynamic environment. In this work we focus on the development of a ball tracking and prediction algorithm using a Kalman filter. The robustness and feasibility of the proposed algorithm are demonstrated by simulation.
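As a companion to the abstract above, here is a generic constant-velocity Kalman filter for tracking a ball's position from noisy vision measurements. The state layout, frame rate, and noise covariances are assumptions; the paper's actual model may differ.

```python
import numpy as np

dt = 1.0 / 30.0                       # vision frame period (assumed 30 fps)
F = np.array([[1, 0, dt, 0],          # state transition for [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],           # only the ball position is observed
              [0, 1, 0, 0]], float)
Q = 1e-3 * np.eye(4)                  # process noise (tuning assumption)
R = 1e-2 * np.eye(2)                  # measurement noise (tuning assumption)

x = np.zeros(4)                       # state estimate
P = np.eye(4)                         # estimate covariance

def kalman_step(z):
    """One predict/update cycle for a new ball measurement z = (x, y)."""
    global x, P
    x = F @ x                         # predict
    P = F @ P @ F.T + Q
    y = np.asarray(z, float) - H @ x  # update with the measurement residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x[:2], x[2:]               # filtered position and velocity
```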


Cell-based motion control of mobile robots for soccer game

  • Baek, Seung-Min;Han, Woong-Gie;Kuc, Tae-Yong
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 1997년도 한국자동제어학술회의논문집; 한국전력공사 서울연수원; 17-18 Oct. 1997 / pp. 819-824 / 1997
  • This paper presents a cell-based motion control strategy for soccer-playing mobile robots. In the central robot motion planner, the planar ground is divided into rectangular cells of variable size, each carrying a motion index indicating the direction in which the mobile robot should move. Every time the multiple objects (the goal gate, the ball, and the robots) are detected, integer motion indices are assigned to the cells occupied by the mobile robots. Once the indices have been calculated, the most desirable state-action pair is chosen from the state and action sets to realize a successful soccer game strategy. The proposed strategy is computationally simple enough to be used for a fast robotic soccer system.
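To make the cell/index idea concrete, the following sketch divides the field into a grid and stores in each cell an integer index for the direction toward the ball. The field size, grid resolution, and eight-direction encoding are assumptions for illustration only.

```python
import numpy as np

FIELD_W, FIELD_H = 2.2, 1.8            # field size in metres (assumed)
COLS, ROWS = 11, 9                     # cell resolution (assumed)

def cell_of(pos):
    """Map a field position (x, y) to (row, col) cell coordinates."""
    col = min(int(pos[0] / FIELD_W * COLS), COLS - 1)
    row = min(int(pos[1] / FIELD_H * ROWS), ROWS - 1)
    return row, col

def motion_indices(ball):
    """Assign each cell an integer index 0..7 pointing toward the ball."""
    grid = np.zeros((ROWS, COLS), dtype=int)
    for r in range(ROWS):
        for c in range(COLS):
            cx = (c + 0.5) * FIELD_W / COLS
            cy = (r + 0.5) * FIELD_H / ROWS
            angle = np.arctan2(ball[1] - cy, ball[0] - cx)
            grid[r, c] = int(np.round(angle / (np.pi / 4))) % 8
    return grid

grid = motion_indices(ball=(1.5, 0.9))
print(grid[cell_of((0.3, 0.4))])       # index the robot in that cell should follow
```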


Development of soccer-playing robots using visual tracking

  • Park, Sung-Wook;Kim, Eun-Hee;Kim, Do-Hyun;Oh, Jun-Ho
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 1997년도 한국자동제어학술회의논문집; 한국전력공사 서울연수원; 17-18 Oct. 1997 / pp. 617-620 / 1997
  • We have built a robot soccer system to participate in MIROSOT97. This paper presents the hardware specifications of our system and our strategy. We selected a centralized on-line system for the soccer game. The paper explains the hardware specifications of our system for later development. It also explains our strategy from two viewpoints. From the viewpoint of cooperation, some heuristic ideas are implemented. From the viewpoint of path planning, a cubic spline is used with a cost function that minimizes time, radius of curvature (for smoothness), and an obstacle potential field. A direct comparison will be realized at MIROSOT97.
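The path-planning idea above (a cubic spline scored by a cost combining time, curvature, and obstacle potential) could look roughly like the sketch below. The waypoint parameterization, cost weights, and potential function are assumptions, not the authors' formulation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def path_cost(pts, obstacles, w_len=1.0, w_curv=0.5, w_obs=2.0):
    """Cost = path length (proxy for time) + heading change + obstacle potential."""
    d = np.diff(pts, axis=0)
    length = np.sum(np.linalg.norm(d, axis=1))
    heading = np.arctan2(d[:, 1], d[:, 0])
    curvature = np.sum(np.abs(np.diff(np.unwrap(heading))))
    potential = sum(np.sum(1.0 / (np.linalg.norm(pts - np.asarray(o), axis=1) + 1e-2))
                    for o in obstacles)
    return w_len * length + w_curv * curvature + w_obs * potential

def best_spline(start, goal, obstacles, offsets=(-0.4, -0.2, 0.0, 0.2, 0.4)):
    """Try several mid-waypoint offsets and keep the cheapest spline path."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    mid = 0.5 * (start + goal)
    normal = np.array([-(goal - start)[1], (goal - start)[0]])
    normal /= np.linalg.norm(normal) + 1e-9
    s = np.linspace(0.0, 1.0, 50)
    candidates = []
    for off in offsets:
        waypoints = np.vstack([start, mid + off * normal, goal])
        pts = CubicSpline([0.0, 0.5, 1.0], waypoints)(s)
        candidates.append((path_cost(pts, obstacles), pts))
    return min(candidates, key=lambda c: c[0])[1]

path = best_spline(start=(0.0, 0.0), goal=(2.0, 1.0), obstacles=[(1.0, 0.5)])
```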


퍼지 로직을 적용한 로봇축구 전략 및 전술 (A Robot Soccer Strategy and Tactic Using Fuzzy Logic)

  • 이정준;지동민;이원창;강근택;주문갑
    • 한국지능시스템학회논문지 / Vol. 16, No. 1 / pp. 79-85 / 2006
  • This paper proposes a robot soccer strategy and tactic that uses a fuzzy-logic mediator to decide each robot's behavior based on the positions and roles of two adjacent robots. In the conventional Q-learning algorithm, the number of states grows exponentially with the number of robots, so it requires too much computation to be suitable for a robot soccer system that must run in real time. The modular Q-learning algorithm reduces the number of states by partitioning the region of concern, but it requires a separate mediator algorithm for cooperation among the robots. In the proposed method, the mediator algorithm for cooperation among robots is implemented with fuzzy rules; because the fuzzy rules are simple, the computational load is small and the method is suitable for real-time robot soccer. The feasibility of the proposed method is shown through MiroSot simulation.
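A minimal illustration of a fuzzy mediator of the kind described above is sketched below: membership functions over each robot's distance to the ball and two simple rules decide which of two adjacent robots attacks. The membership shapes and rules are assumed, not taken from the paper.

```python
def near(d, lo=0.2, hi=0.8):
    """Membership of 'near the ball': 1 when very close, 0 when far (assumed shape)."""
    if d <= lo:
        return 1.0
    if d >= hi:
        return 0.0
    return (hi - d) / (hi - lo)

def mediate(dist_a, dist_b):
    """Fuzzy rules: the robot that is 'near' while the other is not attacks;
    the other supports. Ties default to robot A."""
    mu_a, mu_b = near(dist_a), near(dist_b)
    attack_a = min(mu_a, 1.0 - mu_b)   # rule strength for 'A attacks'
    attack_b = min(mu_b, 1.0 - mu_a)   # rule strength for 'B attacks'
    return "A attacks, B supports" if attack_a >= attack_b else "B attacks, A supports"

print(mediate(dist_a=0.3, dist_b=0.9))   # -> A attacks, B supports
```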

로봇축구를 위한 제어변수의 자동설정 방법 (An Automatic Setting Method of Control Parameters for Robot Soccer)

  • 박효근;이정환;박세훈;박세현
    • 한국정보통신학회:학술대회논문집 / 한국해양정보통신학회 2004년도 춘계종합학술대회 / pp. 599-602 / 2004
  • This paper studies a method for automatically setting the initial patch values used to identify robots in robot soccer, together with control parameters that track changes in illumination. First, to set the patch values automatically, the local patch region to be found is acquired and sampled as RGB values, and a gradient operator is applied to obtain the gradient value of each pixel. The valid patch region and its YUV values are then obtained from the gradient values. In addition, the luminance component of the YUV values is measured to set the control parameters according to the change in illumination. The proposed method was applied to robot soccer images to set the initial patch values, and it was shown that patch values can be detected adaptively under illumination changes during a game.
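The procedure above (sample a patch, reject high-gradient pixels, derive YUV statistics, and adapt a tolerance to the measured luminance) might be sketched as follows. The gradient threshold and the luminance-dependent tolerance are illustrative assumptions; only the BT.601 RGB-to-YUV weights are standard.

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert an (H, W, 3) RGB patch to YUV using BT.601 weights."""
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.147, -0.289,  0.436],
                  [ 0.615, -0.515, -0.100]])
    return rgb @ m.T

def valid_patch_mask(gray, grad_thresh=20.0):
    """Keep pixels whose gradient magnitude is low (inside the uniform patch)."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy) < grad_thresh

def patch_parameters(rgb_patch):
    """Sample the patch, reject edge pixels, and return the mean YUV plus a
    luminance-dependent colour tolerance (both forms are assumptions)."""
    yuv = rgb_to_yuv(rgb_patch.astype(float))
    mask = valid_patch_mask(yuv[..., 0])
    mean_yuv = yuv[mask].mean(axis=0)
    tolerance = 10.0 + 0.05 * mean_yuv[0]   # widen tolerance under brighter lighting
    return mean_yuv, tolerance
```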


Adaptive Modular Q-Learning for Agents' Dynamic Positioning in Robot Soccer Simulation

  • Kwon, Ki-Duk;Kim, In-Cheol
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2001년도 ICCAS / pp. 149.5-149 / 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment. Nevertheless ...
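For reference, a textbook tabular Q-learning loop with epsilon-greedy action selection is sketched below; the positioning actions, learning rate, and discount factor are assumptions and do not reproduce the paper's modular scheme (see the AMMQL entry further down).

```python
import random
from collections import defaultdict

Q = defaultdict(float)                      # Q[(state, action)] -> value
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1       # tuning assumptions
ACTIONS = ["hold", "advance", "retreat", "spread"]   # assumed positioning actions

def choose_action(state):
    """Epsilon-greedy selection over the discretised positioning actions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Model-free Q-learning backup: no model of the environment is needed."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```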


A Miniature Humanoid Robot That Can Play Soccer

  • Lim, Seon-Ho;Cho, Jeong-San;Sung, Young-Whee;Yi, Soo-Yeong
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2003년도 ICCAS / pp. 628-632 / 2003
  • An intelligent miniature humanoid robot system is designed and implemented as a platform for researching walking algorithms. The robot system consists of a mechanical robot body, a control system, a sensor system, and a human interface system. The robot has 6 DOFs per leg, 3 DOFs per arm, and 2 DOFs for the neck, for a total of 20 DOFs and dexterous motion capability. In the control system, a supervisory controller runs on a remote host computer to plan high-level robot actions based on the vision sensor data, a main controller implemented with a DSP chip generates walking trajectories for the robot to perform the commanded action, and an auxiliary controller implemented with an FPGA chip controls the 20 actuators. The robot has three types of sensors: a two-axis acceleration sensor and eight force-sensing resistors that acquire information on the walking status of the robot, and a color CCD camera that acquires information on the surroundings. As an example of an intelligent robot action, experiments on playing soccer are performed.
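The three-level architecture described above (host-side supervisor, DSP main controller, FPGA joint controller) can be summarized schematically as below; the class names, commands, and placeholder trajectories are purely illustrative assumptions.

```python
class Supervisor:
    """Host PC level: turns vision data into a high-level command."""
    def plan(self, ball_pos, robot_pos):
        return "kick" if ball_pos == robot_pos else "walk_to_ball"

class MainController:
    """DSP level: expands a high-level command into joint trajectories."""
    def trajectories(self, command, n_joints=20):
        # placeholder: one target angle per joint for the commanded action
        return [0.0] * n_joints if command == "kick" else [0.1] * n_joints

class AuxiliaryController:
    """FPGA level: drives the 20 actuators toward the commanded angles."""
    def drive(self, joint_targets):
        for joint, angle in enumerate(joint_targets):
            pass  # write the position/PWM command for this joint here

supervisor, main, aux = Supervisor(), MainController(), AuxiliaryController()
aux.drive(main.trajectories(supervisor.plan(ball_pos=(1, 0), robot_pos=(0, 0))))
```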


UBA-Sot: An Approach to Control and Team Strategy in Robot Soccer

  • Santos, Juan-Miguel;Scolnik, Hugo-Daniel;Ignacio Laplagne;Sergio Daicz;Flavio Scarpettini;Hector Fassi;Claudia Castelo
    • International Journal of Control, Automation, and Systems / Vol. 1, No. 1 / pp. 149-155 / 2003
  • In this study, we introduce the main ideas behind the control and strategy used by the robot soccer team of the Universidad de Buenos Aires, UBA-Sot. The basis of our approach is to obtain a cooperative behavior that emerges from homogeneous sets of individual behaviors. Except for the goalkeeper, the behavior set of each robot contains a small number of individual behaviors. Basically, the individual behaviors share the same core: moving from the initial toward the target coordinates. However, these individual behaviors differ because each one has a different precondition associated with it. Each precondition is a combination of a number of elementary preconditions. The aim of our approach is to answer the following questions: How can the robot compute the preconditions in time? How are the control actions defined that allow the robot to move from the initial toward the final coordinates? We cope with these issues, on the one hand, by using ball and robot predictors and, on the other hand, by using very fast planning. Our proposal is to use planning in such a way that the resulting behavior is closer to a reactive one than to a deliberative one. Simulations and experiments on real robots based on this approach have so far given encouraging results.
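A small sketch of precondition-gated behaviors sharing a common "move to target" core, as described above, is given below. The behavior names, elementary predicates, and world-state keys are assumptions for illustration.

```python
# Elementary preconditions, each cheap enough to evaluate every control cycle.
def ball_ahead(world):   return world["ball_x"] > world["robot_x"]
def path_clear(world):   return not world["opponent_between"]
def in_own_half(world):  return world["robot_x"] < 0.0

BEHAVIOURS = [
    # (name, combined precondition, target chooser) -- every core is "move to target"
    ("shoot", lambda w: ball_ahead(w) and path_clear(w), lambda w: (w["goal_x"], w["goal_y"])),
    ("clear", lambda w: in_own_half(w),                  lambda w: (w["ball_x"], w["ball_y"])),
    ("block", lambda w: True,                            lambda w: (w["ball_x"], w["robot_y"])),
]

def select_behaviour(world):
    """Return the first behaviour whose precondition holds and its target coordinates."""
    for name, precondition, target in BEHAVIOURS:
        if precondition(world):
            return name, target(world)
```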

Reinforcement Learning Approach to Agents' Dynamic Positioning in Robot Soccer Simulation Games

  • Kwon, Ki-Duk;Kim, In-Cheol
    • 한국시뮬레이션학회:학술대회논문집 / 한국시뮬레이션학회 2001년도 The Seoul International Simulation Conference / pp. 321-324 / 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms like Q-learning do not require defining or learning any model of the surrounding environment. Nevertheless, they can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not successfully scale up to more complex environments, due to the intractably large space of states. In order to address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement on the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning a different weight to each module according to its contribution to the reward. Therefore, in addition to resolving the problem of large state spaces effectively, AMMQL can show higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents details of its application to the dynamic positioning of robot soccer agents.
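To illustrate the difference the abstract draws between fixed-weight MQL and adaptive AMMQL, the sketch below keeps one Q-table per module and mixes their values with weights that drift toward modules receiving more reward. The exact weight-update rule here is an assumption, not the paper's algorithm.

```python
from collections import defaultdict

class Module:
    """One learning module with its own Q-table and an adaptable mixing weight."""
    def __init__(self):
        self.Q = defaultdict(float)   # Q[(local_state, action)]
        self.weight = 1.0             # contribution weight, adapted from reward

def combined_q(modules, local_states, action):
    """Weighted combination of module Q-values (MQL would use fixed, equal weights)."""
    total_w = sum(m.weight for m in modules) or 1.0
    return sum(m.weight * m.Q[(s, action)]
               for m, s in zip(modules, local_states)) / total_w

def adapt_weights(modules, module_rewards, lr=0.05):
    """Shift weight toward modules whose local reward signal has been larger."""
    for m, r in zip(modules, module_rewards):
        m.weight = max(0.0, m.weight + lr * r)
```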
