• Title/Abstract/Keywords: multi-agent Q-learning

Search results: 30

Avoidance Behavior of Small Mobile Robots based on the Successive Q-Learning

  • Kim, Min-Soo
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • ICROS 2001 ICCAS
    • /
    • pp.164.1-164
    • /
    • 2001
  • Q-learning is a reinforcement learning algorithm that requires no model of the environment, which makes it a suitable approach for learning the behaviors of autonomous agents. When applied to multi-agent learning with many input/output states, however, it usually becomes too complex and slow. To overcome this problem in multi-agent learning systems, we propose the successive Q-learning algorithm. Successive Q-learning divides the state-action pairs an agent can have into several Q-functions, thereby reducing complexity and the amount of computation. The algorithm is suitable for multi-agent learning in a dynamically changing environment. The proposed successive Q-learning algorithm is applied to the prey-predator problem with one prey and two predators, and its effectiveness is verified by the efficient avoidance ability of the prey agent. (A minimal sketch of this decomposition follows below.)

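A minimal Python sketch of the decomposition idea above: one small Q-table per observed state feature (e.g., one per predator) instead of a single table over the joint state. The class name, the additive combination of per-feature values, the successive per-feature update, and all hyperparameters are illustrative assumptions, not the paper's implementation.

```python
import random
from collections import defaultdict

class SuccessiveQLearner:
    """One small Q-function per state feature instead of a single
    Q-function over the joint state (assumed reading of the paper)."""

    def __init__(self, n_features, actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.actions = list(actions)
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.q = [defaultdict(float) for _ in range(n_features)]

    def value(self, state, action):
        # Total Q-value is the sum over the per-feature Q-functions.
        return sum(q[(f, action)] for q, f in zip(self.q, state))

    def act(self, state):
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.value(state, a))

    def update(self, state, action, reward, next_state):
        target = reward + self.gamma * max(
            self.value(next_state, a) for a in self.actions)
        for q, f in zip(self.q, state):
            # Each small Q-function is updated in turn toward the shared
            # target; later updates see the corrections already made.
            q[(f, action)] += self.alpha * (target - self.value(state, action))
```

With one prey and two predators, `state` could be `((dx1, dy1), (dx2, dy2))`, so each Q-function only ever indexes one predator's relative position, keeping every table small.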

Multi-agent Q-learning based Admission Control Mechanism in Heterogeneous Wireless Networks for Multiple Services

  • Chen, Jiamei;Xu, Yubin;Ma, Lin;Wang, Yao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 7, No. 10
    • /
    • pp.2376-2394
    • /
    • 2013
  • In order to ensure both overall system capacity and users' QoS requirements in heterogeneous wireless networks, the admission control mechanism should be well designed. In this paper, a Multi-agent Q-learning based Admission Control Mechanism (MQACM) is proposed to handle new-call and handoff-call access appropriately. MQACM obtains the optimal decision policy by using an improved form of the single-agent Q-learning method, the Multi-agent Q-learning (MQ) method. The MQ method is newly introduced in this paper to solve the admission control problem in heterogeneous wireless networks. In addition, different priorities are allocated to multiple services so that MQACM performs well even in congested network scenarios. Both analysis and simulation results show that the proposed method not only outperforms existing schemes in call blocking probability and handoff dropping probability, but also has better network universality and stability. (A rough illustrative sketch follows below.)
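
As a rough illustration of the mechanism above, the sketch below shows one tabular Q-learning admission agent; in the paper's multi-agent setting one such agent would run per access network, with calls routed by comparing their Q-values. The state encoding, the reward weights encoding service priorities (handoff drops cost more than new-call blocks), and all hyperparameters are assumptions, not MQACM's actual formulation.

```python
import random
from collections import defaultdict

class AdmissionAgent:
    """Q-learning admission controller for one access network.
    State: e.g. (discretized load level, call type); actions: admit/reject."""

    ACTIONS = ("admit", "reject")

    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        if random.random() < self.eps:
            return random.choice(self.ACTIONS)
        return max(self.ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.ACTIONS)
        self.q[(state, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(state, action)])

def reward(action, call_type, overloaded):
    # Assumed priority scheme: dropping a handoff call is penalized more
    # than blocking a new call; admitting into an overloaded cell is worst.
    if action == "admit":
        return -5.0 if overloaded else 1.0
    return -2.0 if call_type == "handoff" else -1.0
```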

Q-learning for Intersection Traffic Flow Control Based on Agents

  • 주선;정길도
    • The Institute of Electronics Engineers of Korea (IEEK): Conference Proceedings
    • /
    • IEEK 2009 Information and Control Symposium Proceedings
    • /
    • pp.94-96
    • /
    • 2009
  • In this paper, we present a Q-learning method for adaptive traffic signal control based on multi-agent technology. The structure is composed of six phase agents and one intersection agent. A wireless communication network provides the possibility of cooperation among the agents. Q-learning, a form of reinforcement learning, is adopted as the control mechanism because it can acquire optimal control strategies from delayed rewards; furthermore, we adopt a dynamic learning method instead of a static one, which is more practical. Simulation results indicate that it is more effective than a traditional signal system. (A minimal sketch of such an agent follows below.)

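The abstract above names the architecture (six phase agents plus one intersection agent) without detail, so this sketch compresses it into a single tabular Q-learning intersection agent: the state is the discretized queue length per approach, the action is the next green phase, and the delayed reward is the reduction in total queue. The discretization, reward shape, and hyperparameters are assumptions.

```python
import random
from collections import defaultdict

N_PHASES = 6

class IntersectionAgent:
    """Tabular Q-learning over discretized queue lengths.
    Action = index of the phase to serve green next."""

    def __init__(self, alpha=0.1, gamma=0.95, eps=0.1):
        self.q = defaultdict(float)
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    @staticmethod
    def discretize(queues, bucket=5):
        # Coarse buckets keep the state space tractable.
        return tuple(q // bucket for q in queues)

    def act(self, queues):
        s = self.discretize(queues)
        if random.random() < self.eps:
            return random.randrange(N_PHASES)
        return max(range(N_PHASES), key=lambda a: self.q[(s, a)])

    def update(self, queues, phase, next_queues):
        s, s2 = self.discretize(queues), self.discretize(next_queues)
        r = sum(queues) - sum(next_queues)  # fewer queued cars => positive
        best = max(self.q[(s2, a)] for a in range(N_PHASES))
        self.q[(s, phase)] += self.alpha * (
            r + self.gamma * best - self.q[(s, phase)])
```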

A Performance Improvement Technique for Nash Q-learning using Macro-Actions

  • 성연식;조경은;엄기현
    • Journal of Korea Multimedia Society
    • /
    • Vol. 11, No. 3
    • /
    • pp.353-363
    • /
    • 2008
  • In single-agent environments, the learning time of Q-learning is reduced by propagating learning results or by forming sequences of actions into patterns. In multi-agent environments, learning takes longer than in single-agent environments because the dynamic environment and the states of many agents must be considered. This paper proposes a method to reduce Q-learning time and improve performance in multi-agent environments by applying macro-actions, which build policies from finite sequences of actions to shorten learning in single-agent settings, to Nash Q-learning, which is suited to multi-agent environments. In the experiments, the Nash Q-learning performance of agents using macro-actions was compared with that of agents using only primitive actions. Agents using four macro-actions achieved a 9.46% higher success rate in reaching the goal than agents using only primitive actions. Because an agent keeps searching for a better macro-action-based route even after a suitable movement path has been found with primitive actions alone, more changes in Q-values occurred, and the total sum of Q-values was 2.6 times higher. Finally, agents using macro-actions moved from the start position to the goal position with about half as many action selections. In short, by using macro-actions in a multi-agent environment, an agent improves performance and speeds up learning by shortening the distance traveled to the goal. (A semi-MDP-style sketch of the macro-action update follows below.)

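The macro-action idea can be sketched as a semi-MDP style backup: a macro runs k primitive steps and is updated once with the accumulated discounted reward and a gamma^k bootstrap. For brevity this sketch replaces the paper's Nash-equilibrium action selection with a plain max over the agent's own Q-values; the macro set and hyperparameters are likewise assumptions.

```python
import random
from collections import defaultdict

# Hypothetical macro-actions: fixed sequences of primitive moves.
MACROS = {
    "up2":    ["up", "up"],
    "right2": ["right", "right"],
    "up":     ["up"],
    "right":  ["right"],
}

class MacroQLearner:
    """Q-learning over macro-actions; the whole executed sequence is
    backed up in a single update (Nash selection replaced by max here)."""

    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        names = list(MACROS)
        if random.random() < self.eps:
            return random.choice(names)
        return max(names, key=lambda m: self.q[(state, m)])

    def update(self, state, macro, rewards, next_state):
        # rewards: one reward per primitive step of the executed macro.
        k = len(rewards)
        ret = sum(r * self.gamma ** i for i, r in enumerate(rewards))
        best = max(self.q[(next_state, m)] for m in MACROS)
        self.q[(state, macro)] += self.alpha * (
            ret + self.gamma ** k * best - self.q[(state, macro)])
```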

Adaptive Modular Q-Learning for Agents' Dynamic Positioning in Robot Soccer Simulation

  • Kwon, Ki-Duk;Kim, In-Cheol
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • ICROS 2001 ICCAS
    • /
    • pp.149.5-149
    • /
    • 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is the branch of machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment. Nevertheless ...


Development of Optimal Design Technique of RC Beam using Multi-Agent Reinforcement Learning

  • 강주원;김현수
    • Journal of the Korean Association for Spatial Structures
    • /
    • Vol. 23, No. 2
    • /
    • pp.29-36
    • /
    • 2023
  • Reinforcement learning (RL) is widely applied to various engineering fields. In particular, RL has shown successful performance in control problems such as vehicles, robotics, and active structural control systems. However, little research on applying RL to optimal structural design has been conducted to date. In this study, the possibility of applying RL to the structural design of a reinforced concrete (RC) beam was investigated. The RC beam design problem introduced in a previous study was used for the comparative study. The deep Q-network (DQN), a well-known RL algorithm that performs well in discrete action spaces, was adopted. The actions of a DQN agent must represent the design variables of the RC beam, but there are too many design variables to represent with the action of a conventional single DQN. To solve this problem, a multi-agent DQN was used in this study. For a more effective learning process, double DQN (DDQN), an advanced version of the conventional DQN, was employed. The multi-agent DDQN was trained to produce optimal RC beam designs satisfying the American Concrete Institute code (ACI 318) without any hand-labeled dataset. Five DDQN agents provide actions for beam width, beam depth, main rebar size, number of main rebars, and shear stirrup size, respectively. The five agents were trained for 10,000 episodes, and the performance of the multi-agent DDQN was evaluated on 100 test design cases. This study shows that the multi-agent DDQN algorithm can successfully produce structural design results for RC beams. (A compact sketch of the per-variable agents follows below.)
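
A compact PyTorch sketch of the per-variable agent structure described above: one double-DQN agent per design variable, where the online network selects the greedy action and the target network evaluates it. The state features, action-space sizes, network widths, and the omitted ACI 318 reward check are all assumptions for illustration.

```python
import random
import torch
import torch.nn as nn

# Assumed discrete choice counts for the five design variables:
# beam width, beam depth, rebar size, rebar count, stirrup size.
ACTION_SPACES = [8, 8, 6, 10, 4]
STATE_DIM = 4  # assumed design-condition features, e.g. span and loads

def make_net(n_actions):
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, n_actions))

class DesignAgent:
    """One double-DQN agent per design variable."""

    def __init__(self, n_actions, gamma=0.99, lr=1e-3):
        self.online, self.target = make_net(n_actions), make_net(n_actions)
        self.target.load_state_dict(self.online.state_dict())
        self.opt = torch.optim.Adam(self.online.parameters(), lr=lr)
        self.n_actions, self.gamma = n_actions, gamma

    def act(self, state, eps):
        if random.random() < eps:
            return random.randrange(self.n_actions)
        with torch.no_grad():
            return int(self.online(state).argmax())

    def learn(self, s, a, r, s2, done):
        # Double-DQN target: the online net picks the argmax action,
        # the target net evaluates it (reduces overestimation).
        with torch.no_grad():
            a2 = int(self.online(s2).argmax())
            y = r + (0.0 if done else self.gamma * float(self.target(s2)[a2]))
        loss = (self.online(s)[a] - y) ** 2
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

    def sync(self):
        # Copy online weights into the target net every few episodes.
        self.target.load_state_dict(self.online.state_dict())

agents = [DesignAgent(n) for n in ACTION_SPACES]
state = torch.rand(STATE_DIM)                 # toy design condition
design = [ag.act(state, eps=0.1) for ag in agents]
# A real reward would check the joint design against ACI 318 strength
# and detailing limits; that evaluation is omitted here.
```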

Design and Implementation of Robot Soccer Agent Based on Reinforcement Learning

  • 김인철
    • The KIPS Transactions: Part B
    • /
    • Vol. 9B, No. 2
    • /
    • pp.139-146
    • /
    • 2002
  • The robot soccer simulation game is a dynamic multi-agent environment. This paper proposes a new reinforcement learning method for the dynamic positioning of each agent in such an environment. Reinforcement learning is a machine learning method in which an agent learns, from the indirect and delayed rewards it receives from the environment, an optimal action strategy that maximizes cumulative reward. It therefore differs greatly from supervised learning in that input-output pairs are not provided directly as training examples. Moreover, model-free reinforcement learning algorithms such as Q-learning require neither learning nor predefining any model of the surrounding environment. Nevertheless, these algorithms can converge to the optimal action strategy if the agent can repeatedly experience every state-action pair sufficiently often. The biggest problem with naive reinforcement learning methods, however, is that they are hard to apply directly to more complex environments because the state space becomes too large. To solve this problem, this study proposes Adaptive Mediation-based Modular Q-Learning (AMMQL), an improvement of the existing Modular Q-Learning (MQL). In conventional modular Q-learning the way the results of the learning modules are combined is simple and fixed, whereas AMMQL combines the learning results of the modules more flexibly by assigning each module a different weight according to its contribution to the reward. AMMQL can therefore not only solve the problem of a large state space but also provide higher adaptability to dynamic environmental changes. In this paper, AMMQL was used as the learning method for the dynamic positioning of robot soccer agents, and the Cogitoniks soccer agent system was implemented on this basis.

Reinforcement Learning Approach to Agents' Dynamic Positioning in Robot Soccer Simulation Games

  • Kwon, Ki-Duk;Kim, In-Cheol
    • Korea Society for Simulation: Conference Proceedings
    • /
    • Korea Society for Simulation 2001, The Seoul International Simulation Conference
    • /
    • pp.321-324
    • /
    • 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is the branch of machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment. Nevertheless, they can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not scale up to more complex environments, due to the intractably large space of states. To address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning a different weight to each module according to its contribution to rewards. Therefore, in addition to resolving the problem of a large state space effectively, AMMQL shows higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents the details of its application to the dynamic positioning of robot soccer agents. (A minimal sketch of the mediation idea follows below.)

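The mediation idea above (modules learn on their own state abstractions and a mediator combines their Q-values with reward-dependent weights) can be sketched as follows. The specific weight-update rule used here, which shifts weight toward modules with smaller TD error, is a stand-in assumption; it is not the paper's contribution-based formula.

```python
import random
from collections import defaultdict

class Module:
    """One learning module; it sees only its own abstraction of the
    global state (e.g., ball-relative or opponent-relative features)."""

    def __init__(self, extract, alpha=0.1, gamma=0.9):
        self.extract = extract            # state -> local feature tuple
        self.q = defaultdict(float)
        self.alpha, self.gamma = alpha, gamma

    def update(self, s, a, r, s2, actions):
        f, f2 = self.extract(s), self.extract(s2)
        best = max(self.q[(f2, b)] for b in actions)
        td = r + self.gamma * best - self.q[(f, a)]
        self.q[(f, a)] += self.alpha * td
        return td

class AMMQLAgent:
    """Mediator: scores actions by a weighted sum of module Q-values."""

    def __init__(self, modules, actions, eps=0.1, beta=0.05):
        self.modules, self.actions = modules, list(actions)
        self.w = [1.0 / len(modules)] * len(modules)
        self.eps, self.beta = eps, beta

    def act(self, s):
        if random.random() < self.eps:
            return random.choice(self.actions)
        def score(a):
            return sum(w * m.q[(m.extract(s), a)]
                       for w, m in zip(self.w, self.modules))
        return max(self.actions, key=score)

    def update(self, s, a, r, s2):
        tds = [m.update(s, a, r, s2, self.actions) for m in self.modules]
        # Assumed weighting rule: weight drifts toward modules with the
        # smallest TD error, a stand-in for contribution-based weighting.
        scores = [1.0 / (1e-6 + abs(td)) for td in tds]
        total = sum(scores)
        self.w = [(1 - self.beta) * w + self.beta * sc / total
                  for w, sc in zip(self.w, scores)]
```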

Research of Foresight Knowledge by CMAC based Q-learning in Inhomogeneous Multi-Agent System

  • Hoshino, Yukinobu;Sakakura, Akira;Kamei, Katsuari
    • Korean Institute of Intelligent Systems: Conference Proceedings
    • /
    • Korea Fuzzy Logic and Intelligent Systems Society, ISIS 2003
    • /
    • pp.280-283
    • /
    • 2003
  • The purpose of our research is the acquisition of cooperative behaviors in an inhomogeneous multi-agent system. We use the fire panic problem as the experimental environment: a fire exists in the environment and spreads according to a fixed law at each step of the agents' behavior, and its heat increases every few steps. The fire is a source of uncertainty for the agents. Each agent must avoid the spreading fire while acquiring the behavior needed to reach a goal placed in the environment. In this paper, we observe how agents escape from the fire by cooperating with other agents. For this problem, we propose a CMAC-based Q-learning system designed for inhomogeneous multi-agent systems. (A minimal tile-coding sketch follows below.)

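For the CMAC-based Q-learning named above, here is a minimal tile-coding sketch: several offset tilings discretize the continuous state, Q(s, a) is the sum of the active tiles' weights, and each TD update is spread across those weights so that nearby states generalize. The tiling scheme, action set, and hyperparameters are assumptions; the fire-panic dynamics themselves are not modeled.

```python
import random
from collections import defaultdict

class CMACQ:
    """Q-learning with CMAC (tile-coding) function approximation."""

    def __init__(self, n_tilings=8, tile=1.0, actions=(0, 1, 2, 3),
                 alpha=0.1, gamma=0.95, eps=0.1):
        self.n, self.tile, self.actions = n_tilings, tile, tuple(actions)
        self.w = defaultdict(float)
        # Step size is shared across the active tiles.
        self.alpha, self.gamma, self.eps = alpha / n_tilings, gamma, eps

    def tiles(self, state, action):
        # Each tiling is offset by a fraction of the tile width.
        for t in range(self.n):
            off = t * self.tile / self.n
            yield (t, tuple(int((x + off) // self.tile) for x in state), action)

    def value(self, state, action):
        return sum(self.w[ix] for ix in self.tiles(state, action))

    def act(self, state):
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.value(state, a))

    def update(self, state, action, reward, next_state):
        best = max(self.value(next_state, a) for a in self.actions)
        td = reward + self.gamma * best - self.value(state, action)
        for ix in self.tiles(state, action):
            self.w[ix] += self.alpha * td
```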

The Application of Industrial Inspection of LED

  • 왕숙;정길도
    • The Institute of Electronics Engineers of Korea (IEEK): Conference Proceedings
    • /
    • IEEK 2009 Information and Control Symposium Proceedings
    • /
    • pp.91-93
    • /
    • 2009
  • In this paper, we present a Q-learning method for adaptive traffic signal control based on multi-agent technology. The structure is composed of six phase agents and one intersection agent. A wireless communication network provides the possibility of cooperation among the agents. Q-learning, a form of reinforcement learning, is adopted as the control mechanism because it can acquire optimal control strategies from delayed rewards; furthermore, we adopt a dynamic learning method instead of a static one, which is more practical. Simulation results indicate that it is more effective than a traditional signal system.
