• Title/Abstract/Keywords: Multi-Agent Learning

Search results: 112 (processing time: 0.025 seconds)

Multi-agent Q-learning based Admission Control Mechanism in Heterogeneous Wireless Networks for Multiple Services

  • Chen, Jiamei; Xu, Yubin; Ma, Lin; Wang, Yao
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 7, No. 10 / pp.2376-2394 / 2013
  • In order to ensure both the overall system capacity and users' QoS requirements in heterogeneous wireless networks, the admission control mechanism should be well designed. In this paper, a Multi-agent Q-learning based Admission Control Mechanism (MQACM) is proposed to handle new and handoff call access problems appropriately. MQACM obtains the optimal decision policy by using an improved form of the single-agent Q-learning method, the Multi-agent Q-learning (MQ) method. The MQ method is introduced here for the first time to solve the admission control problem in heterogeneous wireless networks. In addition, different priorities are allocated to multiple services so that MQACM performs well even in congested network scenarios. Both analysis and simulation results show that the proposed method not only outperforms existing schemes in call blocking probability and handoff dropping probability, but also offers better network universality and stability.
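A minimal toy sketch of the kind of multi-agent Q-learning the abstract describes (not the paper's MQACM): an independent Q-learner decides whether to admit a call, with service priorities weighting the reward, echoing the priority allocation for congested scenarios. The state encoding, the reward model, and all names are illustrative assumptions.

```python
import random

random.seed(0)
ACTIONS = ("accept", "reject")
PRIORITY = {"voice": 3.0, "video": 2.0, "data": 1.0}  # higher = more urgent

class AdmissionAgent:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = {}  # (load_level, service) -> {action: value}
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        values = self.q.get(state, {a: 0.0 for a in ACTIONS})
        return max(values, key=values.get)

    def update(self, state, action, reward, next_state):
        values = self.q.setdefault(state, {a: 0.0 for a in ACTIONS})
        next_best = max(self.q.get(next_state, {a: 0.0 for a in ACTIONS}).values())
        values[action] += self.alpha * (reward + self.gamma * next_best - values[action])

def toy_reward(load_level, service, action):
    # Admitting pays off in proportion to priority unless the cell is congested.
    if action == "accept":
        return PRIORITY[service] if load_level < 2 else -PRIORITY[service]
    return 0.0

agent = AdmissionAgent()
for _ in range(3000):
    load = random.randint(0, 2)             # 0 = light load, 2 = congested
    service = random.choice(sorted(PRIORITY))
    state = (load, service)
    action = agent.choose(state)
    agent.update(state, action, toy_reward(load, service, action), state)
```

After training, the learned Q-values favor admitting high-priority calls under light load and rejecting them under congestion, which is the qualitative behavior the mechanism targets.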

다중 에이전트 강화학습을 이용한 RC보 최적설계 기술개발 (Development of Optimal Design Technique of RC Beam using Multi-Agent Reinforcement Learning)

  • 강주원; 김현수
    • Journal of Korean Association for Spatial Structures / Vol. 23, No. 2 / pp.29-36 / 2023
  • Reinforcement learning (RL) is widely applied across engineering fields. In particular, RL has performed well on control problems such as vehicles, robotics, and active structural control systems. However, little research on applying RL to optimal structural design has been conducted to date. In this study, the applicability of RL to the structural design of a reinforced concrete (RC) beam was investigated. The RC beam design problem introduced in a previous study was used for comparison. The deep Q-network (DQN), a well-known RL algorithm that performs well in discrete action spaces, was adopted. The action of the DQN agent must represent the design variables of the RC beam, but there are too many design variables to represent with the action of a conventional DQN. To solve this problem, a multi-agent DQN was used. For a more effective learning process, double DQN (DDQN), an advanced version of the conventional DQN, was employed. The multi-agent DDQN was trained for the optimal structural design of an RC beam satisfying ACI 318 without any hand-labeled dataset. Five DDQN agents provide actions for beam width, beam depth, main rebar size, number of main rebars, and shear stirrup size, respectively. The agents were trained for 10,000 episodes, and the performance of the multi-agent DDQN was evaluated on 100 test design cases. This study shows that the multi-agent DDQN algorithm can successfully produce structural design results for RC beams.
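A toy sketch of the multi-agent decomposition only (the paper trains DDQN neural networks; plain tabular learners stand in here): each agent owns one RC-beam design variable and all agents learn from a shared reward. The variable ranges, the target design, and the "code check" reward are assumptions, not ACI 318.

```python
import random

random.seed(2)
VARS = {"width": 4, "depth": 4, "rebar_size": 3, "rebar_count": 4, "stirrup": 3}
TARGET = {"width": 2, "depth": 3, "rebar_size": 1, "rebar_count": 2, "stirrup": 0}

q = {name: [0.0] * n for name, n in VARS.items()}   # one Q-table per agent
alpha, epsilon = 0.05, 0.3

def shared_reward(design):
    # Stand-in for a strength/serviceability check: fewer deviations from
    # the (toy) optimal design -> higher shared reward for all agents.
    misses = sum(design[k] != TARGET[k] for k in design)
    return 1.0 - 0.4 * misses

for _ in range(8000):
    design = {}
    for name, table in q.items():
        if random.random() < epsilon:
            design[name] = random.randrange(VARS[name])
        else:
            design[name] = max(range(VARS[name]), key=table.__getitem__)
    r = shared_reward(design)
    for name, table in q.items():       # every agent updates on the same reward
        a = design[name]
        table[a] += alpha * (r - table[a])

best = {name: max(range(VARS[name]), key=table.__getitem__) for name, table in q.items()}
```

Because each agent only searches its own small range, the joint design space is never enumerated, which is the point of splitting the design variables across agents.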

Explicit Dynamic Coordination Reinforcement Learning Based on Utility

  • Si, Huaiwei; Tan, Guozhen; Yuan, Yifu; Peng, Yanfei; Li, Jianping
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 3 / pp.792-812 / 2022
  • Multi-agent systems often need coordination to learn a task more effectively. Although the introduction of deep learning has addressed the state-space problem, multi-agent learning can remain infeasible because of the joint action space. A large joint action space can be sparsified according to an implicit or explicit coordination structure, which ensures reasonable coordinated actions. In general, a multi-agent system is dynamic, which makes both the relations among agents and the coordination structure dynamic. An explicit coordination structure can therefore better represent the coordinative relationships among agents and achieve better coordination between them. Inspired by the maximization of social group utility, we dynamically construct a factor graph as an explicit coordination structure expressing the coordinative relationships according to the utilities among agents, and estimate the joint action values based on local utility transfer over the factor graph. We apply these techniques to multiple intelligent vehicle systems, where the state and action spaces are large and the agents interact heavily. The results on multiple intelligent vehicle systems demonstrate the efficiency and effectiveness of the proposed methods.
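A small sketch of the coordination-graph idea (the paper builds the factor graph dynamically from utilities; a fixed chain with hand-picked pairwise utility tables is assumed here): the joint action is found by variable elimination over the graph instead of enumerating the full joint action space.

```python
ACTIONS = (0, 1)      # e.g. keep-lane / change-lane for each vehicle

# pairwise utility factors on a chain: agent1-agent2 and agent2-agent3
f12 = {(0, 0): 2.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 1.0}
f23 = {(0, 0): 0.0, (0, 1): 3.0, (1, 0): 1.0, (1, 1): 0.0}

def joint_value(a1, a2, a3):
    return f12[(a1, a2)] + f23[(a2, a3)]

def best_joint():
    # eliminate agent 3: the message to agent 2 carries agent 3's best response
    m3 = {a2: max(f23[(a2, a3)] for a3 in ACTIONS) for a2 in ACTIONS}
    # maximise the remaining (a1, a2) factor plus the incoming message
    a1, a2 = max(
        ((x, y) for x in ACTIONS for y in ACTIONS),
        key=lambda p: f12[p] + m3[p[1]],
    )
    a3 = max(ACTIONS, key=lambda a: f23[(a2, a)])
    return a1, a2, a3
```

On a chain of n agents this costs O(n·|A|²) rather than |A|ⁿ, which is why a sparse coordination structure makes the joint maximisation tractable.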

Adaptive Modular Q-Learning for Agents' Dynamic Positioning in Robot Soccer Simulation

  • Kwon, Ki-Duk; Kim, In-Cheol
    • Institute of Control, Robotics and Systems: Conference Proceedings / ICCAS 2001 / pp.149.5-149 / 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment. Nevertheless ...
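A bare-bones Q-learning example of the model-free, delayed-reward learning the abstract describes (the paper's modular positioning scheme is richer). States are cells 0..4 of a corridor, and the only reward arrives at the goal cell; everything here is an illustrative assumption.

```python
import random

random.seed(3)
N, GOAL = 5, 4
ACTIONS = (-1, +1)                   # move left / move right
q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.5

for _ in range(300):
    s = 0
    for _ in range(100):             # step cap per episode
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == GOAL else 0.0           # delayed reward: goal only
        target = r if s2 == GOAL else r + gamma * max(q[(s2, x)] for x in ACTIONS)
        q[(s, a)] += alpha * (target - q[(s, a)])
        s = s2
        if s == GOAL:
            break

# the greedy policy learned purely from the delayed goal reward
policy = {s: max(ACTIONS, key=lambda x: q[(s, x)]) for s in range(N)}
```

No input-output pairs and no environment model are supplied, which is exactly the contrast with supervised learning the abstract draws.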

Research of Foresight Knowledge by CMAC based Q-learning in Inhomogeneous Multi-Agent System

  • Hoshino, Yukinobu; Sakakura, Akira; Kamei, Katsuari
    • Korean Institute of Intelligent Systems: Conference Proceedings / ISIS 2003 / pp.280-283 / 2003
  • The purpose of our research is the acquisition of cooperative behaviors in an inhomogeneous multi-agent system. We use the fire panic problem as the experimental environment: a fire exists in the environment and spreads according to a fixed law at each step of the agents' behavior. The goal of each agent is to reach an established goal position without touching the fire. The fire intensifies every few steps and is uncertain from the agent's point of view, so the agent has to avoid the spreading fire while acquiring the behavior needed to reach the goal. In this paper, we observe how agents escape from the fire by cooperating with other agents. For this problem, we propose a CMAC-based Q-learning system for inhomogeneous multi-agent systems.
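A sketch of CMAC (tile-coding) function approximation of the kind the paper pairs with Q-learning: several offset tilings each contribute one weight, so nearby states share tiles and generalise. The 1-D state, tiling counts, and training targets are illustrative assumptions.

```python
N_TILINGS, TILE_WIDTH = 4, 1.0

def active_tiles(x):
    # one active tile per tiling; tilings are offset by TILE_WIDTH/N_TILINGS
    return [
        (t, int((x + t * TILE_WIDTH / N_TILINGS) // TILE_WIDTH))
        for t in range(N_TILINGS)
    ]

weights = {}

def value(x):
    # the CMAC output is the sum of the weights of the active tiles
    return sum(weights.get(tile, 0.0) for tile in active_tiles(x))

def train(x, target, alpha=0.1):
    # spread the error correction evenly across the active tiles
    err = target - value(x)
    for tile in active_tiles(x):
        weights[tile] = weights.get(tile, 0.0) + alpha * err / N_TILINGS

for _ in range(200):
    train(0.5, 1.0)      # e.g. a "safe" state
    train(2.5, -1.0)     # e.g. a "near the fire" state
```

Because 0.5 and 0.6 activate the same four tiles in this configuration, training one also sets the value of the other, which is the generalisation CMAC buys over a plain table.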

강화학습을 이용한 다중 에이전트 제어 전략 (Multi-Agent Control Strategy using Reinforcement Learning)

  • 이형일
    • Journal of Korea Multimedia Society / Vol. 6, No. 5 / pp.937-944 / 2003
  • The most important problems in a multi-agent system are for the agents to achieve a goal through efficient coordination and to avoid collisions with other agents. This paper proposes a new strategy for efficiently achieving the goal of the pursuit problem. The proposed control strategy uses reinforcement learning to control the multiple agents and takes the distance and spatial relationships between agents into account.
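A toy sketch of the distance-aware control idea the abstract outlines: a predator on a 1-D line learns to approach the prey, with a penalty for entering the other agent's cell. Grid size, positions, and reward values are illustrative assumptions, not the paper's pursuit-problem setup.

```python
import random

random.seed(4)
N = 10
PREY, OTHER, START = 9, 2, 5         # prey cell, teammate's cell, start cell
ACTIONS = (-1, +1)
q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def step(s, a):
    s2 = min(max(s + a, 0), N - 1)
    if s2 == PREY:
        return s2, 10.0, True        # prey caught: episode ends
    if s2 == OTHER:
        return s2, -5.0, False       # collision with the other agent
    # shaping: small reward for shrinking the distance to the prey
    return s2, 0.1 * (abs(PREY - s) - abs(PREY - s2)), False

for _ in range(500):
    s = START
    for _ in range(50):
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(q[(s2, x)] for x in ACTIONS)
        q[(s, a)] += alpha * (target - q[(s, a)])
        s = s2
        if done:
            break
```

The distance-based shaping term plays the role of the abstract's distance relation, and the collision penalty its spatial relation between agents.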

웹기반 협력 학습을 위한 멀티에이전트간의 통신에 관한 연구 (A Study of Communication between Multi-Agents for Web Based Collaborative Learning)

  • 이철환; 한선관
    • Journal of the Korean Association of Information Education / Vol. 3, No. 2 / pp.41-53 / 2000
  • This study concerns communication between multi-agents that support learners in a web-based collaborative learning system. We first review the overall features of agent systems and examine KQML, a language for inter-agent communication. We then present an agent-based system architecture for collaborative learning and a method for communication among the agents. The collaborative learning system was designed and implemented in Java, and experiments examined the effectiveness of a collaborative learning system based on inter-agent communication.
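A minimal KQML-style performative builder of the sort the abstract's agents exchange. The field values here are hypothetical; KQML itself defines the performatives (e.g. ask-one, tell) and reserved parameters such as :sender, :receiver, :language, and :content.

```python
def kqml(performative, **params):
    # render a performative as an s-expression with :keyword parameters
    body = " ".join(f":{k.replace('_', '-')} {v}" for k, v in params.items())
    return f"({performative} {body})"

msg = kqml(
    "ask-one",
    sender="learner-agent",
    receiver="tutor-agent",
    language="KIF",
    content="(progress learner1 ?level)",
)
```

Keeping the content language (:language KIF here) separate from the performative is the KQML design choice that lets heterogeneous agents interoperate.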

Q-learning for Intersection Traffic Flow Control Based on Agents

  • 주선; 정길도
    • The Institute of Electronics Engineers of Korea: Proceedings of the 2009 Information and Control Symposium / pp.94-96 / 2009
  • In this paper, we present a Q-learning method for adaptive traffic signal control on the basis of multi-agent technology. The structure is composed of six phase agents and one intersection agent, and a wireless communication network enables cooperation among the agents. Q-learning, a kind of reinforcement learning, is adopted as the control algorithm because it can acquire optimal control strategies from delayed rewards; furthermore, we adopt a dynamic learning method instead of a static one, which is more practical. Simulation results indicate that it is more effective than a traditional signal system.
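A toy sketch of the idea only (agent-based, Q-learning-driven signal control), not the paper's six-phase-plus-intersection-agent architecture: a single agent learns which of two phases to serve from a queue-clearance reward. The state encoding and queue model are illustrative assumptions.

```python
import random

random.seed(5)
ACTIONS = (0, 1)        # 0: serve north-south phase, 1: serve east-west phase
q = {}
alpha, epsilon = 0.1, 0.1

def state_of(queues):
    # coarse state: which approach currently has the longer queue
    return 0 if queues[0] >= queues[1] else 1

for _ in range(3000):
    queues = [random.randint(0, 10), random.randint(0, 10)]
    s = state_of(queues)
    table = q.setdefault(s, {a: 0.0 for a in ACTIONS})
    a = random.choice(ACTIONS) if random.random() < epsilon else max(table, key=table.get)
    reward = queues[a]              # vehicles released by the chosen phase
    table[a] += alpha * (reward - table[a])

policy = {s: max(q[s], key=q[s].get) for s in q}
```

The learned policy serves whichever approach has the longer queue, the adaptive behavior a fixed-time signal plan cannot provide.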

Opportunistic Spectrum Access with Discrete Feedback in Unknown and Dynamic Environment: A Multi-agent Learning Approach

  • Gao, Zhan; Chen, Junhong; Xu, Yuhua
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 9, No. 10 / pp.3867-3886 / 2015
  • This article investigates the problem of opportunistic spectrum access in a dynamic environment in which the signal-to-noise ratio (SNR) is time-varying. Unlike existing work based on continuous feedback, we consider the more practical scenario in which the transmitter receives an Acknowledgment (ACK) if the received SNR is larger than the required threshold, and otherwise a Non-Acknowledgment (NACK); that is, the feedback is discrete. Several applications with different threshold values are also considered. The channel selection problem is formulated as a non-cooperative game and proved to be a potential game, which has at least one pure-strategy Nash equilibrium. A multi-agent Q-learning algorithm is then proposed to converge to Nash equilibria of the game. Opportunistic spectrum access with multiple discrete feedbacks is also investigated. Finally, simulation results verify that the proposed multi-agent Q-learning algorithm is applicable to situations with both binary and multiple discrete feedbacks.
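A toy of the setting only (not the paper's algorithm or its game-theoretic analysis): users pick channels and receive a discrete ACK (1) only when no other user chose the same channel, a stand-in for the SNR-threshold test. Each user runs stateless Q-learning on this binary feedback; all parameters are illustrative assumptions.

```python
import random

random.seed(6)
N_USERS, N_CHANNELS = 3, 3
q = [[0.0] * N_CHANNELS for _ in range(N_USERS)]
alpha, epsilon = 0.1, 0.1

for _ in range(3000):
    picks = []
    for u in range(N_USERS):
        if random.random() < epsilon:
            picks.append(random.randrange(N_CHANNELS))
        else:
            picks.append(max(range(N_CHANNELS), key=q[u].__getitem__))
    for u, c in enumerate(picks):
        ack = 1.0 if picks.count(c) == 1 else 0.0   # discrete ACK/NACK feedback
        q[u][c] += alpha * (ack - q[u][c])

# greedy choices after learning: a collision-free channel assignment
choices = [max(range(N_CHANNELS), key=q[u].__getitem__) for u in range(N_USERS)]
```

The collision-free assignment the learners settle into is a pure-strategy Nash equilibrium of this toy game: no user can improve its ACK rate by unilaterally switching channels.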

유비쿼터스 웹 학습 환경을 위한 코스 스케줄링 멀티 에이전트 시스템 (A Course Scheduling Multi-Agent System For Ubiquitous Web Learning Environment)

  • 한승현; 류동엽; 서정만
    • Journal of the Korea Society of Computer and Information / Vol. 10, No. 4 / pp.365-373 / 2005
  • As web-based education systems move toward ubiquitous environments, new instructional models for various kinds of online learning are in demand. Demand for courseware tailored to learners' needs is also growing, so the need for efficient, automated educational agents in web-based education systems is increasingly recognized. However, many education systems under study today neither serve courses well matched to a learner's disposition nor adequately provide continuous feedback and support for re-studying the parts of a course in which the learner is weak. This paper proposes a learner-centered course-scheduling multi-agent system for ubiquitous environments that uses a weakness-analysis algorithm. The proposed system first analyzes the learner's assessment results and computes the learner's achievement level; this achievement level is applied to the agent's schedule to provide a course suited to the learner, and through repeated study matched to his or her ability the learner accomplishes active mastery learning.
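A sketch of the weakness-analysis step the abstract describes: compute a per-topic achievement level from assessment results and reschedule the topics below a mastery threshold, weakest first. The threshold value and the sample data are illustrative assumptions.

```python
THRESHOLD = 0.7   # hypothetical mastery cutoff

def achievement(results):
    # results maps each topic to its list of 0/1 item scores
    return {topic: sum(v) / len(v) for topic, v in results.items()}

def reschedule(results):
    scores = achievement(results)
    weak = [t for t, s in scores.items() if s < THRESHOLD]
    return sorted(weak, key=scores.get)   # biggest gaps first

results = {"loops": [1, 0, 0, 0], "lists": [1, 1, 1, 0], "functions": [1, 0, 1, 0]}
```

Here `reschedule(results)` queues "loops" and then "functions" for re-study, while "lists" (0.75 ≥ threshold) is considered mastered and dropped from the schedule.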
