• Title/Summary/Keyword: Multi-Agent Reinforcement Learning (멀티 에이전트 강화 학습)

Search Results: 31

Cooperative Multi-agent Reinforcement Learning on Sparse Reward Battlefield Environment using QMIX and RND in Ray RLlib

  • Minkyoung Kim
    • Journal of the Korea Society of Computer and Information / v.29 no.1 / pp.11-19 / 2024
  • Multi-agent systems can be utilized in various real-world cooperative environments such as battlefield engagements and unmanned transport vehicles. In battlefield engagements, where dense reward design is difficult due to limited domain knowledge, it is crucial to consider situations that must be learned from explicit sparse rewards. This paper explores the collaborative potential among allied agents in a battlefield scenario. Using the Multi-Robot Warehouse Environment (RWARE) as a sparse reward environment, we define analogous problems and establish evaluation criteria. We construct a learning environment with the QMIX algorithm from the reinforcement learning library Ray RLlib, enhance the Agent Network of QMIX, and integrate Random Network Distillation (RND). This enables patterns and temporal features to be extracted from the agents' partial observations, confirming that intrinsic rewards can improve the acquisition of sparse reward experiences.
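As a rough illustration of the intrinsic-reward idea behind RND (not the paper's Ray RLlib implementation; the networks, dimensions, and learning rate below are simplified linear stand-ins), a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# RND keeps a fixed, randomly initialized target network and a trainable
# predictor network. The predictor is trained to match the target's output;
# its prediction error on an observation serves as the intrinsic reward:
# large for novel observations, small for familiar ones.
OBS_DIM, EMB_DIM, LR = 8, 4, 0.01

W_target = rng.normal(size=(OBS_DIM, EMB_DIM))       # frozen target
W_pred = rng.normal(size=(OBS_DIM, EMB_DIM)) * 0.1   # trained online

def intrinsic_reward(obs):
    """Squared prediction error between predictor and frozen target."""
    err = obs @ W_pred - obs @ W_target
    return float(np.sum(err ** 2))

def update_predictor(obs):
    """One gradient step on the predictor's squared-error loss."""
    global W_pred
    err = obs @ W_pred - obs @ W_target
    W_pred -= LR * 2.0 * np.outer(obs, err)

# A repeatedly seen observation becomes familiar: its intrinsic reward
# shrinks as the predictor learns, while an unseen observation's does not.
familiar = rng.normal(size=OBS_DIM)
before = intrinsic_reward(familiar)
for _ in range(300):
    update_predictor(familiar)
after = intrinsic_reward(familiar)
novel = rng.normal(size=OBS_DIM)
print(after < before)                   # familiar obs: reward has decayed
print(intrinsic_reward(novel) > after)  # novel obs: still rewarding
```

In the paper's setting this intrinsic reward is added to the sparse extrinsic reward, steering agents toward under-explored observations.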

Research Trends on Deep Reinforcement Learning (심층 강화학습 기술 동향)

  • Jang, S.Y.;Yoon, H.J.;Park, N.S.;Yun, J.K.;Son, Y.S.
    • Electronics and Telecommunications Trends / v.34 no.4 / pp.1-14 / 2019
  • Recent trends in deep reinforcement learning (DRL) have revealed considerable improvements to DRL algorithms in terms of performance, learning stability, and computational efficiency. DRL has also vastly expanded the range of scenarios it covers (e.g., partial observability; cooperation, competition, coexistence, and communication among multiple agents; multi-task learning; decentralized intelligence). These advances have cultivated multi-agent reinforcement learning research. DRL applications are likewise expanding beyond robotics, natural language processing, and computer vision into a wide array of fields such as finance, healthcare, chemistry, and even art. In this report, we briefly summarize various DRL techniques and research directions.

Study for Feature Selection Based on Multi-Agent Reinforcement Learning (다중 에이전트 강화학습 기반 특징 선택에 대한 연구)

  • Kim, Miin-Woo;Bae, Jin-Hee;Wang, Bo-Hyun;Lim, Joon-Shik
    • Journal of Digital Convergence / v.19 no.12 / pp.347-352 / 2021
  • In this paper, we propose a method for finding feature subsets that are effective for classification in an input dataset by using multi-agent reinforcement learning. In machine learning, it is crucial to find features suitable for classification. A dataset may have numerous features; some may be effective for classification or prediction, while others may have little or even a negative effect on the results. Selecting features to increase classification or prediction accuracy is therefore a critical problem. To solve it, we propose a feature selection method based on reinforcement learning. Each feature has one agent, which determines whether that feature is selected. After obtaining rewards for the feature subsets selected and not selected by the agents, the Q-value of each agent is updated by comparing the rewards. This reward comparison between the two subsets helps agents determine whether their actions were right. These processes are repeated for the given number of episodes, and the final features are then selected. Applying this method to the Wisconsin Breast Cancer, Spambase, Musk, and Colon Cancer datasets yielded accuracy improvements of 0.0385, 0.0904, 0.1252, and 0.2055, respectively, with final classification accuracies of 0.9789, 0.9311, 0.9691, and 0.9474. These results show that the proposed method can properly select features that are effective for classification and increase classification accuracy.
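The per-feature agent scheme can be sketched as follows; the `evaluate` function below is a toy stand-in for a real classifier's accuracy, and all hyperparameters are illustrative rather than taken from the paper:

```python
import random

random.seed(7)

# Toy stand-in for classifier accuracy: features 0 and 2 are informative,
# the rest only add noise. (In the paper, this would be a real classifier
# evaluated on the dataset.)
N_FEATURES = 5

def evaluate(subset):
    score = 0.5
    score += 0.2 * (0 in subset) + 0.2 * (2 in subset)
    score -= 0.05 * sum(1 for f in subset if f not in (0, 2))
    return score

# One agent per feature; each holds Q-values for its two actions
# (0 = drop the feature, 1 = keep it).
Q = [[0.0, 0.0] for _ in range(N_FEATURES)]
ALPHA, EPS = 0.1, 0.2

for episode in range(500):
    # Each agent picks its action epsilon-greedily.
    actions = [random.randrange(2) if random.random() < EPS
               else max((0, 1), key=lambda a: Q[f][a])
               for f in range(N_FEATURES)]
    chosen = {f for f, a in enumerate(actions) if a == 1}
    # Reward comparison: each agent is judged by the accuracy difference
    # between the subset with its feature and the subset without it.
    for f in range(N_FEATURES):
        with_f = evaluate(chosen | {f})
        without_f = evaluate(chosen - {f})
        reward = with_f - without_f if actions[f] == 1 else without_f - with_f
        Q[f][actions[f]] += ALPHA * (reward - Q[f][actions[f]])

selected = [f for f in range(N_FEATURES) if Q[f][1] > Q[f][0]]
print(selected)  # the informative features should be kept
```

The reward comparison between the two subsets is what lets each agent judge its own action without a centralized critic.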

Card Battle Game Agent Based on Reinforcement Learning with Play Level Control (플레이 수준 조절이 가능한 강화학습 기반 카드형 대전 게임 에이전트)

  • Yong Cheol Lee;Chilwoo Lee
    • Smart Media Journal / v.13 no.2 / pp.32-43 / 2024
  • Game agents, which play a game on behalf of or against players, are a crucial component of game satisfaction. However, it takes a lot of time and effort to create game agents for various game levels, environments, and players, and when the game environment changes, such as when content is added or characters are updated, new game agents must be developed, so the development difficulty gradually increases. It is also important to have a game agent whose play level can be adjusted to different players: an agent that can play at various levels is more useful, and can satisfy more players, than a single high-level agent. In this paper, we propose a method for training game agents whose play level can be controlled and which can be rapidly developed and fine-tuned for various game environments and changes. For reinforcement learning, we apply IMPALA, a policy-based distributed reinforcement learning method, for flexible handling of various behavioral structures and fast learning. Once reinforcement learning is complete, actions are chosen by sampling with the softmax-temperature method. The results show that the agent's play level decreases as the temperature value increases, demonstrating that the play level can be easily controlled.
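The softmax-temperature sampling described above can be sketched as follows (the action values are hypothetical; only the temperature mechanism is shown):

```python
import math
import random

random.seed(0)

def softmax_temperature(q_values, temperature):
    """Convert action values to sampling probabilities. A low temperature
    concentrates probability on the best action (strong play); a high
    temperature flattens the distribution toward random play (weaker play)."""
    scaled = [q / temperature for q in q_values]
    m = max(scaled)                           # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_action(q_values, temperature):
    probs = softmax_temperature(q_values, temperature)
    return random.choices(range(len(q_values)), weights=probs)[0]

# Hypothetical action values from a trained policy: action 2 is best.
q = [1.0, 0.5, 3.0, 0.2]
for t in (0.1, 1.0, 10.0):
    probs = softmax_temperature(q, t)
    print(f"T={t}: P(best action)={probs[2]:.3f}")
```

Raising the temperature lowers the probability of the best action, which is exactly how a single trained policy can be dialed down to weaker play levels.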

Research Trends of Multi-agent Collaboration Technology for Artificial Intelligence Bots (AI Bots를 위한 멀티에이전트 협업 기술 동향)

  • Kang, D.;Jung, J.Y.;Lee, C.H.;Park, M.;Lee, J.W.;Lee, Y.J.
    • Electronics and Telecommunications Trends / v.37 no.6 / pp.32-42 / 2022
  • Recently, decentralized approaches to artificial intelligence (AI) development, such as federated learning, are drawing attention as the cost and time inefficiency of AI development increase due to explosive data growth and rapid environmental changes. Collaborative AI is an alternative to centralized AI: it dynamically organizes collaborative groups of different agents that share data, knowledge, and experience, and it uses distributed resources to derive enhanced knowledge and analysis models through collaborative learning to solve given problems. This article surveys and analyzes recent technologies and applications relevant to research on multi-agent collaboration among AI bots, which can provide this collaborative AI functionality autonomously.

Fast Navigation in Dynamic 3D Game Environment Using Reinforcement Learning (강화 학습을 사용한 동적 게임 환경에서의 빠른 경로 탐색)

  • Yi, Seung-Joon;Zhang, Byoung-Tak
    • Proceedings of the Korean Information Science Society Conference / 2005.07b / pp.703-705 / 2005
  • Path finding in continuous, dynamic real-world settings has long been a central problem in mobile robotics. As computer performance has greatly improved, computer games have begun to use near-realistic continuous 3D environment models, demanding path-finding capability under more complex and dynamic environment models. Value iteration, a reinforcement learning-based path-finding algorithm, has several advantages suited to real-time multi-agent environments, but it slows down considerably as the problem grows. In this paper, we propose an environment model and a path-finding algorithm based on a small-world network model so that agents can adapt quickly to dynamic changes in continuous 3D situations. Experiments in a 3D game environment confirm that the proposed algorithm finds good paths in continuous, complex, real-time environments and adapts quickly when changes in the environment are observed.
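The value iteration algorithm referred to above can be sketched on a small grid world (the map, obstacles, and step cost are illustrative; the abstract's small-world network model is not reproduced):

```python
# Value iteration on a small grid: V(s) converges to the negated number of
# steps to the goal, and greedily following V yields a shortest path.
W, H = 5, 4
GOAL = (4, 3)
WALLS = {(2, 1), (2, 2)}                  # hypothetical obstacles
GAMMA, STEP_COST = 1.0, -1.0

states = [(x, y) for x in range(W) for y in range(H) if (x, y) not in WALLS]

def neighbors(s):
    x, y = s
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < W and 0 <= ny < H and (nx, ny) not in WALLS:
            yield (nx, ny)

V = {s: 0.0 for s in states}
for _ in range(50):                        # sweep until values stabilize
    for s in states:
        if s == GOAL:
            continue                       # goal is absorbing, V = 0
        V[s] = max(STEP_COST + GAMMA * V[n] for n in neighbors(s))

# Greedy policy: from any cell, move to the neighbor with the highest value.
path, s = [(0, 0)], (0, 0)
while s != GOAL:
    s = max(neighbors(s), key=V.get)
    path.append(s)
print(len(path) - 1)  # number of steps on the found path
```

The weakness noted in the abstract is visible here: each sweep touches every state, so the cost of the sweeps grows with the size of the state space, which is what the small-world model is meant to mitigate.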


A Routing Algorithm based on Deep Reinforcement Learning in SDN (SDN에서 심층강화학습 기반 라우팅 알고리즘)

  • Lee, Sung-Keun
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.16 no.6 / pp.1153-1160 / 2021
  • This paper proposes a routing algorithm that determines the optimal path using deep reinforcement learning in software-defined networks. The deep reinforcement learning model is based on DQN; its inputs are the current network state and the source and destination nodes, and its output is a list of routes from source to destination. The routing task is defined as a discrete control problem, and the quality-of-service parameters considered for routing are delay, bandwidth, and loss rate. The routing agent classifies the appropriate service class according to the user's quality-of-service profile and, from the current network state collected from the SDN, derives the service class that each link can provide. Based on this information, it learns to select a route that satisfies the required service level from the source to the destination. The simulation results indicate that after a certain number of episodes, the proposed algorithm selects correct paths, confirming that learning is performed successfully.
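As a simplified, tabular stand-in for this routing idea (the topology and link delays below are hypothetical, and the paper uses a DQN rather than this Q-table), the negative-delay reward and per-hop action selection can be sketched as:

```python
import random

random.seed(1)

# Hypothetical topology: link delays in ms. The agent learns, per node,
# which next hop minimizes the total delay to the destination D.
LINKS = {
    ("A", "B"): 10, ("A", "C"): 2,
    ("B", "D"): 2,  ("C", "B"): 3, ("C", "D"): 15,
}

def hops(node):
    return [b for (a, b) in LINKS if a == node]

SRC, DST = "A", "D"
Q = {(a, b): 0.0 for (a, b) in LINKS}
ALPHA, GAMMA, EPS = 0.2, 1.0, 0.3

for _ in range(2000):
    node = SRC
    while node != DST:
        nxt = (random.choice(hops(node)) if random.random() < EPS
               else max(hops(node), key=lambda n: Q[(node, n)]))
        reward = -LINKS[(node, nxt)]              # negative delay
        future = 0.0 if nxt == DST else max(Q[(nxt, n)] for n in hops(nxt))
        Q[(node, nxt)] += ALPHA * (reward + GAMMA * future - Q[(node, nxt)])
        node = nxt

# Greedy route after training: follow the best next hop at each node.
route, node = [SRC], SRC
while node != DST:
    node = max(hops(node), key=lambda n: Q[(node, n)])
    route.append(node)
print(route)  # the lowest-total-delay route
```

Here the agent discovers that the three-hop route through C and B beats the shorter-looking direct alternatives, which is the kind of QoS-aware trade-off the DQN learns over richer state inputs.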

Exploring the Effectiveness of GAN-based Approach and Reinforcement Learning in Character Boxing Task (캐릭터 복싱 과제에서 GAN 기반 접근법과 강화학습의 효과성 탐구)

  • Seoyoung Son;Taesoo Kwon
    • Journal of the Korea Computer Graphics Society / v.29 no.4 / pp.7-16 / 2023
  • For decades, creating a desired locomotive motion in a goal-oriented manner has been a challenge in character animation. Data-driven methods using generative models have demonstrated efficient ways of predicting long sequences of motions without the need for explicit conditioning. While these methods produce high-quality long-term motions, they can be limited when it comes to synthesizing motion for challenging novel scenarios, such as punching a random target. A state-of-the-art solution to overcome this limitation is to use a GAN discriminator to imitate motion data clips and incorporate reinforcement learning to compose goal-oriented motions. In this paper, we aim to create characters performing combat sports such as boxing, using a novel reward design in conjunction with existing GAN-based approaches. We experimentally demonstrate that both the Adversarial Motion Prior [3] and Adversarial Skill Embeddings [4] methods can generate viable motions for a character punching a random target, even in the absence of mocap data that specifically captures the transition between punching and locomotion. Moreover, with a single learned policy, multiple task controllers can be constructed through the TimeChamber framework.

Multi-Object Goal Visual Navigation Based on Multimodal Context Fusion (멀티모달 맥락정보 융합에 기초한 다중 물체 목표 시각적 탐색 이동)

  • Jeong Hyun Choi;In Cheol Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.9 / pp.407-418 / 2023
  • Multi-Object Goal Visual Navigation (MultiOn) is a visual navigation task in which an agent must visit multiple object goals in an unknown indoor environment in a given order. Existing models for the MultiOn task suffer from the limitation that they cannot exploit an integrated view of multimodal context because they use only a unimodal context map. To overcome this limitation, we propose a novel deep neural network-based agent model for the MultiOn task. The proposed model, MCFMO, uses a multimodal context map containing visual appearance features, semantic features of environmental objects, and goal object features. It effectively fuses these three heterogeneous features into a global multimodal context map using a point-wise convolutional neural network module. Lastly, it adopts an auxiliary task learning module that predicts the observation status, goal direction, and goal distance, which guides the agent to learn the navigation policy efficiently. Through various quantitative and qualitative experiments using the Habitat-Matterport3D simulation environment and scene dataset, we demonstrate the superiority of the proposed model.
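The point-wise convolution fusion step can be illustrated as follows (the channel sizes are invented for illustration; MCFMO's actual architecture is not reproduced). A 1x1 convolution mixes channels at each map cell independently, so it reduces to one matrix multiply per cell:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three heterogeneous context maps over the same H x W spatial grid.
H, W = 6, 6
visual   = rng.normal(size=(H, W, 16))   # visual appearance features
semantic = rng.normal(size=(H, W, 8))    # semantic object features
goal     = rng.normal(size=(H, W, 4))    # goal object features

# Concatenate along the channel axis, then apply a point-wise (1x1)
# convolution: a shared linear map over channels at every cell.
stacked = np.concatenate([visual, semantic, goal], axis=-1)  # (H, W, 28)

C_OUT = 32
kernel = rng.normal(size=(stacked.shape[-1], C_OUT)) * 0.1   # 1x1 weights
bias = np.zeros(C_OUT)

fused = stacked @ kernel + bias          # (H, W, 32) multimodal context map
fused = np.maximum(fused, 0.0)           # ReLU nonlinearity
print(fused.shape)
```

Because the kernel has no spatial extent, the fusion is purely per-cell channel mixing, which keeps the cost linear in the number of map cells.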