• Title/Summary/Keyword: Policy Optimization

Multi-Agent Deep Reinforcement Learning for Fighting Game: A Comparative Study of PPO and A2C

  • Yoshua Kaleb Purwanto;Dae-Ki Kang
    • International Journal of Internet, Broadcasting and Communication / v.16 no.3 / pp.192-198 / 2024
  • This paper investigates the application of multi-agent deep reinforcement learning to the fighting game Samurai Shodown using the Proximal Policy Optimization (PPO) and Advantage Actor-Critic (A2C) algorithms. Initially, agents are trained separately for 200,000 timesteps using a Convolutional Neural Network (CNN) and a Multi-Layer Perceptron (MLP) with LSTM networks. PPO demonstrates superior performance early on with stable policy updates, while A2C shows better adaptation and higher rewards over extended training, ultimately outperforming PPO after 1,000,000 timesteps. These findings highlight PPO's effectiveness for short-term training and A2C's advantages in long-term learning scenarios, emphasizing the importance of selecting an algorithm based on training duration and task complexity. The code is available at https://github.com/Lexer04/Samurai-Shodown-with-Reinforcement-Learning-PPO.
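
As a rough illustration of the comparison described in this abstract, the sketch below trains PPO and A2C side by side with Stable-Baselines3. The fighting-game environment is not reproduced: a standard Gymnasium task stands in for it, and the policy class, timestep budget, and hyperparameters are assumptions rather than the authors' settings (their code is at the GitHub link above).

```python
# Minimal sketch, not the paper's setup: train PPO and A2C on the same
# environment and compare. "CartPole-v1" is a stand-in for the emulated
# fighting game; with pixel observations one would use "CnnPolicy" (or
# RecurrentPPO from sb3-contrib for the LSTM variant mentioned above).
import gymnasium as gym
from stable_baselines3 import A2C, PPO

def train(algo_cls, env_id="CartPole-v1", timesteps=200_000):
    env = gym.make(env_id)
    model = algo_cls("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=timesteps)
    return model

ppo_model = train(PPO)  # clipped surrogate objective: stable early updates
a2c_model = train(A2C)  # simpler on-policy updates: may catch up over longer training
```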

Optimization of Air Quality Monitoring Networks in Busan Using a GIS-based Decision Support System (GIS기반 의사결정지원시스템을 이용한 부산 대기질 측정망의 최적화)

  • Yoo, Eun-Chul;Park, Ok-Hyun
    • Journal of Korean Society for Atmospheric Environment / v.23 no.5 / pp.526-538 / 2007
  • Since air quality monitoring data sets are an important basis for developing air quality management strategies, including policy making and policy performance assessment, environmental protection authorities need to organize and operate monitoring networks properly. The air quality monitoring network of Busan, consisting of 18 stations, was laid out without scientific or rational principles. The current network was therefore reassessed for its effectiveness and appropriateness with respect to monitoring objectives such as population protection and source surveillance. In the course of the reassessment, a GIS-based decision support system was constructed and used to simulate air quality over complex terrain and to conduct multi-objective optimization analysis of the monitoring network. Maximizing the protection capability for the population appears to be the most effective and principal objective among those considered. Relocating current monitoring stations through multi-objective optimization analysis appears to be better than building a network solely to maximize population protection capability. The decision support system developed in this study on the basis of a GIS-based database appears to be useful for environmental protection authorities in planning and managing air quality monitoring networks over complex terrain.
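
The abstract does not spell out the paper's multi-objective formulation, so the following is only a minimal sketch of one ingredient: greedy maximum-coverage siting of stations to protect population. The grid cells, populations, and coverage radius are invented stand-ins for the paper's GIS data.

```python
# Illustrative only: greedy placement of k monitoring stations so as to
# maximize the population lying within `radius` of some station. The paper's
# GIS-based system handles multiple objectives (population protection plus
# source surveillance) over complex terrain; this covers just the first.
from math import hypot

def greedy_siting(candidates, cells, radius, k):
    """candidates: [(x, y)] possible sites; cells: [((x, y), population)]."""
    covered, chosen = set(), []
    for _ in range(k):
        best_site, best_gain = None, -1.0
        for site in candidates:
            gain = sum(pop for c, pop in cells
                       if c not in covered
                       and hypot(c[0] - site[0], c[1] - site[1]) <= radius)
            if gain > best_gain:
                best_site, best_gain = site, gain
        chosen.append(best_site)
        covered |= {c for c, _ in cells
                    if hypot(c[0] - best_site[0], c[1] - best_site[1]) <= radius}
    return chosen
```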

Word Processor font optimization in Fixed-function cell Using a Genetic Algorithm (유전자 알고리즘을 이용한 고정 셀에서 글자 폰트(font) 최적화)

  • Kim, Sang-Won;Kim, Seung-Hee;Kim, Woo-Je
    • Journal of the Korea Society of Computer and Information / v.18 no.10 / pp.163-172 / 2013
  • This study explores a method of displaying optimally sized text that fits within a table cell using a genetic algorithm. As a result, optimized renderings of texts of different lengths are obtained through optimal values of font size, line spacing, and letter spacing, computed from the width and height of the cell and the number of characters to be entered. This study is significant in that it offers a genetic-algorithm solution to the text-fitting problem in fixed cells that arises in various word processors in current use.
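
To make the idea concrete, here is a minimal sketch of the genetic-algorithm structure the abstract describes: evolving (font size, line spacing, letter spacing) so that a text of a given length fits a fixed cell. The crude width/height model in the fitness function is an assumption, not the paper's typographic model.

```python
# Sketch only: the fitness rewards large type but penalizes any overflow of
# the cell; the rendering model (width ~ 0.6 * size per character) is a
# rough placeholder for real font metrics.
import random

def fitness(genes, n_chars, cell_w, cell_h):
    size, line_sp, letter_sp = genes
    chars_per_line = max(1, int(cell_w // (size * 0.6 + letter_sp)))
    lines = -(-n_chars // chars_per_line)        # ceiling division
    overflow = max(0.0, lines * (size + line_sp) - cell_h)
    return size - 10.0 * overflow                # prefer large text that fits

def ga(n_chars, cell_w, cell_h, pop_size=30, generations=100):
    rand_genes = lambda: [random.uniform(6, 40), random.uniform(0, 10),
                          random.uniform(0, 5)]
    population = [rand_genes() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda g: -fitness(g, n_chars, cell_w, cell_h))
        parents = population[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]       # blend crossover
            child[random.randrange(3)] += random.gauss(0, 1)  # mutation
            children.append(child)
        population = parents + children
    return max(population, key=lambda g: fitness(g, n_chars, cell_w, cell_h))

print(ga(n_chars=200, cell_w=300, cell_h=80))
```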

Improvements of pursuit performance using episodic parameter optimization in probabilistic games (에피소드 매개변수 최적화를 이용한 확률게임에서의 추적정책 성능 향상)

  • Kwak, Dong-Jun;Kim, H.-Jin
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.40 no.3 / pp.215-221 / 2012
  • In this paper, we introduce an optimization method to improve the pursuit performance of a pursuer in a pursuit-evasion game (PEG). Pursuers build a probability map and employ a hybrid pursuit policy, which combines the merits of local-max and global-max pursuit policies, to search for and capture evaders as quickly as possible in a 2-dimensional space. We propose an episodic parameter optimization (EPO) algorithm to learn good values for the weighting parameters of the hybrid pursuit policy. The EPO algorithm runs many episodes of the PEG repeatedly, accumulating each episode's reward through reinforcement learning, and selects the candidate weighting parameter that maximizes the total averaged reward using the golden section search method. We found the best pursuit policy in various situations, with different numbers of evaders and different sizes of space, and analyzed the results.
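
The inner step of EPO, as described here, is one-dimensional: score each candidate weighting parameter by the reward averaged over repeated episodes, and narrow the search with the golden section method. The sketch below shows that loop; `run_episode` is a stub standing in for the pursuit-evasion simulation.

```python
# Golden-section search over the hybrid policy's weight w in [0, 1],
# maximizing episode reward averaged over many runs. The simulated reward
# (peaking near w = 0.7) is a placeholder, not the paper's PEG dynamics.
import random

def run_episode(w):
    return -(w - 0.7) ** 2 + random.gauss(0, 0.01)   # stub episode reward

def avg_reward(w, episodes=50):
    return sum(run_episode(w) for _ in range(episodes)) / episodes

def golden_section_max(f, a=0.0, b=1.0, tol=1e-3):
    phi = (5 ** 0.5 - 1) / 2                         # ~0.618
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) > f(d):                              # maximum lies in [a, d]
            b, d = d, c
            c = b - phi * (b - a)
        else:                                        # maximum lies in [c, b]
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

print(golden_section_max(avg_reward))
```

Since the averaged reward is noisy, each candidate must be evaluated over enough episodes for the comparison at the two interior points to be meaningful; the abstract's "total averaged reward" plays that role.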

Some Recent Results of Approximation Algorithms for Markov Games and their Applications

  • 장형수
    • Proceedings of the Korean Society of Computational and Applied Mathematics Conference / 2003.09a / pp.15-15 / 2003
  • We provide some recent results on approximation algorithms for solving Markov games and discuss their applications to problems that arise in Computer Science. We consider a receding horizon approach as an approximate solution to two-person zero-sum Markov games with an infinite-horizon discounted cost criterion. We present error bounds from the optimal equilibrium value of the game when both players take “correlated” receding horizon policies that are based on exact or approximate solutions of receding finite-horizon subgames. Motivated by the worst-case optimal control of queueing systems by Altman, we then analyze error bounds when the minimizer plays the (approximate) receding horizon control and the maximizer plays the worst-case policy. We give two heuristic examples of the approximate receding horizon control. We extend “parallel rollout” and “hindsight optimization” into the Markov game setting within the framework of the approximate receding horizon approach and analyze their performance. In the parallel rollout approach, the minimizing player seeks to dynamically combine multiple heuristic policies in a set to improve the performance of all of the heuristic policies simultaneously, under the guess that the maximizing player has chosen a fixed worst-case policy. Given $\varepsilon > 0$, we give the value of the receding horizon which guarantees that the parallel rollout policy with that horizon played by the minimizer “dominates” any heuristic policy in the set by $\varepsilon$. In the hindsight optimization approach, the minimizing player makes a decision based on his expected optimal hindsight performance over a finite horizon. We finally discuss practical implementations of the receding horizon approaches via simulation and applications.
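
The setting here is the standard two-person zero-sum discounted Markov game. In the usual notation (ours, not necessarily the talk's), the equilibrium value satisfies Shapley's equation, and a receding horizon policy replaces the fixed point with the value of an H-step subgame:

```latex
% Standard zero-sum discounted Markov game (notation assumed): the minimizer
% mixes over actions A, the maximizer responds from B, with stage cost c,
% transition kernel P, and discount factor \gamma \in (0, 1).
\[
  V^*(s) \;=\; \min_{\pi \in \Delta(A)} \, \max_{b \in B} \,
  \sum_{a \in A} \pi(a)
  \Bigl[\, c(s, a, b) \;+\; \gamma \sum_{s'} P(s' \mid s, a, b)\, V^*(s') \Bigr].
\]
% The talk's error bounds quantify how far an H-step truncation of this
% fixed point can fall from the equilibrium value of the infinite-horizon game.
```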

Design of track path-finding simulation using Unity ML Agents

  • In-Chul Han;Jin-Woong Kim;Soo Kyun Kim
    • Journal of the Korea Society of Computer and Information / v.29 no.2 / pp.61-66 / 2024
  • This paper designs a simulation for path-finding of objects in a simulation or game environment using reinforcement learning techniques. The main feature of this study is that objects in the simulation are trained to avoid obstacles generated at random locations on a given track and to automatically explore paths to collect items. To implement the simulation, the ML Agents toolkit provided by the Unity game engine was used, and a learning policy based on PPO (Proximal Policy Optimization) was established to form the reinforcement learning environment. By analyzing the simulation results and the learning curves, we confirmed that, as it learns, the object moves along the track, avoiding obstacles and exploring paths to acquire items.
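
The paper's training runs inside Unity via ML Agents, which is not reproduced here. As a language-neutral stand-in, the sketch below defines a minimal Gymnasium-style track environment with the reward shape the abstract implies: steady progress, a penalty for hitting a randomly placed obstacle, and a bonus for collecting an item. Lane counts, rewards, and spacings are invented.

```python
# Toy stand-in for the Unity track task: a 3-lane track of fixed length with
# randomly placed obstacles and items. Not the paper's environment.
import gymnasium as gym
import numpy as np
from gymnasium import spaces

class TrackEnv(gym.Env):
    def __init__(self, length=20):
        super().__init__()
        self.length = length
        self.action_space = spaces.Discrete(3)   # steer left / straight / right
        self.observation_space = spaces.Box(-2.0, 2.0, shape=(4,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.pos, self.lane = 0, 1
        self.obstacles = {i: int(self.np_random.integers(0, 3))
                          for i in range(2, self.length, 3)}
        self.items = {i: int(self.np_random.integers(0, 3))
                      for i in range(3, self.length, 4)}
        return self._obs(), {}

    def _obs(self):
        nxt_obs = self.obstacles.get(self.pos + 1, -1)   # lane of next obstacle
        nxt_itm = self.items.get(self.pos + 1, -1)       # lane of next item
        return np.array([self.pos / self.length, self.lane - 1,
                         nxt_obs - 1, nxt_itm - 1], dtype=np.float32)

    def step(self, action):
        self.lane = int(np.clip(self.lane + action - 1, 0, 2))
        self.pos += 1
        reward = 0.1                                     # progress along track
        if self.obstacles.get(self.pos) == self.lane:
            reward -= 1.0                                # hit an obstacle
        if self.items.get(self.pos) == self.lane:
            del self.items[self.pos]
            reward += 1.0                                # collected an item
        return self._obs(), reward, self.pos >= self.length, False, {}
```

Training would then proceed as in the PPO sketch under the first result above, e.g. `PPO("MlpPolicy", TrackEnv(), verbose=0).learn(100_000)`.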

Proxy-based Caching Optimization for Mobile Ad Hoc Streaming Services (모바일 애드 혹 스트리밍 서비스를 위한 프록시 기반 캐싱 최적화)

  • Lee, Chong-Deuk
    • Journal of Digital Convergence / v.10 no.4 / pp.207-215 / 2012
  • This paper proposes a proxy-based caching optimization scheme for improving streaming media services in wireless mobile ad hoc networks. The proposed scheme uses a proxy, located near the wireless access point, for data packet transmission between the media server and nodes in WLANs. For caching optimization, this paper proposes the NFCO (non-full cache optimization) and CFO (cache full optimization) schemes, which optimize caching performance when streaming is performed at the proxy. The performance of the proposed scheme was compared with that of a server-only scheme and a rate-distortion scheme. Simulation results show that the proposed scheme outperforms both existing schemes.
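
The abstract names NFCO and CFO without defining them, so no faithful implementation is possible from this text alone. The sketch below is only a generic LRU proxy cache, marking where a non-full path (admit directly) and a cache-full path (evict before admitting) diverge; segment sizes and the eviction rule are assumptions.

```python
# Generic proxy cache sketch (not the paper's NFCO/CFO): LRU eviction over
# variable-size media segments fetched from the origin media server.
from collections import OrderedDict

class ProxyCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()          # segment_id -> size, LRU order
        self.used = 0

    def fetch(self, seg_id, size, origin_fetch):
        if seg_id in self.store:            # hit: serve from the proxy
            self.store.move_to_end(seg_id)
            return "hit"
        origin_fetch(seg_id)                # miss: pull from the media server
        while self.store and self.used + size > self.capacity:
            _, old_size = self.store.popitem(last=False)   # cache-full path
            self.used -= old_size
        if self.used + size <= self.capacity:              # non-full path
            self.store[seg_id] = size
            self.used += size
        return "miss"
```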

POISSON ARRIVAL QUEUE WITH ALTERNATING SERVICE RATES

  • Kim, Jongwoo;Lee, Eui Yong;Lee, Ho Woo
    • Journal of the Korean Statistical Society / v.34 no.1 / pp.39-47 / 2005
  • We adopt the $P^M_{\lambda,T}$ policy of dam theory to introduce a service policy with alternating service rates for a Poisson arrival queue, in which the service rate alternates depending on the number of customers in the system. The stationary distribution of the number of customers in the system is derived and, after operating costs are assigned to the system, the optimization of the policy is studied.
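
As a quick numerical companion to this kind of model, the sketch below simulates a Poisson-arrival queue whose service rate switches between a slow and a fast rate at an assumed queue-length threshold K; the paper itself derives the stationary distribution analytically, and its $P^M_{\lambda,T}$ switching rule is more specific than this threshold stand-in.

```python
# Simulation sketch: M/M/1-style queue where the server works at mu_slow
# below K customers and mu_fast at K or more. Rates and K are assumptions.
import random

def simulate(lam=0.8, mu_slow=0.9, mu_fast=1.5, K=5, horizon=100_000.0):
    t, n, area = 0.0, 0, 0.0          # clock, customers present, integral of n dt
    while t < horizon:
        mu = mu_fast if n >= K else mu_slow
        rate = lam + (mu if n > 0 else 0.0)
        dt = random.expovariate(rate) # time to next event (competing exponentials)
        area += n * dt
        t += dt
        if n == 0 or random.random() < lam / rate:
            n += 1                    # arrival
        else:
            n -= 1                    # service completion
    return area / t                   # time-average number in system

print(simulate())
```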

Multiresponse Optimization: A Literature Review and Research Opportunities (다중반응표면최적화 : 현황 및 향후 연구방향)

  • Jeong, In-Jun
    • Journal of Korean Society for Quality Management / v.39 no.3 / pp.377-390 / 2011
  • A common problem encountered in product or process design is the selection of optimal parameter levels, which involves the simultaneous consideration of multiple response variables. This is called a multiresponse problem. A multiresponse problem is solved through three major stages: data collection, model building, and optimization. To date, various methods have been proposed for the optimization stage, including the desirability function approach and the loss function approach. In this paper, the existing studies in multiresponse optimization are reviewed and a future research direction is proposed.
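
Of the approaches named above, the desirability function approach is easy to sketch: each response is mapped to a desirability in [0, 1] and their geometric mean is maximized over the parameter space. The quadratic response models and target ranges below are invented placeholders, not from any surveyed paper.

```python
# Desirability-function sketch: two assumed response-surface models, each
# mapped to a target-is-best desirability, combined by geometric mean and
# maximized with a derivative-free optimizer.
import numpy as np
from scipy.optimize import minimize

def desirability(y, lo, target, hi):
    if y <= lo or y >= hi:
        return 0.0
    return (y - lo) / (target - lo) if y <= target else (hi - y) / (hi - target)

def responses(x):                        # invented fitted models y1(x), y2(x)
    y1 = 10 - (x[0] - 1) ** 2 - 0.5 * (x[1] + 0.5) ** 2
    y2 = 5 + x[0] * x[1]
    return y1, y2

def neg_overall(x):
    y1, y2 = responses(x)
    d1 = desirability(y1, lo=6, target=10, hi=12)
    d2 = desirability(y2, lo=3, target=5, hi=8)
    return -np.sqrt(d1 * d2)             # geometric mean of the two

result = minimize(neg_overall, x0=[0.0, 0.0], method="Nelder-Mead")
print(result.x, -result.fun)
```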

A Quantitative Model for a Supply Chain Design

  • Cho, Geon;Ryu, Il;Lee, Kyoung-Jae;Park, Yi-Sook;Jung, Kyung-Ho;Kim, Do-Goan
    • Proceedings of the Korea Society of Information Technology Applications Conference / 2005.11a / pp.311-314 / 2005
  • Supply chain optimization is one of the most important components in the optimization of a company's value chain. This paper considers the problem of designing the supply chain for a product that is represented as an assembly bill of materials (BOM). In this problem we are required to identify the locations at which the different components of the product are produced/assembled. The objective is to minimize the overall cost, which comprises production, inventory holding, and transportation costs. We assume that production locations are known and that the inventory policy is a base-stock policy. We first formulate the problem as a 0-1 nonlinear integer programming model and then show that it can be reformulated as a 0-1 linear integer programming model with an exponential number of decision variables.
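
To show the flavor of the 0-1 formulation, here is a toy assignment model: each BOM component is produced at exactly one location, minimizing production plus transportation cost. The paper's base-stock inventory terms and its exponential-size linear reformulation are not reproduced; all data below are made up.

```python
# Toy 0-1 model of the assignment core (not the paper's full formulation),
# using the PuLP modeling library with its bundled CBC solver.
from pulp import (LpBinary, LpMinimize, LpProblem, LpVariable,
                  PULP_CBC_CMD, lpSum)

components = ["frame", "motor", "board"]          # assembly BOM leaves
locations = ["plantA", "plantB"]
prod_cost = {("frame", "plantA"): 4, ("frame", "plantB"): 5,
             ("motor", "plantA"): 7, ("motor", "plantB"): 6,
             ("board", "plantA"): 3, ("board", "plantB"): 2}
ship_cost = {"plantA": 1, "plantB": 2}            # to the final assembly site

prob = LpProblem("supply_chain_design", LpMinimize)
x = {(c, l): LpVariable(f"x_{c}_{l}", cat=LpBinary)
     for c in components for l in locations}
prob += lpSum((prod_cost[c, l] + ship_cost[l]) * x[c, l]
              for c in components for l in locations)
for c in components:                              # one site per component
    prob += lpSum(x[c, l] for l in locations) == 1

prob.solve(PULP_CBC_CMD(msg=False))
print({k: int(v.value()) for k, v in x.items()})
```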
