• Title/Summary/Keyword: Actor-Critic


Graph Neural Network and Reinforcement Learning based Optimal VNE Method in 5G and B5G Networks (5G 및 B5G 네트워크에서 그래프 신경망 및 강화학습 기반 최적의 VNE 기법)

  • Seok-Woo Park; Kang-Hyun Moon; Kyung-Taek Chung; In-Ho Ra
    • Smart Media Journal, v.12 no.11, pp.113-124, 2023
  • With the advent of 5G and B5G (Beyond 5G) networks, network virtualization technology that can overcome the limitations of existing networks is attracting attention. The purpose of network virtualization is to provide solutions for efficient network resource utilization and for various services. Existing heuristic-based VNE (Virtual Network Embedding) techniques have been studied, but their flexibility is limited. Therefore, in this paper, we propose a GNN-based network slicing classification scheme to meet various service requirements and an RL-based VNE scheme for optimal resource allocation. The proposed method performs optimal VNE using an Actor-Critic network. Finally, to evaluate the performance of the proposed technique, we compare it with the Node Rank, MCST-VNE, and GCN-VNE techniques. The performance analysis shows that the GNN- and RL-based VNE technique outperforms the existing techniques in terms of acceptance rate and resource efficiency.
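The Actor-Critic structure this and the following papers build on shares one basic update. Below is a minimal, hypothetical sketch of a single advantage actor-critic step with linear function approximation; the state/action dimensions and learning rates are illustrative and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 4, 3
theta = np.zeros((n_states, n_actions))  # actor: softmax policy parameters
w = np.zeros(n_states)                   # critic: linear state-value weights
alpha_actor, alpha_critic, gamma = 0.01, 0.1, 0.99

def softmax_policy(s):
    prefs = s @ theta
    p = np.exp(prefs - prefs.max())
    return p / p.sum()

def actor_critic_step(s, a, r, s_next, done):
    """One advantage actor-critic update on transition (s, a, r, s_next)."""
    global theta, w
    v, v_next = s @ w, 0.0 if done else s_next @ w
    td_error = r + gamma * v_next - v           # advantage estimate
    w += alpha_critic * td_error * s            # critic: TD(0) update
    p = softmax_policy(s)
    grad_log = -np.outer(s, p)                  # d log pi(a|s) / d theta
    grad_log[:, a] += s
    theta += alpha_actor * td_error * grad_log  # actor: policy gradient step

# toy transition just to exercise the update
s = rng.normal(size=n_states)
actor_critic_step(s, a=1, r=1.0, s_next=rng.normal(size=n_states), done=False)
```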

Robot Control via RPO-based Reinforcement Learning Algorithm (RPO 기반 강화학습 알고리즘을 이용한 로봇제어)

  • Kim, Jong-Ho; Kang, Dae-Sung; Park, Joo-Young
    • Journal of the Korean Institute of Intelligent Systems, v.15 no.4, pp.505-510, 2005
  • The RPO (randomized policy optimizer) algorithm, which uses a probabilistic policy for action selection, is a recently developed tool in the area of reinforcement learning and has been very successful in several application problems. In this paper, we propose a modified RPO algorithm whose critic network is adapted via the RLS (Recursive Least Squares) algorithm. To illustrate the applicability of the modified RPO method, we applied it to Kimura's robot and observed very good performance. We also developed a MATLAB-based animation program, with which the effect of the training algorithms on the acceleration of the robot's movement was observed.
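The paper's modification is adapting the critic with recursive least squares instead of gradient descent. Below is a minimal sketch of the standard RLS update for a linear critic; the feature dimension, forgetting factor, and target construction are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class RLSCritic:
    """Linear critic V(s) = w . phi(s), adapted by recursive least squares."""
    def __init__(self, n_features, lam=0.99, delta=1.0):
        self.w = np.zeros(n_features)
        self.P = np.eye(n_features) / delta  # inverse correlation matrix
        self.lam = lam                       # forgetting factor

    def update(self, phi, target):
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)   # gain vector
        err = target - self.w @ phi          # error against the TD-style target
        self.w += k * err
        self.P = (self.P - np.outer(k, Pphi)) / self.lam

critic = RLSCritic(n_features=5)
phi = np.ones(5)
critic.update(phi, target=1.0)  # e.g. target = r + gamma * V(s')
print(critic.w)
```

Compared with a gradient critic, each RLS step solves the exponentially weighted least-squares fit exactly, which is why it tends to adapt faster on the same data.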

Model-free $H_{\infty}$ Control of Linear Discrete-time Systems using Q-learning and LMI Based on I/O Data (입출력 데이터 기반 Q-학습과 LMI를 이용한 선형 이산 시간 시스템의 모델-프리 $H_{\infty}$ 제어기 설계)

  • Kim, Jin-Hoon; Lewis, F.L.
    • The Transactions of The Korean Institute of Electrical Engineers, v.58 no.7, pp.1411-1417, 2009
  • In this paper, we consider the design of $H_{\infty}$ control for linear discrete-time systems having no mathematical model. The basic approach is to use Q-learning, a reinforcement learning method based on the actor-critic structure. The model-free control design uses not a mathematical model of the system but information on its states and inputs. As a result, the derived iterative algorithm is expressed as linear matrix inequalities (LMIs) over data measured from the system's states and inputs. It is shown that, for a sufficiently rich disturbance, this algorithm converges to the standard $H_{\infty}$ control solution obtained using the exact system model. A simple numerical example shows the usefulness of the result in practical applications.
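The core of the model-free idea is identifying a quadratic Q-function kernel directly from measured data rather than from system matrices. The sketch below illustrates only that least-squares identification step under invented dimensions and a synthetic target; the paper's full LMI-based $H_{\infty}$ iteration is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 2, 1                        # state and input dimensions (illustrative)
nz = n + m

def quad_basis(z):
    """Basis such that z' H z = quad_basis(z) . vech(H) for symmetric H."""
    idx = np.triu_indices(nz)
    zz = np.outer(z, z)
    scale = np.where(idx[0] == idx[1], 1.0, 2.0)  # off-diagonal terms appear twice
    return scale * zz[idx]

# synthetic data: samples of z = [x; u] and corresponding Q-targets
H_true = np.array([[2.0, 0.5, 0.2],
                   [0.5, 1.5, 0.1],
                   [0.2, 0.1, 1.0]])
Z = rng.normal(size=(50, nz))
targets = np.einsum('bi,ij,bj->b', Z, H_true, Z)  # stand-in for r + gamma*Q(next)

# least-squares recovery of the Q-function kernel from data alone
Phi = np.vstack([quad_basis(z) for z in Z])
vech_H, *_ = np.linalg.lstsq(Phi, targets, rcond=None)
H = np.zeros((nz, nz))
H[np.triu_indices(nz)] = vech_H
H = (H + H.T) - np.diag(np.diag(H))
print(np.round(H, 3))  # recovers H_true without knowing the system model
```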

Robot Skill Learning Strategy for Contact Task (접촉 작업을 위한 로봇의 스킬 학습 전략)

  • Kim, Byung-Chan; Kang, Byung-Duk; Park, Shin-Suk; Kang, Sung-Chul
    • The Journal of Korea Robotics Society, v.3 no.2, pp.146-153, 2008
  • In this paper, we present a new motion learning strategy for robotic contact tasks, based on human motor control theory and machine learning. The strategy of this study for successful contact task execution is to find, through reinforcement learning, the impedance parameters that yield optimal task performance. To determine the optimal impedance parameters, a Recursive Least-Square (RLS) filter based episodic Natural Actor-Critic algorithm was applied. To demonstrate the effectiveness of the proposed strategy, we performed dynamic simulations, whose results show task optimization in contact tasks and adaptability to the uncertainty of the environment.
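The outer loop of such a strategy is an episodic search over impedance gains. The sketch below is a simplified stand-in: a Gaussian policy over two impedance parameters updated by a plain episodic policy gradient with a baseline, where the cost function and parameter ranges are hypothetical placeholders and the paper's RLS-filtered natural actor-critic is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def rollout_cost(stiffness, damping):
    """Hypothetical stand-in for the contact-task simulation: the learner
    does not know that impedance near (80, 12) performs best."""
    return (stiffness - 80.0) ** 2 / 1e3 + (damping - 12.0) ** 2 / 1e1

mu = np.array([50.0, 5.0])      # mean impedance parameters [stiffness, damping]
sigma = np.array([10.0, 2.0])   # fixed exploration noise
baseline, lr = 0.0, 0.5

for episode in range(3000):
    params = mu + sigma * rng.normal(size=2)   # sample impedance for this episode
    reward = -rollout_cost(*params)            # one simulated rollout
    baseline += 0.05 * (reward - baseline)     # running-average baseline
    # Gaussian policy-gradient step; for fixed sigma, dividing by sigma**2
    # is also the natural-gradient scaling of the mean update
    grad_mu = (params - mu) / sigma ** 2
    mu += lr * (reward - baseline) * grad_mu

print(np.round(mu, 1))  # drifts toward the low-cost impedance region
```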

Experimental Analysis of A3C and PPO in the OpenAI Gym Environment (OpenAI Gym 환경에서 A3C와 PPO의 실험적 분석)

  • Hwang, Gyu-Young; Lim, Hyun-Kyo; Heo, Joo-Seong; Han, Youn-Hee
    • Proceedings of the Korea Information Processing Society Conference, 2019.05a, pp.545-547, 2019
  • Policy-gradient learning is a heavily studied topic in recent reinforcement learning research. This paper presents a comparative analysis of the learning performance of two policy-gradient methods, the Asynchronous Advantage Actor-Critic (A3C) algorithm and the Proximal Policy Optimization (PPO) algorithm, in the 'CartPole-v0' and 'Pendulum-v0' environments of OpenAI Gym. Conditions that the two algorithms can share, such as the deep learning model, were kept as identical as possible, and we measured how the score changes as episodes progress. The experiments confirm that PPO outperforms A3C in both of these distinct environments.
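The key mechanical difference between the two algorithms compared above is PPO's clipped surrogate objective, which bounds how far one update can move the policy. A minimal, framework-free sketch of that loss follows; the clip ratio 0.2 is the common default, not a value reported in this paper.

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """PPO clipped surrogate loss (to be minimized) for one batch."""
    ratio = np.exp(logp_new - logp_old)          # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -np.mean(np.minimum(unclipped, clipped))

# toy batch: once the probability ratio leaves [0.8, 1.2], the clipped
# term caps the incentive to push the policy further in that direction
logp_old = np.log(np.array([0.5, 0.3, 0.2]))
logp_new = np.log(np.array([0.7, 0.2, 0.1]))
adv = np.array([1.0, -0.5, 0.3])
print(ppo_clip_loss(logp_new, logp_old, adv))
```

A3C instead relies on many asynchronous workers and small steps for stability, which is one plausible reason PPO's bounded updates fared better in these experiments.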

On the Reward Function of Latent SAC Reinforcement Learning to Improve Longitudinal Driving Performance (종방향 주행성능향상을 위한 Latent SAC 강화학습 보상함수 설계)

  • Jo, Sung-Bean; Jeong, Han-You
    • Journal of IKEEE, v.25 no.4, pp.728-734, 2021
  • In recent years, there has been strong interest in end-to-end autonomous driving based on deep reinforcement learning. In this paper, we present a reward function for latent SAC deep reinforcement learning that improves the longitudinal driving performance of an agent vehicle. Whereas the existing reward function significantly degrades driving safety and efficiency, the proposed reward function is shown to maintain an appropriate headway distance while avoiding collisions with the front vehicle.
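As an illustration of the kind of reward shaping described, the following hypothetical longitudinal reward favors a target time headway and heavily penalizes closing on the front vehicle; the target headway and penalty weights are invented for the sketch, not the paper's actual coefficients.

```python
def longitudinal_reward(gap_m, ego_speed_mps, closing_speed_mps,
                        target_headway_s=2.0, collision_gap_m=2.0):
    """Hypothetical reward: keep time headway near target, avoid collision."""
    if gap_m <= collision_gap_m:
        return -100.0                               # (near-)collision: large penalty
    headway_s = gap_m / max(ego_speed_mps, 0.1)     # time headway to front vehicle
    headway_term = -abs(headway_s - target_headway_s)
    closing_term = -0.5 * max(closing_speed_mps, 0.0)  # penalize fast approach
    return headway_term + closing_term

print(longitudinal_reward(gap_m=40.0, ego_speed_mps=20.0, closing_speed_mps=1.5))
```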

Blockchain Based Financial Portfolio Management Using A3C (A3C를 활용한 블록체인 기반 금융 자산 포트폴리오 관리)

  • Kim, Ju-Bong; Heo, Joo-Seong; Lim, Hyun-Kyo; Kwon, Do-Hyung; Han, Youn-Hee
    • KIPS Transactions on Computer and Communication Systems, v.8 no.1, pp.17-28, 2019
  • In financial investment management, the strategy of selecting and combining various financial assets for distributed investment is called portfolio management theory. In recent years, blockchain-based financial assets such as cryptocurrencies have been traded on several well-known exchanges, and an efficient portfolio management approach is required for investors to steadily raise their return on investment in cryptocurrencies. Meanwhile, deep learning has shown remarkable results in various fields, and research on applying deep reinforcement learning algorithms to portfolio management has begun. In this paper, we propose an efficient financial portfolio investment management method based on Asynchronous Advantage Actor-Critic (A3C), a representative asynchronous reinforcement learning algorithm. In addition, since the conventional cross-entropy function cannot be applied to portfolio management as-is, we propose a suitable modification of the cross-entropy loss that fits the portfolio investment method. Finally, we compare the proposed A3C model with an existing reinforcement learning based cryptocurrency portfolio investment algorithm and show that the proposed A3C model performs better.
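Cross-entropy presumes a single correct class, whereas portfolio weights are a continuous allocation on the simplex. One common replacement, sketched below as an assumption rather than the paper's exact modification, is to score a softmax-allocated portfolio directly by its log return net of transaction costs; the cost rate is an illustrative value.

```python
import numpy as np

def portfolio_objective(logits, price_relatives, prev_weights, cost_rate=0.0025):
    """Negative log-return of a softmax portfolio, minus transaction costs.
    Minimizing this plays the role cross-entropy plays in classification."""
    w = np.exp(logits - logits.max())
    w /= w.sum()                               # portfolio weights on the simplex
    gross = w @ price_relatives                # period growth factor
    turnover = np.abs(w - prev_weights).sum()  # rebalancing amount
    net = gross * (1.0 - cost_rate * turnover)
    return -np.log(net)

logits = np.array([0.2, -0.1, 0.05])           # e.g. scores for three assets
price_rel = np.array([1.03, 0.98, 1.00])       # next-period price relatives
print(portfolio_objective(logits, price_rel, prev_weights=np.ones(3) / 3))
```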

Performance Comparison of Reinforcement Learning Algorithms for Futures Scalping (해외선물 스캘핑을 위한 강화학습 알고리즘의 성능비교)

  • Jung, Deuk-Kyo; Lee, Se-Hun; Kang, Jae-Mo
    • The Journal of the Convergence on Culture Technology, v.8 no.5, pp.697-703, 2022
  • Due to the recent economic downturn caused by COVID-19 and the unstable international situation, many investors are choosing the derivatives market as a means of investment. However, the derivatives market carries greater risk than the stock market, and research serving its market participants is insufficient. Recently, with the development of artificial intelligence, machine learning has been widely used in the derivatives market. In this paper, reinforcement learning, one of the machine learning techniques, is applied to analyze scalping, which trades futures within minutes. The data set consists of 21 attributes built from the closing price, moving average, and Bollinger band indicators of 1-minute and 3-minute data over 6 months, for 4 products selected among the futures traded at a trading firm. In the experiments, a DNN artificial neural network model and three reinforcement learning algorithms, DQN (Deep Q-Network), A2C (Advantage Actor-Critic), and A3C (Asynchronous A2C), were trained and verified on the training and test data sets. For scalping, the agent chooses between the buy and sell actions and is rewarded with the ratio of the portfolio value resulting from the action. Experimental results show that energy sector products such as Heating Oil and Crude Oil yield relatively high cumulative returns compared to index sector products such as Mini Russell 2000 and Hang Seng Index.
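The reward described above, the ratio of portfolio value after the chosen action, can be computed as in this minimal sketch; the point value and fee are invented for illustration and do not correspond to any specific contract in the paper.

```python
def scalping_reward(action, entry_price, exit_price, portfolio_value,
                    contracts=1, point_value=10.0, fee=2.5):
    """Reward = ratio of new to old portfolio value after one scalp.
    action: +1 = buy (long), -1 = sell (short)."""
    pnl = action * (exit_price - entry_price) * point_value * contracts - fee
    new_value = portfolio_value + pnl
    return new_value / portfolio_value

# long one hypothetical contract; price moves 0.4 points in our favor
print(scalping_reward(action=+1, entry_price=100.0, exit_price=100.4,
                      portfolio_value=10_000.0))
```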

Time-varying Proportional Navigation Guidance using Deep Reinforcement Learning (심층 강화학습을 이용한 시변 비례 항법 유도 기법)

  • Chae, Hyeok-Joo; Lee, Daniel; Park, Su-Jeong; Choi, Han-Lim; Park, Han-Sol; An, Kyeong-Soo
    • Journal of the Korea Institute of Military Science and Technology, v.23 no.4, pp.399-406, 2020
  • In this paper, we propose a time-varying proportional navigation guidance law that determines the proportional navigation gain in real time according to the engagement situation. When intercepting a target, an unidentified evasion strategy causes a loss of optimality. To compensate for this, a proper proportional navigation gain is derived at every time step by solving an optimal control problem with the inferred evader's strategy. Recently, deep reinforcement learning algorithms have been introduced to handle complex optimal control problems efficiently. We adapt the actor-critic method to build a proportional navigation gain network, and the network is trained with the Proximal Policy Optimization (PPO) algorithm to learn the target's evasion strategy. Numerical experiments show the effectiveness and optimality of the proposed method.
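The guidance law itself remains proportional navigation; only the gain N becomes a per-step output of the trained network. A minimal sketch, with a stub standing in for the PPO-trained policy (the fixed fallback N = 3 and the state fields are illustrative assumptions):

```python
def gain_policy(state):
    """Stub for the PPO-trained actor: maps engagement state to a PN gain.
    A fixed N = 3 stands in here purely for illustration."""
    return 3.0

def pn_acceleration(closing_velocity, los_rate, state):
    """Time-varying proportional navigation: a_cmd = N(t) * Vc * LOS_rate."""
    N = gain_policy(state)  # gain chosen in real time by the network
    return N * closing_velocity * los_rate

state = {"range_m": 5_000.0, "los_rate": 0.02, "closing_velocity": 600.0}
a_cmd = pn_acceleration(state["closing_velocity"], state["los_rate"], state)
print(a_cmd)  # commanded lateral acceleration, m/s^2
```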

Reinforcement Learning based on Deep Deterministic Policy Gradient for Roll Control of Underwater Vehicle (수중운동체의 롤 제어를 위한 Deep Deterministic Policy Gradient 기반 강화학습)

  • Kim, Su Yong; Hwang, Yeon Geol; Moon, Sung Woong
    • Journal of the Korea Institute of Military Science and Technology, v.24 no.5, pp.558-568, 2021
  • Existing underwater vehicle controllers are designed by linearizing the nonlinear dynamics model around a specific motion regime. Since such a linear controller shows unstable control performance in transient states, various studies have been conducted to overcome this problem, and recently reinforcement learning has been used to improve control performance in the transient state. Reinforcement learning can be largely divided into value-based and policy-based reinforcement learning. In this paper, we propose a roll controller for an underwater vehicle based on the Deep Deterministic Policy Gradient (DDPG), which learns the control policy and can show stable control performance in various situations and environments. The performance of the proposed DDPG-based roll controller was verified through simulation and compared with existing roll controllers based on PID and on DQN with Normalized Advantage Functions.
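One ingredient that distinguishes DDPG from DQN-style methods is the soft target-network update, which slowly blends online weights into the target networks for stability. A minimal sketch follows; tau = 0.005 is a common default, not the paper's value, and the toy parameter lists stand in for actual network weights.

```python
import numpy as np

def soft_update(target_params, online_params, tau=0.005):
    """DDPG soft target update: theta_target <- tau*theta + (1-tau)*theta_target."""
    return [(1.0 - tau) * t + tau * o for t, o in zip(target_params, online_params)]

# toy parameter lists standing in for actor/critic network weights
target = [np.zeros((2, 2)), np.zeros(2)]
online = [np.ones((2, 2)), np.ones(2)]
for _ in range(100):
    target = soft_update(target, online)
print(np.round(target[1], 3))  # slowly tracks the online weights
```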