• Title/Summary/Keyword: Deep-Q-Network

Search Results: 63

Improved Deep Q-Network Algorithm Using Self-Imitation Learning (Self-Imitation Learning을 이용한 개선된 Deep Q-Network 알고리즘)

  • Sunwoo, Yung-Min;Lee, Won-Chang
    • Journal of IKEEE
    • /
    • v.25 no.4
    • /
    • pp.644-649
    • /
    • 2021
  • Self-Imitation Learning is a simple off-policy actor-critic algorithm that helps an agent find an optimal policy by exploiting past good experiences. When Self-Imitation Learning is combined with reinforcement learning algorithms that have an actor-critic architecture, it shows performance improvements in various game environments. However, its applications have been limited to algorithms with an actor-critic architecture. In this paper, we propose a method for applying Self-Imitation Learning to Deep Q-Network, a value-based deep reinforcement learning algorithm, and train it in various game environments. By comparing the proposed algorithm with ordinary Deep Q-Network training results, we show that Self-Imitation Learning can be applied to Deep Q-Network to improve its performance.
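The core idea above, updating only toward past returns that beat the current value estimate, can be sketched for a value-based agent as follows. This is a minimal illustration assuming a clipped-advantage squared loss on the Q-value of the taken action; the paper's exact formulation may differ.

```python
import numpy as np

def sil_dqn_loss(q_values, actions, returns):
    """Self-Imitation-style loss adapted to a value-based agent:
    update only toward stored returns that exceed the current Q estimate
    (negative advantages are clipped to zero, so bad experiences are ignored)."""
    q_taken = q_values[np.arange(len(actions)), actions]   # Q(s, a) of taken actions
    advantage = np.maximum(np.asarray(returns) - q_taken, 0.0)
    return float(np.mean(advantage ** 2))
```

Only transitions whose observed return exceeds the network's own estimate contribute gradient, which is what lets the agent imitate its past good episodes.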

Visual Analysis of Deep Q-network

  • Seng, Dewen;Zhang, Jiaming;Shi, Xiaoying
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.3
    • /
    • pp.853-873
    • /
    • 2021
  • In recent years, deep reinforcement learning (DRL) models have attracted great interest owing to their success in a variety of challenging tasks. Deep Q-Network (DQN) is a widely used deep reinforcement learning model that trains an intelligent agent to execute optimal actions while interacting with an environment, and it is well known for surpassing skilled human players across many Atari 2600 games. Although DQN has achieved excellent performance in practice, a clear understanding of why the model works is still lacking. In this paper, we present a visual analytics system for understanding Deep Q-Network in a non-blind manner. Based on data stored during the training and testing process, four coordinated views are designed to expose the internal execution mechanism of DQN from different perspectives. We report the system's performance and demonstrate its effectiveness through two case studies. Using our system, users can learn the relationship between states and Q-values, the function of the convolutional layers, the strategies learned by DQN, and the rationality of the decisions made by the agent.

Deep Q-Network based Game Agents (심층 큐 신경망을 이용한 게임 에이전트 구현)

  • Han, Dongki;Kim, Myeongseop;Kim, Jaeyoun;Kim, Jung-Su
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.3
    • /
    • pp.157-162
    • /
    • 2019
  • The video game Tetris is one of the most popular games, and it is well known that its rules can be modelled as an MDP (Markov Decision Process). This paper presents a DQN (Deep Q-Network)-based game agent for Tetris. To this end, the state is defined as the captured image of the Tetris board, and the reward is designed as a function of the lines cleared by the agent. The action is defined as left, right, rotate, drop, and a finite number of their combinations. In addition, PER (Prioritized Experience Replay) is employed to enhance learning performance. More than 500,000 episodes are used to train the network, and the game agent employs the trained network to make decisions. The performance of the developed algorithm is validated not only in simulation but also on a real Tetris robot agent built from a camera, two Arduinos, four servo motors, and 3D-printed artificial fingers.
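The PER component mentioned above replays transitions with probability proportional to their TD error. A minimal list-based sketch (production implementations use a sum-tree for O(log n) sampling, and the paper's exact priority exponent is an assumption here):

```python
import numpy as np

class PrioritizedReplay:
    """Minimal proportional Prioritized Experience Replay buffer."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priorities = [], []

    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:          # drop oldest when full
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        # small epsilon keeps zero-error transitions sampleable
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size, rng):
        p = np.array(self.priorities)
        p /= p.sum()                                 # normalize to a distribution
        idx = rng.choice(len(self.data), size=batch_size, p=p)
        return [self.data[i] for i in idx]
```

Transitions with large TD error (e.g. a surprising line clear) are replayed far more often than uninformative ones.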

Applying Deep Reinforcement Learning to Improve Throughput and Reduce Collision Rate in IEEE 802.11 Networks

  • Ke, Chih-Heng;Astuti, Lia
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.1
    • /
    • pp.334-349
    • /
    • 2022
  • The effectiveness of Wi-Fi networks is greatly influenced by the optimization of contention window (CW) parameters. Unfortunately, the conventional approach employed by IEEE 802.11 wireless networks does not scale well enough to sustain consistent performance as the number of stations increases, yet it remains the default channel-access mechanism for single-user 802.11 transmissions. Recently, there has been a spike in attempts to enhance network performance using a machine learning (ML) technique known as reinforcement learning (RL), whose advantage is that it interacts with the surrounding environment and makes decisions based on its own experience. Deep RL (DRL) uses deep neural networks (DNNs) to deal with more complex environments (such as continuous state or action spaces) and to obtain optimal rewards. As a result, we present a new CW control mechanism, termed the contention window threshold (CWThreshold), which uses the DRL principle to define the threshold value and learn optimal settings under various network scenarios. We demonstrate our proposed method, a smart exponential-threshold-linear backoff algorithm with a deep Q-learning network (SETL-DQN). The simulation results show that the proposed SETL-DQN algorithm can effectively improve throughput and reduce collision rates.
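The exponential-threshold-linear idea can be sketched as a CW update rule: double the window (802.11-style) below the DQN-learned threshold, then grow it linearly above. The constants and the linear step size here are illustrative assumptions, not the paper's exact parameters.

```python
def setl_next_cw(cw, cw_threshold, cw_min=15, cw_max=1023):
    """Exponential-threshold-linear backoff after a collision:
    exponential growth below cw_threshold, linear growth above it,
    capped at cw_max. cw_threshold is the value the DQN would learn."""
    if cw < cw_threshold:
        nxt = cw * 2 + 1          # exponential phase (standard 802.11 doubling)
    else:
        nxt = cw + cw_min + 1     # linear phase: grow by one minimal window
    return min(nxt, cw_max)
```

Switching to linear growth above the threshold avoids the overshoot of pure exponential backoff in dense networks while still reacting quickly to early collisions.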

A Research on Low-power Buffer Management Algorithm based on Deep Q-Learning approach for IoT Networks (IoT 네트워크에서의 심층 강화학습 기반 저전력 버퍼 관리 기법에 관한 연구)

  • Song, Taewon
    • Journal of Internet of Things and Convergence
    • /
    • v.8 no.4
    • /
    • pp.1-7
    • /
    • 2022
  • As the number of IoT devices increases, power management of the cluster head, which acts as a gateway between the cluster and the sink node in an IoT network, becomes crucial. Particularly when the cluster head is a mobile wireless terminal, the power consumption of the IoT network must be minimized over its lifetime. In addition, information transmission delay is one of the primary metrics for rapid information collection in an IoT network. In this paper, we propose a low-power buffer management algorithm that takes information transmission delay into account. By forwarding or skipping received packets using deep Q-learning, a deep reinforcement learning method, the proposed algorithm is able to reduce power consumption while keeping transmission delay low. The proposed approach is shown to reduce power consumption and improve delay relative to an existing buffer management technique under the slotted ALOHA protocol.
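The forward/skip trade-off above is naturally expressed as a reward that charges transmit power for forwarding and a penalty for packets left waiting. The weights, units, and exact form below are illustrative assumptions; the paper's reward design may differ.

```python
def buffer_reward(action, queued_delay_slots, tx_power=1.0, w_delay=0.1):
    """Per-slot reward sketch for the forward/skip buffer policy:
    forwarding costs transmit power, while every slot of queued delay
    accrues a penalty, so the agent must balance the two."""
    power_cost = tx_power if action == "forward" else 0.0
    return -(power_cost + w_delay * queued_delay_slots)
```

With this shape, skipping is cheap when the buffer is fresh but becomes costly as queued packets age, which is the behavior the abstract describes.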

A study on Deep Q-Networks based Auto-scaling in NFV Environment (NFV 환경에서의 Deep Q-Networks 기반 오토 스케일링 기술 연구)

  • Lee, Do-Young;Yoo, Jae-Hyoung;Hong, James Won-Ki
    • KNOM Review
    • /
    • v.23 no.2
    • /
    • pp.1-10
    • /
    • 2020
  • Network Function Virtualization (NFV) is a key technology of 5G networks with the advantage of enabling networks to be built and operated flexibly. However, NFV can complicate network management because it creates numerous virtual resources that must be managed. In NFV environments, service function chaining (SFC), composed of virtual network functions (VNFs), is widely used to apply a series of network functions to traffic. Therefore, the right amount of computing resources or instances must be dynamically allocated to an SFC to meet service requirements. In this paper, we propose Deep Q-Networks (DQN)-based auto-scaling to operate the appropriate number of VNF instances in an SFC. The proposed approach not only resizes the number of VNF instances in an SFC composed of a multi-tier architecture but also selects the tier to be scaled in response to the dynamic traffic forwarded through the SFC.
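Because the agent picks both a tier and a scaling direction, the DQN's discrete action space can be enumerated as below. The flat "no-op plus per-tier out/in" encoding is an assumption for illustration, not necessarily the paper's exact design.

```python
def scaling_actions(num_tiers):
    """Enumerate the DQN action space for multi-tier SFC auto-scaling:
    one global no-op, plus scale-out and scale-in for each tier."""
    actions = [("none", None)]
    for tier in range(num_tiers):
        actions.append(("scale_out", tier))
        actions.append(("scale_in", tier))
    return actions
```

The Q-network then outputs one value per entry of this list, and argmax selects both what to do and where to do it in a single decision.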

Application of Deep Recurrent Q Network with Dueling Architecture for Optimal Sepsis Treatment Policy

  • Do, Thanh-Cong;Yang, Hyung Jeong;Ho, Ngoc-Huynh
    • Smart Media Journal
    • /
    • v.10 no.2
    • /
    • pp.48-54
    • /
    • 2021
  • Sepsis is one of the leading causes of mortality globally, and it costs billions of dollars annually. However, treating septic patients is currently highly challenging, and more research is needed into a general treatment method for sepsis. Therefore, in this work, we propose a reinforcement learning method for learning optimal treatment strategies for septic patients. We model patient physiological time-series data as the input to a deep recurrent Q-network that learns reliable treatment policies. We evaluate our model using an off-policy evaluation method, and the experimental results indicate that it outperforms the physicians' policy, reducing patient mortality by up to 3.04%. Thus, our model can be used as a tool to reduce patient mortality by supporting clinicians in making dynamic decisions.
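The dueling architecture in the title splits the network into a state-value stream and an advantage stream, then recombines them. The aggregation below is the standard mean-subtracted form from the dueling-DQN literature; in the paper this head sits on top of a recurrent encoder over the patient time series.

```python
import numpy as np

def dueling_q(state_value, advantages):
    """Combine the dueling streams into action values:
    Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    Subtracting the mean advantage makes the decomposition identifiable."""
    advantages = np.asarray(advantages, dtype=float)
    return state_value + advantages - advantages.mean()
```

Separating "how good is this patient state" from "how much better is one treatment than another" is what makes the dueling head useful when many actions have similar values.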

Development of Semi-Active Control Algorithm Using Deep Q-Network (Deep Q-Network를 이용한 준능동 제어알고리즘 개발)

  • Kim, Hyun-Su;Kang, Joo-Won
    • Journal of Korean Association for Spatial Structures
    • /
    • v.21 no.1
    • /
    • pp.79-86
    • /
    • 2021
  • The control performance of a smart tuned mass damper (TMD) mainly depends on its control algorithm, and many control strategies have been proposed for semi-active control devices. Recently, machine learning has begun to be applied to the development of vibration control algorithms. In this study, reinforcement learning, one of the machine learning techniques, was employed to develop a semi-active control algorithm for a smart TMD composed of a magnetorheological (MR) damper. For this purpose, an 11-story building structure with a smart TMD was selected to construct the reinforcement learning environment, and a time-history analysis of the example structure subjected to earthquake excitation was conducted during the learning procedure. A Deep Q-Network (DQN), one of various reinforcement learning algorithms, was used as the learning agent, and the command voltage sent to the MR damper is determined by the action produced by the DQN. Parametric studies on the hyper-parameters of the DQN were performed through numerical simulation. After sufficient training iterations of the DQN model with appropriate hyper-parameters, a DQN model for controlling the seismic responses of the example structure with a smart TMD was developed; the developed model can effectively control the smart TMD to reduce the structure's seismic responses.
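Since a DQN outputs discrete actions but the MR damper takes a continuous command voltage, the action index must be mapped onto a voltage grid. The grid size and maximum voltage below are illustrative assumptions; the paper does not state them in this abstract.

```python
def action_to_voltage(action_index, num_actions=11, v_max=5.0):
    """Map a discrete DQN action index to an MR damper command voltage
    by evenly dividing [0, v_max] into num_actions levels."""
    if not 0 <= action_index < num_actions:
        raise ValueError("action index out of range")
    return v_max * action_index / (num_actions - 1)
```

At every control step the DQN picks an index from its Q-values and this mapping produces the voltage actually sent to the damper.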

An Intelligent Video Streaming Mechanism based on a Deep Q-Network for QoE Enhancement (QoE 향상을 위한 Deep Q-Network 기반의 지능형 비디오 스트리밍 메커니즘)

  • Kim, ISeul;Hong, Seongjun;Jung, Sungwook;Lim, Kyungshik
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.2
    • /
    • pp.188-198
    • /
    • 2018
  • With the recent development of high-speed wide-area wireless networks and the widespread use of high-performance wireless devices, the demand for seamless video streaming services in Long Term Evolution (LTE) network environments is ever increasing. To meet this demand and provide an enhanced Quality of Experience (QoE) to mobile users, Dynamic Adaptive Streaming over HTTP (DASH) has been actively studied to achieve QoE-enhanced video streaming services in dynamic network environments. However, the existing DASH algorithm for selecting the quality of requested video segments is procedural, which limits its ability to adapt to dynamic network situations. To overcome this limitation, this paper proposes a novel quality selection mechanism based on a Deep Q-Network (DQN) model, the DQN-based DASH ABR (DQN-ABR) mechanism. The DQN-ABR mechanism replaces the existing DASH ABR algorithm with an intelligent deep learning model that optimizes service quality for mobile users through reinforcement learning. The experimental analysis shows that the proposed solution outperforms existing approaches in adapting to dynamic wireless network situations and improving the QoE of end users.
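The replacement of the procedural ABR rule can be sketched as epsilon-greedy quality selection over the bitrate ladder. The two-feature state (throughput, buffer level) and the bitrate ladder here are illustrative assumptions about the state design, not the paper's exact formulation.

```python
import numpy as np

def select_bitrate(q_network, throughput_mbps, buffer_sec, bitrates, epsilon, rng):
    """Epsilon-greedy selection of the next DASH segment's quality level:
    explore a random bitrate with probability epsilon, otherwise pick the
    bitrate whose Q-value is highest for the current network state."""
    if rng.random() < epsilon:                 # exploration branch
        return int(rng.integers(len(bitrates)))
    state = np.array([throughput_mbps, buffer_sec])
    q = q_network(state)                       # one Q-value per bitrate level
    return int(np.argmax(q))
```

Unlike a fixed rule, the learned Q-values can trade off rebuffering risk against quality differently depending on how volatile the network has been.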

A Study about Application of Indoor Autonomous Driving for Obstacle Avoidance Using Atari Deep Q Network Model (Atari Deep Q Network Model을 이용한 장애물 회피에 특화된 실내 자율주행 적용에 관한 연구)

  • Baek, Ji-Hoon;Oh, Hyeon-Tack;Lee, Seung-Jin;Kim, Sang-Hoon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2018.10a
    • /
    • pp.715-718
    • /
    • 2018
  • Recently, multi-layer artificial neural network models have been proposed as solutions in numerous fields, and the DQN (Deep Q-Network) devised by Mnih in 2015 astonished many by demonstrating human-level performance on Atari games. In this paper, we apply the Atari DQN model to an indoor autonomous mobile robot so that the neural network model learns to follow the shortest path and take obstacle-avoiding actions; to this end, the robot's state information is packed into an 84*84 Mat and 15 actions are defined. We also study how the neural network model receives a realistic current state as input in a virtual world, learns the optimal policy there, and is then applied to the real world.
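The 84*84 input format above follows the original Atari DQN preprocessing: convert the observation to grayscale luminance and resize it to 84x84. A dependency-free nearest-neighbour sketch (real pipelines typically use OpenCV, and the paper packs robot state rather than raw camera frames into this grid):

```python
import numpy as np

def preprocess_observation(rgb_frame, out_size=84):
    """Atari-DQN-style preprocessing: luminance grayscale followed by
    nearest-neighbour resize to an out_size x out_size array."""
    gray = rgb_frame @ np.array([0.299, 0.587, 0.114])   # (H, W) luminance
    h, w = gray.shape
    rows = np.arange(out_size) * h // out_size           # nearest source rows
    cols = np.arange(out_size) * w // out_size           # nearest source cols
    return gray[rows][:, cols]
```

Reusing the Atari input shape lets the unmodified DQN convolutional stack consume the robot's state without architectural changes.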