• Title/Summary/Keyword: 심층강화학습 (Deep Reinforcement Learning)


Federated Deep Reinforcement Learning Based on Privacy Preserving for Industrial Internet of Things (산업용 사물 인터넷을 위한 프라이버시 보존 연합학습 기반 심층 강화학습 모델)

  • Chae-Rim Han;Sun-Jin Lee;Il-Gu Lee
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.6 / pp.1055-1065 / 2023
  • Recently, various studies using deep reinforcement learning (deep RL) technology have been conducted to solve complex problems using big data collected in the industrial Internet of Things. Deep RL uses reinforcement learning's trial-and-error algorithms and cumulative reward functions to generate and learn its own data and to quickly explore neural network structures and parameter decisions. However, prior studies have shown that as the size of the training data grows, memory usage and search time increase while accuracy decreases. In this study, model-agnostic learning for efficient federated deep RL was used to mitigate privacy invasion: it increased robustness by 55.9%, achieved 97.8% accuracy (an improvement of 5.5% over comparable optimization-based meta-learning models), and reduced delay time by 28.9% on average.
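The abstract's core idea, devices training locally and sharing only model updates rather than raw data, can be illustrated with a generic federated-averaging sketch. This is a minimal FedAvg-style aggregation under assumed names and toy weight vectors, not the paper's model-agnostic meta-learning variant:

```python
# Generic federated-averaging sketch (illustrative, not the paper's method):
# each IIoT device trains locally; only weight vectors, never raw sensor
# data, are sent to the aggregating server.
def federated_average(local_weights):
    """Element-wise average of a list of equally sized weight vectors."""
    n = len(local_weights)
    dim = len(local_weights[0])
    return [sum(w[i] for w in local_weights) / n for i in range(dim)]

# Three devices report locally trained weights; the server aggregates them.
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
global_weights = federated_average(clients)  # [3.0, 4.0]
```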

Research Trends on Deep Reinforcement Learning (심층 강화학습 기술 동향)

  • Jang, S.Y.;Yoon, H.J.;Park, N.S.;Yun, J.K.;Son, Y.S.
    • Electronics and Telecommunications Trends / v.34 no.4 / pp.1-14 / 2019
  • Recent trends in deep reinforcement learning (DRL) have revealed considerable improvements to DRL algorithms in terms of performance, learning stability, and computational efficiency. DRL has also vastly expanded the scenarios it covers (e.g., partial observability; cooperation, competition, coexistence, and communication among multiple agents; multi-task learning; decentralized intelligence). These features have cultivated multi-agent reinforcement learning research. DRL is also expanding its applications from robotics, natural language processing, and computer vision into a wide array of fields such as finance, healthcare, chemistry, and even art. In this report, we briefly summarize various DRL techniques and research directions.

Performance Analysis of Deep Reinforcement Learning for Crop Yield Prediction (작물 생산량 예측을 위한 심층강화학습 성능 분석)

  • Ohnmar Khin;Sung-Keun Lee
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.1 / pp.99-106 / 2023
  • Recently, many studies on crop yield prediction using deep learning technology have been conducted. These algorithms have difficulty constructing a linear map between input data sets and crop prediction results, and their implementation depends heavily on the rate of acquired attributes. Deep reinforcement learning can overcome these limitations. This paper analyzes the performance of DQN, Double DQN, and Dueling DQN for improving crop yield prediction. The DQN algorithm suffers from the overestimation problem, whereas Double DQN reduces overestimation and yields better results. The proposed models achieve this by reducing error and increasing prediction accuracy.
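The overestimation problem the abstract mentions arises because standard DQN uses the same value estimates both to select and to evaluate the next action. A minimal sketch of the two target computations, with toy Q-values rather than the paper's actual setup:

```python
# Sketch of the DQN vs. Double DQN target computation (toy values).
GAMMA = 0.9

def dqn_target(reward, next_q_target):
    # Standard DQN: the target network both selects and evaluates the next
    # action, which biases the target upward (overestimation).
    return reward + GAMMA * max(next_q_target)

def double_dqn_target(reward, next_q_online, next_q_target):
    # Double DQN: the online network selects the action, the target network
    # evaluates it, which reduces the overestimation bias.
    a = max(range(len(next_q_online)), key=lambda i: next_q_online[i])
    return reward + GAMMA * next_q_target[a]

# With noisy estimates the two targets can differ substantially:
online, target = [1.0, 2.0], [5.0, 0.5]
t1 = dqn_target(1.0, target)                   # 1 + 0.9 * max(5.0, 0.5)
t2 = double_dqn_target(1.0, online, target)    # 1 + 0.9 * 0.5
```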

Effective Utilization of Domain Knowledge for Relational Reinforcement Learning (관계형 강화 학습을 위한 도메인 지식의 효과적인 활용)

  • Kang, MinKyo;Kim, InCheol
    • KIPS Transactions on Software and Data Engineering / v.11 no.3 / pp.141-148 / 2022
  • Recently, reinforcement learning combined with deep neural network technology has achieved remarkable success in various fields, such as board games like Go and chess, computer games like Atari and StarCraft, and robot object-manipulation tasks. However, such deep reinforcement learning describes states, actions, and policies in vector representation. Therefore, existing deep reinforcement learning has limitations in the generality and interpretability of the learned policy, and it is difficult to effectively incorporate domain knowledge into policy learning. On the other hand, dNL-RRL, a new relational reinforcement learning framework proposed to solve these problems, uses vector representations for sensor input data and lower-level motion control as in existing deep reinforcement learning, but represents states, actions, and learned policies relationally with logic predicates and rules. In this paper, we present dNL-RRL-based policy learning for transportation mobile robots in a manufacturing environment. In particular, this study proposes an effective method to utilize the prior domain knowledge of human experts to improve the efficiency of relational reinforcement learning. Through various experiments, we demonstrate the performance improvement of relational reinforcement learning when domain knowledge is used as proposed in this paper.

Generation of ship's passage plan based on deep reinforcement learning (심층 강화학습 기반의 선박 항로계획 수립)

  • Hyeong-Tak Lee;Hyun Yang;Ik-Soon Cho
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2023.11a / pp.230-231 / 2023
  • This study proposes a deep reinforcement learning-based algorithm to automatically generate a ship's passage plan. First, Busan Port and Gwangyang Port were selected as target areas, and a container ship with a draft of 16 m was designated as the target vessel. The experimental results showed that the passage plan generated using deep reinforcement learning was more efficient than that generated by the Q-learning-based algorithm used in previous research. This algorithm presents a method to generate a ship's passage plan automatically and can contribute to improving maritime safety and efficiency.
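For contrast with the Q-learning baseline the abstract mentions, the classic tabular update rule such a baseline relies on can be sketched as below. The waypoint states, actions, and reward values are invented for illustration and are not the study's actual setup:

```python
# Classic tabular Q-learning update (illustrative waypoint states/rewards).
ALPHA, GAMMA = 0.1, 0.95

def q_update(Q, s, a, r, s_next):
    """One step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
    Q[s][a] += ALPHA * (r + GAMMA * best_next - Q[s][a])
    return Q[s][a]

Q = {"waypoint_0": {"turn_port": 0.0, "hold": 0.0},
     "waypoint_1": {"turn_port": 1.0, "hold": 0.5}}
q_update(Q, "waypoint_0", "hold", 0.2, "waypoint_1")
# new value: 0 + 0.1 * (0.2 + 0.95 * 1.0 - 0) = 0.115
```

Tabular methods like this scale poorly with state-space size, which is the usual motivation for replacing the table with a deep network.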


Comparison of Learning Performance of Deep Reinforcement Learning-Based Character Controllers According to State Representation (상태 표현 방식에 따른 심층 강화 학습 기반 캐릭터 제어기의 학습 성능 비교)

  • Son, Chae-Jun;Lee, Yun-Sang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2021.06a / pp.14-15 / 2021
  • Studies continue on solving physics-simulation-based character motion control with reinforcement learning, and, accordingly, on the factors that affect reinforcement learning when this problem is solved with it. We analyzed the effect of the state representation on reinforcement learning, which had not been examined before. First, we defined three coordinate frames (root attached frame, root aligned frame, and projected aligned frame) and analyzed how states expressed in each affect reinforcement learning. Second, we analyzed how the positions and angles of the character's joints, which represent its dynamic state, affect learning.


Comparison of learning performance of character controller based on deep reinforcement learning according to state representation (상태 표현 방식에 따른 심층 강화 학습 기반 캐릭터 제어기의 학습 성능 비교)

  • Sohn, Chaejun;Kwon, Taesoo;Lee, Yoonsang
    • Journal of the Korea Computer Graphics Society / v.27 no.5 / pp.55-61 / 2021
  • Character motion control based on physics simulation using reinforcement learning continues to be studied. To solve a problem with reinforcement learning, the network structure, hyperparameters, states, actions, and rewards must be set properly for the problem. In many studies, various combinations of states, actions, and rewards have been defined and successfully applied. Since there are many possible combinations, studies analyze the effect of each element to find the optimal combination that improves learning performance. In this work, we analyzed the effect of the state representation on reinforcement learning performance, which had not been examined so far. First, we defined three coordinate systems, the root attached frame, the root aligned frame, and the projected aligned frame, and analyzed how states expressed in each affect reinforcement learning. Second, we analyzed how various combinations of joint positions and angles used as the state affect learning performance.
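As a rough illustration of what a root-aligned state representation involves, the 2D sketch below expresses a joint position relative to the root's position and heading. The function name and the 2D simplification are assumptions for illustration, not the paper's actual frame definitions:

```python
import math

# Illustrative "root aligned" transform: express a joint position relative
# to the character's root position and heading (2D for brevity; the paper
# defines three such frames in full 3D).
def to_root_aligned(joint_xy, root_xy, root_heading):
    """Translate by -root, then rotate by -heading."""
    dx, dy = joint_xy[0] - root_xy[0], joint_xy[1] - root_xy[1]
    c, s = math.cos(-root_heading), math.sin(-root_heading)
    return (c * dx - s * dy, s * dx + c * dy)

# A joint 1 m ahead of a root facing +y maps to the local "forward" axis,
# so the state looks the same regardless of the character's world heading.
local = to_root_aligned((0.0, 1.0), (0.0, 0.0), math.pi / 2)
```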

Luxo character control using deep reinforcement learning (심층 강화 학습을 이용한 Luxo 캐릭터의 제어)

  • Lee, Jeongmin;Lee, Yoonsang
    • Journal of the Korea Computer Graphics Society / v.26 no.4 / pp.1-8 / 2020
  • Motion synthesis using physics-based controllers can generate character animation that interacts naturally with the given environment and other characters. Recently, various methods using deep neural networks have improved the quality of motions generated by physics-based controllers. In this paper, we present a control policy learned by deep reinforcement learning (DRL) that enables Luxo, the mascot character of Pixar Animation Studios, to run towards a random goal location while imitating a reference motion and maintaining its balance. Instead of directly training our DRL network to make Luxo reach a goal location, we use a reference motion generated to preserve the jumping style of the Luxo animation. The reference motion is generated by linearly interpolating predetermined poses, which are defined by each joint angle of the Luxo character. By applying our method, we obtained a better Luxo policy than one trained without any reference motion.
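The reference-motion construction the abstract describes, linear interpolation between predetermined key poses, can be sketched as follows. The joint-angle values are illustrative and do not correspond to the actual Luxo rig:

```python
# Linear interpolation between two joint-angle key poses, the building
# block of the reference motion described in the abstract.
def lerp_pose(pose_a, pose_b, t):
    """Blend two joint-angle vectors; t=0 gives pose_a, t=1 gives pose_b."""
    return [(1 - t) * a + t * b for a, b in zip(pose_a, pose_b)]

crouch = [0.0, 1.2, -0.8]   # illustrative joint angles (radians)
extend = [0.4, 0.0,  0.6]
mid = lerp_pose(crouch, extend, 0.5)  # approximately [0.2, 0.6, -0.1]
```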

A Comparative Analysis of Reinforcement Learning Activation Functions for Parking of Autonomous Vehicles (자율주행 자동차의 주차를 위한 강화학습 활성화 함수 비교 분석)

  • Lee, Dongcheul
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.6 / pp.75-81 / 2022
  • Autonomous vehicles, which can dramatically alleviate the lack of parking spaces, are making great progress through deep reinforcement learning. Various activation functions have been proposed for deep reinforcement learning, but their performance varies widely depending on the application environment, so finding the optimal activation function for a given environment is important for effective learning. This paper analyzes 12 activation functions commonly used in reinforcement learning to evaluate which is most effective when autonomous vehicles learn parking via deep reinforcement learning. To this end, a performance evaluation environment was established, and the average reward of each activation function was compared along with the success rate, episode length, and vehicle speed. As a result, GELU yielded the highest average reward and ELU the lowest; the reward difference between the two was 35.2%.
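For reference, the standard formulas for the best- and worst-performing activations in this study can be written out directly. These are the textbook definitions; the 35.2% reward gap is the paper's empirical result, not something the formulas themselves imply:

```python
import math

def gelu(x):
    # Gaussian Error Linear Unit: x * Phi(x), Phi the standard normal CDF.
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def elu(x, alpha=1.0):
    # Exponential Linear Unit: identity for x >= 0, saturating for x < 0.
    return x if x >= 0 else alpha * (math.exp(x) - 1.0)

# Both pass positive inputs nearly unchanged but shape negatives differently.
print(round(gelu(1.0), 4), round(elu(1.0), 4))    # 0.8413 1.0
print(round(gelu(-1.0), 4), round(elu(-1.0), 4))  # -0.1587 -0.6321
```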

Power Trading System through the Prediction of Demand and Supply in Distributed Power System Based on Deep Reinforcement Learning (심층강화학습 기반 분산형 전력 시스템에서의 수요와 공급 예측을 통한 전력 거래시스템)

  • Lee, Seongwoo;Seon, Joonho;Kim, Soo-Hyun;Kim, Jin-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.6 / pp.163-171 / 2021
  • In this paper, the energy transaction system was optimized by applying a resource allocation algorithm and deep reinforcement learning in a distributed power system, with the power demand and supply environment predicted by deep reinforcement learning. Amid the paradigm shift from conventional centralized to distributed power systems, we propose a system that pursues common interests in power trading and increases the efficiency of long-term power transactions. For a realistic energy simulation model and environment, we construct the energy market by learning weather and monthly patterns with added Gaussian noise. The simulation results confirm that the proposed power trading systems cooperate with each other, seek common interests, and increase profits in prolonged energy transactions.
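A sketch of the kind of synthetic market signal the abstract describes, a periodic monthly pattern plus Gaussian noise. The sinusoidal shape, scale parameters, and seeding here are assumptions for illustration, not the paper's actual data:

```python
import math
import random

# Illustrative synthetic demand signal: a seasonal monthly pattern with
# additive Gaussian noise (parameters are assumed, not from the paper).
def synthetic_demand(month, base=100.0, amplitude=20.0, noise_std=5.0, rng=None):
    """Seasonal sinusoid over 12 months plus Gaussian noise."""
    rng = rng or random.Random(0)
    seasonal = base + amplitude * math.sin(2 * math.pi * month / 12)
    return seasonal + rng.gauss(0.0, noise_std)

# One seeded year of monthly demand values for a reproducible simulation.
year = [synthetic_demand(m, rng=random.Random(m)) for m in range(12)]
```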