• Title/Abstract/Keywords: Deep Reinforcement Learning

Search results: 199 items (processing time: 0.022 s)

다중 교차로에서 협력적 교통신호제어에 대한 연구 (A Study on Cooperative Traffic Signal Control at multi-intersection)

  • 김대호;정옥란
    • 전기전자학회논문지 / Vol. 23, No. 4 / pp. 1381-1386 / 2019
  • As urban traffic congestion grows more severe, intelligent traffic signal control is being actively studied. Reinforcement learning is the algorithm most widely used for traffic signal control, and deep reinforcement learning algorithms have recently attracted attention. As deep reinforcement learning has shown strong performance in various fields, extended versions of it have appeared in rapid succession. However, most existing traffic signal control studies were conducted in single-intersection environments, and relieving congestion at a single intersection cannot account for traffic conditions across the city as a whole. This paper proposes cooperative traffic signal control in a multi-intersection environment. The signal control algorithm combines several extensions of deep reinforcement learning, and the traffic conditions of adjacent intersections are taken into account to control the multiple intersections efficiently. In the experiments, the proposed algorithm is compared with existing deep reinforcement learning algorithms, and results for models with and without the cooperative method demonstrate its superior performance.
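The cooperative idea above — augmenting each intersection's observation with its neighbors' traffic state — can be sketched as follows. The state fields (lane queues, signal phase) and the two-intersection topology are illustrative assumptions, not the paper's actual design:

```python
# Sketch: building a neighbor-aware observation for one intersection.
# Field names and topology are illustrative assumptions, not the paper's design.

def local_state(queues, phase):
    """Observation of a single intersection: lane queue lengths + current phase."""
    return list(queues) + [phase]

def cooperative_state(intersection, neighbors, obs):
    """Concatenate an intersection's own state with its neighbors' states,
    so the agent can react to congestion spilling in from adjacent roads."""
    state = list(obs[intersection])
    for n in neighbors[intersection]:
        state += obs[n]
    return state

# Toy 2-intersection corridor: each observes 4 lane queues and its phase.
obs = {
    "A": local_state([3, 0, 5, 1], phase=0),
    "B": local_state([7, 2, 0, 0], phase=1),
}
neighbors = {"A": ["B"], "B": ["A"]}

s_A = cooperative_state("A", neighbors, obs)
print(len(s_A))  # 10 = own 5 features + neighbor's 5
```

The agent for intersection A then feeds `s_A` to its Q-network, so the learned policy can respond to queues building up at B before they reach A.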

시뮬레이션 환경에서의 DQN을 이용한 강화 학습 기반의 무인항공기 경로 계획 (Path Planning of Unmanned Aerial Vehicle based Reinforcement Learning using Deep Q Network under Simulated Environment)

  • 이근형;김신덕
    • 반도체디스플레이기술학회지 / Vol. 16, No. 3 / pp. 127-130 / 2017
  • In this research, we present a path planning method for autonomous flight of unmanned aerial vehicles (UAVs) through reinforcement learning in a simulated environment. We design a simulator for reinforcement learning of UAVs and implement an interface that makes the simulator compatible with a Deep Q-Network (DQN). We perform reinforcement learning through the simulator and the DQN, using the Q-learning algorithm, a kind of reinforcement learning algorithm. Through experiments, we verify the performance of the DQN-simulator combination. Finally, we evaluate the learning results and suggest a path planning strategy using reinforcement learning.

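The Q-learning update that a DQN approximates with a neural network can be shown in tabular form on a toy 1-D corridor; this toy environment is an illustrative stand-in for the UAV simulator, not the paper's setup:

```python
import random

# Tabular Q-learning on a toy 1-D corridor (states 0..4, goal at 4).
# DQN replaces this table with a neural network; the update rule is the same:
#   Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
random.seed(0)
N, GOAL = 5, 4
ACTIONS = [-1, +1]                        # move left / right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N - 1)    # clamp to corridor
        r = 1.0 if s2 == GOAL else -0.01  # small step cost, reward at goal
        future = 0.0 if s2 == GOAL else gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + future - Q[(s, a)])
        s = s2

# The greedy policy should point right (+1) in every non-goal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)  # {0: 1, 1: 1, 2: 1, 3: 1}
```

In the paper's setting, the table lookup `Q[(s, a)]` becomes a forward pass of the DQN over the simulator's state, which is what makes large or continuous state spaces tractable.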

An autonomous radiation source detection policy based on deep reinforcement learning with generalized ability in unknown environments

  • Hao Hu;Jiayue Wang;Ai Chen;Yang Liu
    • Nuclear Engineering and Technology / Vol. 55, No. 1 / pp. 285-294 / 2023
  • Autonomous radiation source detection has long been studied for radiation emergencies. Compared with conventional data-driven or path-planning methods, deep reinforcement learning shows a strong capacity for source detection but still lacks the ability to generalize to the geometry of unknown environments. In this work, the detection task is decomposed into two subtasks: exploration and localization. A hierarchical control policy (HC) is proposed to perform the subtasks at different stages: the low-level controller learns to execute the individual subtasks by deep reinforcement learning, and the high-level controller determines which subtask should be executed at the current stage. In experimental tests under different geometrical conditions, HC achieves the best performance among the autonomous decision policies, demonstrating the robustness and generalization ability of the hierarchy.
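The high-level/low-level split described above can be sketched as a dispatcher: the high-level controller picks a subtask and routes the observation to the matching low-level policy. The switching rule and the policies below are illustrative assumptions; in the paper both levels are learned:

```python
# Sketch of a hierarchical control policy: a high-level controller chooses
# between the two subtasks (exploration vs. localization), and a low-level
# policy executes the chosen subtask. The threshold rule and actions here
# are illustrative stand-ins for the learned controllers in the paper.

def explore_policy(obs):
    return "sweep"          # cover the area while the signal is weak

def localize_policy(obs):
    return "move_to_peak"   # climb the count-rate gradient toward the source

LOW_LEVEL = {"explore": explore_policy, "localize": localize_policy}

def high_level(obs, threshold=50.0):
    """Choose the subtask: explore until the count rate is informative."""
    return "localize" if obs["count_rate"] >= threshold else "explore"

def hierarchical_step(obs):
    subtask = high_level(obs)
    return subtask, LOW_LEVEL[subtask](obs)

print(hierarchical_step({"count_rate": 12.0}))   # ('explore', 'sweep')
print(hierarchical_step({"count_rate": 180.0}))  # ('localize', 'move_to_peak')
```

The design choice this illustrates is that each low-level policy only has to master one narrow subtask, which is what gives the hierarchy its robustness across geometries.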

표정 피드백을 이용한 딥강화학습 기반 협력로봇 개발 (Deep Reinforcement Learning-Based Cooperative Robot Using Facial Feedback)

  • 전해인;강정훈;강보영
    • 로봇학회논문지 / Vol. 17, No. 3 / pp. 264-272 / 2022
  • Human-robot cooperative tasks are increasingly required in daily life with the development of robotics and artificial intelligence. Interactive reinforcement learning suggests that robots learn tasks by receiving feedback from an experienced human trainer during training. However, most previous studies on interactive reinforcement learning have required an extra feedback input device, such as a mouse or keyboard, in addition to the robot itself, and the scenarios in which a robot can interactively learn a task with a human have been limited to virtual environments. To address these limitations, this paper studies training strategies for a robot that learns table-balancing tasks interactively using deep reinforcement learning with human facial expression feedback. In the proposed system, the robot learns a cooperative table-balancing task using a Deep Q-Network (DQN), a deep reinforcement learning technique, with human facial emotion expression feedback. In experiments, the proposed system achieved an optimal policy convergence rate of up to 83.3% in training and a task success rate of up to 91.6% in testing, showing improved performance compared to the model without facial expression feedback.
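One common way to fold human feedback into a DQN's learning signal is to add it to the environment reward. A minimal sketch, assuming a scalar valence score from a facial-expression classifier; the weighting and score range are hypothetical, not taken from the paper:

```python
def shaped_reward(env_reward, facial_valence, weight=0.5):
    """Combine the task reward with human facial feedback.
    facial_valence in [-1, 1]: negative for frowns, positive for smiles.
    The weight trades off trainer feedback against the task reward
    (both value and range are illustrative assumptions)."""
    return env_reward + weight * facial_valence

# A reward-neutral step becomes informative once the trainer reacts:
print(shaped_reward(0.0, +1.0))  # 0.5  (smile: encourage the last action)
print(shaped_reward(0.0, -1.0))  # -0.5 (frown: discourage it)
```

The shaped value then replaces the raw reward in the DQN's temporal-difference target, so facial feedback influences learning without any extra input device.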

스마트 빌딩 시스템을 위한 심층 강화학습 기반 양방향 전력거래 협상 기법 (Bi-directional Electricity Negotiation Scheme based on Deep Reinforcement Learning Algorithm in Smart Building Systems)

  • 이동구;이지영;경찬욱;김진영
    • 한국인터넷방송통신학회논문지 / Vol. 21, No. 5 / pp. 215-219 / 2021
  • This paper proposes an electricity trading scheme that applies deep reinforcement learning to a bi-directional negotiation process in which a smart building system and the power grid each propose and adjust their desired trading prices. The deep Q-network algorithm, a deep reinforcement learning technique, is applied so that the smart building and the grid adjust their desired trading prices. Experiments confirm that the proposed deep reinforcement learning-based bi-directional negotiation algorithm reaches a price agreement after an average of 43.78 negotiation rounds during training. The experiments also show how the smart building and the grid adjust their desired prices under the negotiation scenario defined in this study.
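The negotiation loop described above can be sketched as follows: the building raises its bid and the grid lowers its ask each round until the offers cross. Fixed concession steps stand in for the DQN-chosen price adjustments; all prices and step sizes are illustrative:

```python
# Sketch of a bi-directional price negotiation. In the paper, a DQN chooses
# each side's concession; here fixed steps stand in for the learned policy.

def negotiate(bid, ask, bid_step, ask_step, max_rounds=200):
    """Return (agreed_price, rounds) once bid and ask cross,
    or (None, max_rounds) if no agreement is reached."""
    for rounds in range(1, max_rounds + 1):
        if bid >= ask:                  # offers crossed: settle mid-way
            return (bid + ask) / 2, rounds
        bid += bid_step                 # building concedes upward
        ask -= ask_step                 # grid concedes downward
    return None, max_rounds

price, rounds = negotiate(bid=80.0, ask=120.0, bid_step=1.0, ask_step=1.0)
print(price, rounds)  # 100.0 21
```

Replacing the fixed `bid_step`/`ask_step` with actions from each agent's Q-network is what lets the learned policy trade off a better price against a longer negotiation.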

공 던지기 로봇의 정책 예측 심층 강화학습 (Deep Reinforcement Learning of Ball Throwing Robot's Policy Prediction)

  • 강영균;이철수
    • 로봇학회논문지 / Vol. 15, No. 4 / pp. 398-403 / 2020
  • A robot's throwing control is difficult to calculate accurately because of air resistance, rotational inertia, and other factors. This complexity can be addressed with machine learning. Reinforcement learning with a hand-designed reward function limits a robot's ability to adapt to new environments, so this paper applies deep reinforcement learning using a neural network without a reward function. Each throw is evaluated as a success or failure. The network learns by taking the target position and control policy as input and yielding the evaluation as output. The task is then carried out by predicting the success probability for each target location and candidate control policy and selecting the policy with the highest probability. Repeating this process improves performance as data accumulate, and the model can even predict outcomes for tasks not previously attempted, making it a universally applicable learning model for new environments. Across 520 experiments, the learning model achieved a 75% success rate.
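The "predict success probability, pick the best policy" loop above can be sketched as a search over candidate policies. A hand-written probability model stands in for the paper's learned network; the throwing model and parameters are illustrative assumptions:

```python
import math

def predict_success(target_dist, throw_speed):
    """Hypothetical stand-in for the learned evaluator: success probability
    peaks when the throw speed matches the target distance."""
    return math.exp(-((throw_speed - 2.0 * target_dist) ** 2))

def best_policy(target_dist, candidates):
    """Search candidate control policies, keep the highest predicted probability."""
    return max(candidates, key=lambda v: predict_success(target_dist, v))

speeds = [v / 10 for v in range(10, 61)]  # candidate throw speeds 1.0..6.0
print(best_policy(1.5, speeds))  # 3.0 (the speed matching the 1.5 m target)
```

In the paper, `predict_success` is the trained network, so new target positions can be handled simply by re-running the same search without redefining a reward function.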

심층강화학습 기반의 경기순환 주기별 효율적 자산 배분 모델 연구 (A Study on DRL-based Efficient Asset Allocation Model for Economic Cycle-based Portfolio Optimization)

  • 정낙현;오태연;김강희
    • 품질경영학회지 / Vol. 51, No. 4 / pp. 573-588 / 2023
  • Purpose: This study presents an approach that utilizes deep reinforcement learning to construct optimal portfolios of stocks and other assets based on the business cycle. The objective is to develop investment strategies that adapt to the varying returns of assets across the business cycle. Methods: A diverse set of time-series data, including stocks, is collected and used to train a deep reinforcement learning model. The proposed approach optimizes asset allocation based on the business cycle, gathering data for the different phases (prosperity, recession, depression, and recovery) and constructing a portfolio optimized for each phase. Results: Experimental results confirm the effectiveness of the proposed deep reinforcement learning-based approach in constructing optimal portfolios tailored to the business cycle, demonstrating the utility of optimizing the investment strategy for each phase. Conclusion: This paper contributes to the construction of business cycle-based optimal portfolios using deep reinforcement learning, providing investors with strategies that seek stability and profitability simultaneously and adapt to business-cycle volatility.

심층 강화학습 기반 자율운항 CTV의 해상풍력발전단지 내 장애물 회피 시스템 (Obstacle Avoidance System for Autonomous CTVs in Offshore Wind Farms Based on Deep Reinforcement Learning)

  • 김진균;전해명;노재규
    • 대한임베디드공학회논문지 / Vol. 19, No. 3 / pp. 131-139 / 2024
  • Crew Transfer Vessels (CTVs) are primarily used for the maintenance of offshore wind farms. Although they are manually operated by professional captains and crews, collisions with other ships and marine structures still occur, so introducing autonomous navigation systems to CTVs is necessary. This study investigates the obstacle avoidance component of an autonomous navigation system for CTVs. In particular, obstacle avoidance simulations for CTVs were carried out using deep reinforcement learning, taking into account the currents and wind loads in offshore wind farms. For this purpose, three-degree-of-freedom ship maneuvering models of CTVs considering those currents and wind loads were developed, and a simulation environment of an offshore wind farm was implemented to train and test the deep reinforcement learning agent. Obstacle avoidance maneuvers were learned with MATD3, and the model trained over 10,000 episodes successfully avoided both static and moving obstacles, confirming that the proposed methods can facilitate obstacle avoidance for autonomous CTVs within offshore wind farms.

현실 세계에서의 로봇 파지 작업을 위한 정책/가치 심층 강화학습 플랫폼 개발 (Development of an Actor-Critic Deep Reinforcement Learning Platform for Robotic Grasping in Real World)

  • 김태원;박예성;김종복;박영빈;서일홍
    • 로봇학회논문지 / Vol. 15, No. 2 / pp. 197-204 / 2020
  • In this paper, we present a learning platform for robotic grasping in the real world, in which actor-critic deep reinforcement learning is employed to learn the grasping skill directly from raw image pixels and rarely observed rewards. This is challenging because existing deep reinforcement learning algorithms require an extensive amount of training data or massive computational cost, making them unaffordable in real-world settings. To address these problems, the proposed learning platform consists of two training phases: a learning phase in a simulator and subsequent learning in the real world. The main processing blocks of the platform are extraction of a latent vector based on state representation learning and disentanglement of a raw image, generation of an adapted synthetic image using generative adversarial networks, and object detection and arm segmentation for the disentanglement. We demonstrate the effectiveness of this approach in a real environment.

Applying Deep Reinforcement Learning to Improve Throughput and Reduce Collision Rate in IEEE 802.11 Networks

  • Ke, Chih-Heng;Astuti, Lia
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 1 / pp. 334-349 / 2022
  • The effectiveness of Wi-Fi networks is greatly influenced by the optimization of contention window (CW) parameters. Unfortunately, the conventional approach employed by IEEE 802.11 wireless networks is not scalable enough to sustain consistent performance as the number of stations increases, yet it remains the default for channel access in single-user 802.11 transmissions. Recently, there has been a spike in attempts to enhance network performance using a machine learning (ML) technique known as reinforcement learning (RL), whose advantage is that it interacts with the surrounding environment and makes decisions based on its own experience. Deep RL (DRL) uses deep neural networks (DNNs) to deal with more complex environments (such as continuous state or action spaces) and to obtain optimal rewards. We therefore present a new CW control mechanism, termed contention window threshold (CWThreshold), which uses the DRL principle to define the threshold value and learn optimal settings under various network scenarios. We demonstrate our proposed method, a smart exponential-threshold-linear backoff algorithm with a deep Q-learning network (SETL-DQN). Simulation results show that the proposed SETL-DQN algorithm effectively improves throughput and reduces collision rates.
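The exponential-threshold-linear idea can be sketched as: below a threshold the CW roughly doubles on each collision (standard binary exponential backoff), and above it the CW grows by a fixed increment. In SETL-DQN the threshold is chosen by the DQN from observed network conditions; here it is a hand-fixed constant, and the linear step size is an illustrative assumption:

```python
# Sketch of an exponential-threshold-linear (ETL) CW update after a collision.
# CW_MIN/CW_MAX follow common 802.11 defaults; THRESHOLD and LINEAR_STEP are
# illustrative (in SETL-DQN the threshold is selected by a deep Q-network).

CW_MIN, CW_MAX = 15, 1023
THRESHOLD = 255
LINEAR_STEP = 64

def next_cw(cw):
    """Contention-window update after a collision."""
    if cw < THRESHOLD:
        return min(2 * cw + 1, THRESHOLD)  # exponential region
    return min(cw + LINEAR_STEP, CW_MAX)   # linear region

cw, trace = CW_MIN, []
for _ in range(8):                          # eight consecutive collisions
    cw = next_cw(cw)
    trace.append(cw)
print(trace)  # [31, 63, 127, 255, 319, 383, 447, 511]
```

The linear region is what keeps the CW from exploding in dense networks: beyond the learned threshold, stations back off more gradually, trading a little extra delay for fewer collisions.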