• Title/Summary/Keyword: DQN

Search results: 68

Development of Interior Self-driving Service Robot Using Embedded Board Based on Reinforcement Learning (강화학습 기반 임베디드 보드를 활용한 실내자율 주행 서비스 로봇 개발)

  • Oh, Hyeon-Tack;Baek, Ji-Hoon;Lee, Seung-Jin;Kim, Sang-Hoon
    • Proceedings of the Korea Information Processing Society Conference / 2018.10a / pp.537-540 / 2018
  • This paper presents an indoor autonomous-driving service robot. A map is built with ROS (Robot Operating System) on a Jetson TX2 embedded board, and movement commands toward the destination (target linear velocity and target angular velocity) are generated using SLAM and a DQN (Deep Q-Network). These commands are sent to a Cortex-M3-based MCU (Micro Controller Unit), which performs PID control using the current linear velocity measured by the encoder motor and the current angular velocity measured by the gyro sensor.
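The control loop this entry describes (a DQN-issued target velocity tracked by PID on the MCU) can be sketched as follows; the gains, names, and single-axis scope are illustrative assumptions, not the paper's implementation.

```python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, measured, dt):
        # Standard PID on the tracking error between commanded and measured velocity.
        error = target - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One 50 Hz control tick: track a DQN-issued target linear velocity of 0.5 m/s
# given an encoder-measured 0.3 m/s. Gains are placeholder values.
pid = PID(kp=1.2, ki=0.1, kd=0.05)
command = pid.step(target=0.5, measured=0.3, dt=0.02)
```

In the paper's setup the same loop would run twice per tick, once for linear velocity (encoder feedback) and once for angular velocity (gyro feedback).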

Deep Q-Learning Network Model for Container Ship Master Stowage Plan (컨테이너 선박 마스터 적하계획을 위한 심층강화학습 모형)

  • Shin, Jae-Young;Ryu, Hyun-Seung
    • Journal of the Korean Society of Industry Convergence / v.24 no.1 / pp.19-29 / 2021
  • In the port logistics system, container stowage planning is an important issue for cost-effective efficiency improvements. At present, planners mainly carry out stowage planning manually or semi-automatically. However, as the trend toward ultra-large container ships continues, it is difficult to compute an efficient stowage plan with manpower alone. With the recent rapid development of artificial intelligence technologies, many studies have applied reinforcement learning to optimization problems. Accordingly, in this paper, we develop and present a Deep Q-Learning Network model for the master stowage planning of container ships.
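The deep Q-learning models cited in this and several following entries all build on the same temporal-difference update; a minimal tabular sketch (the state and action names here are invented for illustration, not from the paper) is:

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One TD step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next].values()) if Q.get(s_next) else 0.0
    target = r + gamma * best_next
    Q.setdefault(s, {}).setdefault(a, 0.0)
    Q[s][a] += alpha * (target - Q[s][a])
    return Q[s][a]

# Hypothetical stowage-style transition: reward 1.0 for a feasible placement.
Q = {}
td_value = q_update(Q, s="bay3_empty", a="place_slot12", r=1.0, s_next="bay3_one_filled")
```

A DQN replaces the table `Q` with a neural network trained on the same target, plus experience replay and a periodically frozen target network.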

A Reinforcement Learning-based Approach for Multi-user Task Offloading and Resource Allocation in MEC

  • Xiang, Tiange;Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference / 2022.05a / pp.45-47 / 2022
  • Mobile edge computing (MEC), which enables mobile terminals to offload computational tasks to a server located at the network edge, is considered an effective way to relieve the heavy computational burden and achieve efficient computation offloading. In this paper, we study a multi-user MEC system in which multiple user equipments (UEs) can offload computation to the MEC server via a wireless channel. To solve the resource allocation and task offloading problem, we take the total cost of latency and energy consumption of all UEs as the optimization objective. To minimize the total cost of the considered MEC system, we propose a deep reinforcement learning (DRL)-based method, specifically an Asynchronous Advantage Actor-Critic (A3C)-based scheme. Simulation results comparing A3C with DQN and Double Q-Learning show that this scheme significantly reduces the total cost compared to the other resource allocation schemes.
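The optimization objective described above, a summed weighted latency-plus-energy cost over all UEs, can be sketched as a simple function; the weights and field names are assumptions for illustration:

```python
def total_cost(ues, w_latency=0.5, w_energy=0.5):
    # Weighted sum of per-UE latency and energy; a DRL agent's reward would
    # typically be the negative of this quantity.
    return sum(w_latency * ue["latency"] + w_energy * ue["energy"] for ue in ues)

cost = total_cost([
    {"latency": 0.2, "energy": 1.0},   # UE 1: e.g. an offloaded task
    {"latency": 0.1, "energy": 0.5},   # UE 2: e.g. local execution
])
```

The agent's action (offloading decision plus resource allocation) changes each UE's latency/energy pair, and the scheme that drives this total down wins the comparison.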

Edge Caching Based on Reinforcement Learning Considering Edge Coverage Overlap in Vehicle Environment (차량 환경에서 엣지 커버리지 오버랩을 고려한 강화학습 기반의 엣지 캐싱)

  • Choi, Yoonjeong;Lim, Yujin
    • Proceedings of the Korea Information Processing Society Conference / 2022.05a / pp.110-113 / 2022
  • Vehicles connected to surrounding objects through the Internet request a variety of content to provide convenience to users, but fetching it from the cloud takes a relatively long time, so techniques that cache content physically close to the vehicles have emerged. This paper studies a method of caching at road side units (RSUs) using maximum distance separable (MDS) coding in an urban environment with densely deployed infrastructure. Deep Q-learning (DQN) is used to raise the RSU hit ratio for vehicles' content requests, taking the overlapping service coverage areas of RSUs into account. Experimental results show that the proposed method achieves a higher hit ratio than the comparison algorithms.

Deep Reinforcement Learning-Based Edge Caching in Heterogeneous Networks

  • Choi, Yoonjeong;Lim, Yujin
    • Journal of Information Processing Systems / v.18 no.6 / pp.803-812 / 2022
  • With the increasing number of mobile device users worldwide, utilizing mobile edge computing (MEC) devices close to users for content caching can reduce transmission latency compared with receiving content from a server or cloud. However, because MEC has limited storage capacity, it is necessary to determine the content types and sizes to be cached. In this study, we investigate a caching strategy that increases the hit ratio from small base stations (SBSs) for mobile users in a heterogeneous network consisting of one macro base station (MBS) and multiple SBSs. If there are several SBSs that users can access, the hit ratio can be improved by reducing duplicate content and increasing the diversity of content in SBSs. We propose a Deep Q-Network (DQN)-based caching strategy that considers time-varying content popularity and content redundancy in multiple SBSs. Content is stored in the SBS in a divided form using maximum distance separable (MDS) codes to enhance the diversity of the content. Experiments in various environments show that the proposed caching strategy outperforms the other methods in terms of hit ratio.
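The MDS-coded hit condition this entry relies on can be sketched as follows: content encoded into n fragments is recoverable from any k of them, so a request hits if the stations covering a user jointly hold at least k distinct fragments. The data layout is an assumption, not the paper's.

```python
def is_hit(covering_caches, content_id, k):
    # A request hits if the base stations covering the user jointly hold
    # at least k distinct MDS fragments of the requested content.
    fragments = set()
    for cache in covering_caches:
        fragments |= cache.get(content_id, set())
    return len(fragments) >= k

# Two overlapping stations each hold two fragments of content "c" encoded
# with k = 3: together they cover fragments {0, 1, 2}, so the request hits.
hit = is_hit([{"c": {0, 1}}, {"c": {1, 2}}], "c", k=3)
```

This is why reducing duplicate fragments across overlapping stations raises the hit ratio: the same storage budget covers more distinct fragments.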

Performance Comparison of Deep Reinforcement Learning based Computation Offloading in MEC (MEC 환경에서 심층 강화학습을 이용한 오프로딩 기법의 성능비교)

  • Moon, Sungwon;Lim, Yujin
    • Proceedings of the Korea Information Processing Society Conference / 2022.05a / pp.52-55 / 2022
  • As smart mobile devices increase exponentially in the 5G era, multi-access edge computing (MEC) has emerged as a promising technology. To provide computation-intensive services within a low latency bound, offloading to MEC servers is being studied; in particular, offloading in MEC system environments where the task arrival rate and wireless channel state are stochastic has attracted attention. In this paper, we propose a deep reinforcement learning-based offloading scheme that allocates computing resources for local execution and transmission power for offloading, in order to minimize vehicle power consumption and latency. We compare and analyze the performance of a Deep Deterministic Policy Gradient (DDPG)-based scheme and a Deep Q-Network (DQN)-based scheme in terms of vehicle power consumption and queuing delay.

The Performance Comparative Analysis System for Stock Price Forecasting on AI Environment (AI 기반환경의 주식 시세예측을 위한 성능 비교분석 시스템)

  • Lee, Cheol-Hyeon;Oh, Ryumduck
    • Proceedings of the Korean Society of Computer Information Conference / 2022.01a / pp.127-128 / 2022
  • Recently, many securities firms and other financial companies have proposed and used robo-advisors, investment-advisory artificial intelligence that helps investors trade stocks. In this paper, we comparatively analyze the performance of stock price forecasting algorithms used by securities firms and similar institutions. We implemented a system that analyzes and compares the performance of four AI algorithms suited to stock time-series prediction: LSTM, GRU, Deep Q-Network reinforcement learning, and XGBoost. Using the implemented system, we compare which algorithm performs best for predicting stock prices and evaluate how the system's analysis results affect stock prediction.
Optimizing Train Dwell Times during Commuter Hours using Reinforcement Learning (강화 학습을 이용한 출퇴근 시간대 열차 정차 시간 최적화)

  • SuJeong Choi;Yujin Lim
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.530-533 / 2023
  • Public transportation is essential in modern society; during commuter hours in particular, demand is high for subways, which are less affected by road traffic conditions. However, due to limited physical resources, increased in-train congestion and train delays are unavoidable. As one way to address this, this paper proposes a train dwell-time optimization technique using the reinforcement learning-based DQN algorithm. We conducted experiments comparing performance with and without optimization that considers both train dwell time and the number of boarding passengers.
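A reward that trades off the two factors this entry optimizes (dwell time and boarding passengers) could be shaped as below; the weights and functional form are assumptions for illustration, not the paper's design.

```python
def reward(dwell_time_s, passengers_left, w_time=0.01, w_left=0.1):
    # Penalize long dwell times and passengers unable to board; a DQN agent
    # maximizes this, trading the two terms off against each other.
    return -(w_time * dwell_time_s + w_left * passengers_left)

r = reward(dwell_time_s=30, passengers_left=5)
```

Longer dwell times let more passengers board (fewer left behind) but delay the line, so the agent must balance the two penalty terms per station.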

A Study on Application of Reinforcement Learning Algorithm Using Pixel Data (픽셀 데이터를 이용한 강화 학습 알고리즘 적용에 관한 연구)

  • Moon, Saemaro;Choi, Yonglak
    • Journal of Information Technology Services / v.15 no.4 / pp.85-95 / 2016
  • Recently, deep learning and machine learning have attracted considerable attention, and many supporting frameworks have appeared. In the artificial intelligence field, a large body of research is underway to apply the relevant knowledge to complex problem-solving, necessitating the application of various learning algorithms and training methods to artificial intelligence systems. In addition, there is a dearth of performance evaluation of decision-making agents. The decision-making agent designed through this research, which can find optimal solutions by using reinforcement learning methods, collects raw pixel data observed from dynamic environments and makes decisions by itself based on that data. The agent uses convolutional neural networks to classify the situations it confronts, and the data observed from the environment undergoes preprocessing before being used. This research describes how the convolutional neural networks and the decision-making agent are configured, analyzes learning performance through a value-based algorithm and a policy-based algorithm (Deep Q-Networks and Policy Gradient), sets forth their differences, and demonstrates how the convolutional neural networks affect overall learning performance when using pixel data. This research is expected to contribute to the improvement of artificial intelligence systems that can efficiently find optimal solutions by using features extracted from raw pixel data.
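The preprocessing step this entry mentions before feeding frames to the convolutional network typically means grayscale conversion and downsampling; a pure-Python sketch (a real pipeline would use NumPy or OpenCV, and the exact steps here are assumptions):

```python
def to_gray(frame):
    # frame: H x W x 3 nested lists (RGB); luminance-weighted grayscale.
    return [[0.299 * p[0] + 0.587 * p[1] + 0.114 * p[2] for p in row] for row in frame]

def downsample(gray, factor=2):
    # Keep every `factor`-th row and column.
    return [row[::factor] for row in gray[::factor]]

# A 2x2 RGB frame reduced to a single grayscale pixel.
frame = [[[255, 255, 255], [0, 0, 0]],
         [[0, 0, 0], [255, 255, 255]]]
small = downsample(to_gray(frame))
```

Shrinking and de-colorizing frames this way keeps the network input small while preserving the spatial structure the convolutional layers exploit.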

Reinforcement Learning based on Deep Deterministic Policy Gradient for Roll Control of Underwater Vehicle (수중운동체의 롤 제어를 위한 Deep Deterministic Policy Gradient 기반 강화학습)

  • Kim, Su Yong;Hwang, Yeon Geol;Moon, Sung Woong
    • Journal of the Korea Institute of Military Science and Technology / v.24 no.5 / pp.558-568 / 2021
  • Existing underwater vehicle controller designs linearize the nonlinear dynamics model around a specific motion regime. Since a linear controller shows unstable control performance in transient states, various studies have been conducted to overcome this problem. Recently, studies have used reinforcement learning to improve control performance in the transient state. Reinforcement learning can be broadly divided into value-based and policy-based reinforcement learning. In this paper, we propose a roll controller for an underwater vehicle based on Deep Deterministic Policy Gradient (DDPG) that learns the control policy and shows stable control performance in various situations and environments. The performance of the proposed DDPG-based roll controller was verified through simulation and compared with existing roll controllers based on PID and on DQN with Normalized Advantage Functions.
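The value-based versus policy-based split this entry draws can be illustrated by how each family emits an action: DDPG outputs a bounded continuous command directly, while DQN picks an argmax over a discrete set. The tiny linear "networks" below are stand-ins, not the paper's controller.

```python
import math

def ddpg_action(state, weights, max_torque=1.0):
    # Policy-based: a deterministic continuous action, bounded by tanh squashing.
    raw = sum(w * s for w, s in zip(weights, state))
    return max_torque * math.tanh(raw)

def dqn_action(state, q_weights_per_action):
    # Value-based: argmax over a discretized action set.
    q_values = [sum(w * s for w, s in zip(w_a, state)) for w_a in q_weights_per_action]
    return max(range(len(q_values)), key=q_values.__getitem__)

roll_cmd = ddpg_action(state=[1.0], weights=[0.5])                       # continuous torque
discrete_cmd = dqn_action(state=[1.0], q_weights_per_action=[[0.1], [0.9], [0.3]])
```

Continuous output is what makes DDPG a natural fit for a torque-valued roll command, whereas plain DQN must discretize it (NAF, as compared in the paper, is one way around that limitation).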