• Title/Summary/Keyword: Deep Reinforcement Learning

Performance Comparison of Deep Reinforcement Learning based Computation Offloading in MEC (MEC 환경에서 심층 강화학습을 이용한 오프로딩 기법의 성능비교)

  • Moon, Sungwon;Lim, Yujin
    • Proceedings of the Korea Information Processing Society Conference / 2022.05a / pp.52-55 / 2022
  • With the exponential growth of smart mobile devices in the 5G era, multi-access edge computing (MEC) has emerged as a promising technology. Offloading to MEC servers to provide computation-intensive services within low latency has been studied; in particular, offloading in MEC system environments where the task arrival rate and wireless channel state are stochastic has attracted attention. In this paper, we propose deep reinforcement learning based offloading schemes that allocate computational resources for local execution and transmission power for offloading, in order to minimize the vehicle's power consumption and latency. We compare and analyze a Deep Deterministic Policy Gradient (DDPG) based scheme and a Deep Q-network (DQN) based scheme in terms of the vehicle's power consumption and queueing delay.
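
Since this abstract compares DQN- and DDPG-based offloading agents, a minimal sketch of the core difference may help: DQN selects from a discretized grid of transmit-power and CPU-frequency levels, while DDPG emits a continuous allocation directly. Everything below (value ranges, quantization levels, names) is an illustrative assumption, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretization used by the DQN-style agent.
POWER_LEVELS = np.linspace(0.1, 1.0, 4)   # W, assumed range
FREQ_LEVELS = np.linspace(0.5, 2.0, 4)    # GHz, assumed range

def dqn_act(q_values, eps=0.1):
    """Epsilon-greedy choice over the discrete (power, freq) grid."""
    if rng.random() < eps:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def ddpg_act(actor_out, noise_std=0.05):
    """Deterministic continuous action plus Gaussian exploration noise."""
    return np.clip(actor_out + rng.normal(0.0, noise_std, actor_out.shape), 0.0, 1.0)

# DQN: one Q-value per discrete (power, freq) pair.
q = rng.random(len(POWER_LEVELS) * len(FREQ_LEVELS))
idx = dqn_act(q)
power = POWER_LEVELS[idx // len(FREQ_LEVELS)]
freq = FREQ_LEVELS[idx % len(FREQ_LEVELS)]

# DDPG: actor-network output, here a placeholder vector in [0, 1].
cont = ddpg_act(np.array([0.6, 0.3]))
print(f"DQN  -> power={power:.2f} W, freq={freq:.2f} GHz")
print(f"DDPG -> normalized (power, freq)={cont.round(3)}")
```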

Grasping a Target Object in Clutter with an Anthropomorphic Robot Hand via RGB-D Vision Intelligence, Target Path Planning and Deep Reinforcement Learning (RGB-D 환경인식 시각 지능, 목표 사물 경로 탐색 및 심층 강화학습에 기반한 사람형 로봇손의 목표 사물 파지)

  • Ryu, Ga Hyeon;Oh, Ji-Heon;Jeong, Jin Gyun;Jung, Hwanseok;Lee, Jin Hyuk;Lopez, Patricio Rivera;Kim, Tae-Seong
    • KIPS Transactions on Software and Data Engineering / v.11 no.9 / pp.363-370 / 2022
  • Grasping a target object among clutter objects without collision requires machine intelligence, including environment recognition, target and obstacle recognition, collision-free path planning, and object grasping intelligence for robot hands. In this work, we implement such a system in simulation and hardware to grasp a target object without collision. We use an RGB-D image sensor to recognize the environment and objects. Various path-finding algorithms have been implemented and tested to find collision-free paths. Finally, for an anthropomorphic robot hand, object grasping intelligence is learned through deep reinforcement learning. In our simulation environment, grasping a target out of five clutter objects showed an average success rate of 78.8% and a collision rate of 34% without path planning, whereas our system combined with path planning showed an average success rate of 94% and an average collision rate of 20%. In our hardware environment, grasping a target out of three clutter objects showed an average success rate of 30% and a collision rate of 97% without path planning, whereas our system combined with path planning showed an average success rate of 90% and an average collision rate of 23%. Our results show that grasping a target object in clutter is feasible with vision intelligence, path planning, and deep reinforcement learning.
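
As one concrete instance of the collision-free path planning this abstract mentions, here is a minimal breadth-first search over an occupancy grid. The paper tested several path-finding algorithms; this sketch is only an illustration, and the grid and obstacle layout are assumed.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path avoiding cells marked 1 (obstacles)."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:
            path = []
            while cur is not None:   # walk parents back to the start
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cur
                q.append((nr, nc))
    return None  # no collision-free path exists

grid = [[0, 0, 0],
        [1, 1, 0],   # obstacle cells standing in for clutter objects
        [0, 0, 0]]
print(bfs_path(grid, (0, 0), (2, 0)))
```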

Research on Optimal Deployment of Sonobuoy for Autonomous Aerial Vehicles Using Virtual Environment and DDPG Algorithm (가상환경과 DDPG 알고리즘을 이용한 자율 비행체의 소노부이 최적 배치 연구)

  • Kim, Jong-In;Han, Min-Seok
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.15 no.2 / pp.152-163 / 2022
  • In this paper, we present a method that enables an unmanned aerial vehicle to drop sonobuoys, an essential element of anti-submarine warfare, in an optimal deployment. To this end, an environment simulating the distribution of sound detection performance was built with the Unity game engine, which communicated, via Unity ML-Agents, with a reinforcement learning algorithm written in Python running externally. In particular, reinforcement learning is introduced to prevent wrong actions from accumulating and degrading learning, and to secure the maximum detection area for the sonobuoys while the vehicle flies to the target point in the shortest time. The optimal placement of the sonobuoys was achieved by applying the Deep Deterministic Policy Gradient (DDPG) algorithm. As a result of the learning, the agent flew through the sea area and passed only the points that achieve the optimal placement among the 70 candidate targets. This means that an autonomous aerial vehicle satisfying the requirements for optimal placement, deploying sonobuoys in the shortest time with the maximum detection area, has been implemented.
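
A hedged sketch of how the paper's two stated requirements, maximum detection area and shortest flight time, could be combined into a single scalar reward for DDPG. The coverage radius, weights, and use of 70 candidate points are illustrative assumptions, not the paper's actual reward function.

```python
import numpy as np

def coverage_gain(drop_xy, covered, candidates, radius=5.0):
    """Candidate points newly covered by a buoy dropped at drop_xy."""
    d = np.linalg.norm(candidates - drop_xy, axis=1)
    return (d <= radius) & ~covered

def reward(drop_xy, covered, candidates, step_time, w_cov=1.0, w_time=0.01):
    """Reward new coverage, penalize elapsed flight time."""
    newly = coverage_gain(drop_xy, covered, candidates)
    covered |= newly                 # update the coverage mask in place
    return w_cov * newly.sum() - w_time * step_time

# 70 candidate points, mirroring the 70 targets mentioned above.
rng = np.random.default_rng(1)
candidates = rng.uniform(0, 100, size=(70, 2))
covered = np.zeros(70, dtype=bool)
print(reward(np.array([50.0, 50.0]), covered, candidates, step_time=3.0))
```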

Deep Reinforcement Learning for Visual Dialogue Agents (영상 기반 대화 에이전트를 위한 심층 강화 학습)

  • Cho, Yeongsu;Hwang, Jisu;Kim, Incheol
    • Proceedings of the Korea Information Processing Society Conference / 2018.05a / pp.412-415 / 2018
  • This paper introduces GuessWhat+, a new game environment that addresses limitations of the existing GuessWhat?! environment for vision-based dialogue research. It also describes the design and implementation of MRRB, a policy gradient-based deep reinforcement learning algorithm for dialogue agents operating in this environment. Through various experiments, we demonstrate the positive effects of the proposed GuessWhat+ environment and the deep reinforcement learning algorithm.
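
MRRB itself is not described in enough detail here to reproduce, but since it is policy gradient-based, a generic REINFORCE-style update on a toy linear-softmax policy illustrates the family of methods involved. All dimensions, names, and the reward interpretation below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def act(theta, state):
    """Sample one of three discrete dialogue actions from a linear-softmax policy."""
    probs = softmax(state @ theta)
    return int(rng.choice(len(probs), p=probs)), probs

def reinforce_update(theta, state, action, ret, lr=0.1):
    """theta += lr * return * grad log pi(action | state)."""
    probs = softmax(state @ theta)
    grad = -np.outer(state, probs)      # -s * pi(b|s) for every action b
    grad[:, action] += state            # +s for the action actually taken
    return theta + lr * ret * grad

theta = np.zeros((4, 3))                # 4 state features, 3 actions
state = rng.normal(size=4)
a, _ = act(theta, state)
theta = reinforce_update(theta, state, a, ret=1.0)  # e.g., reward for a correct guess
```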

Cooperative Robot for Table Balancing Using Q-learning (테이블 균형맞춤 작업이 가능한 Q-학습 기반 협력로봇 개발)

  • Kim, Yewon;Kang, Bo-Yeong
    • The Journal of Korea Robotics Society / v.15 no.4 / pp.404-412 / 2020
  • Everyday human tasks often involve at least two people moving objects such as tables and beds, where the balance of the object changes based on each person's actions. However, many previous studies performed such tasks with robots alone, without factoring in human cooperation. In this paper, we therefore propose a cooperative robot for table balancing using Q-learning that enables cooperative work between a human and a robot. The proposed robot recognizes the human's action from camera images of the table's state and performs the corresponding table-balancing action without high-performance equipment. Human action classification uses deep learning, specifically AlexNet, and achieves an accuracy of 96.9% under 10-fold cross-validation. The Q-learning experiment was carried out over 2,000 episodes with 200 trials each. The overall results show that the Q-function converged stably within this number of episodes, and this stable convergence determined the Q-learning policies for the robot's actions. Video of the robot cooperating with a human on the table-balancing task using the proposed Q-learning can be found at http://ibot.knu.ac.kr/videocooperation.html.
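
The abstract describes classic tabular Q-learning run for 2,000 episodes of 200 trials, so a self-contained sketch of that update loop is shown below. The state discretization (table-tilt levels), toy dynamics, and reward are assumptions for illustration only, not the paper's setup.

```python
import numpy as np

N_STATES, N_ACTIONS = 5, 3          # assumed tilt levels and robot actions
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(3)

def step(s, a):
    """Toy dynamics: actions shift the tilt; state 2 is a level table."""
    s2 = max(0, min(N_STATES - 1, s + (a - 1)))
    r = 1.0 if s2 == 2 else -0.1
    return s2, r

for episode in range(2000):          # 2,000 episodes, as in the abstract
    s = int(rng.integers(N_STATES))
    for trial in range(200):         # 200 trials per episode
        if rng.random() < eps:
            a = int(rng.integers(N_ACTIONS))
        else:
            a = int(np.argmax(Q[s]))
        s2, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.round(2))                    # the converged table defines the policy
```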

Quadruped Robot for Walking on the Uneven Terrain and Object Detection using Deep Learning (딥러닝을 이용한 객체검출과 비평탄 지형 보행을 위한 4족 로봇)

  • Myeong Suk Pak;Seong Min Ha;Sang Hoon Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.5 / pp.237-242 / 2023
  • Research on high-performance walking robots is being actively conducted. Quadruped walking robots receive much attention for their excellent mobility and adaptability on uneven terrain, but their high cost makes them difficult to adopt and utilize. In this paper, to increase the utility of a low-cost quadruped robot by adding intelligent functions, we present a method that improves its ability to overcome uneven terrain by mounting an IMU and running reinforcement learning on an embedded board, and that automatically detects objects using a camera and deep learning. The robot has the legs of a quadruped mammal, each with three degrees of freedom. We train on complex terrain in a simulation environment with a designed 3D model and apply the result to the real robot. Applying this method, we confirmed that there was no significant difference in walking ability between flat and uneven terrain, and that the robot performed person detection in real time under limited experimental conditions.

Modeling and Simulation on One-vs-One Air Combat with Deep Reinforcement Learning (깊은강화학습 기반 1-vs-1 공중전 모델링 및 시뮬레이션)

  • Moon, Il-Chul;Jung, Minjae;Kim, Dongjun
    • Journal of the Korea Society for Simulation / v.29 no.1 / pp.39-46 / 2020
  • The utilization of artificial intelligence (AI) in engagement has been a key research topic in the defense field during the last decade. To pursue this utilization, it is imperative to build a realistic simulation so that an AI engagement agent can be trained in a synthetic, yet realistic, field. This paper is a case study of training an AI agent to operate with hardware realism in air-warfare dog-fighting. In particular, it models the pursuit of an opponent in a dog-fighting setting with gun-only engagement, in which the AI agent must decide on the pursuit style and intensity. We developed a realistic hardware simulator and trained the agent with reinforcement learning. Our training was successful, yielding a lead pursuit with decreased engagement time and high reward.
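
A heavily hedged sketch of a pursuit-style reward, illustrating how "pursuit with decreased engagement time" might be encoded: alignment of the own-ship nose with the line of sight to the target, minus a per-step time penalty. This simplification is closer to pure pursuit than true lead pursuit, and the geometry and weights are assumptions, not the paper's reward.

```python
import numpy as np

def pursuit_reward(own_pos, own_heading, tgt_pos, dt, w_aim=1.0, w_time=0.05):
    """Reward nose-on-target alignment; penalize elapsed engagement time.

    own_heading is assumed to be a unit vector.
    """
    los = tgt_pos - own_pos
    los = los / (np.linalg.norm(los) + 1e-8)   # unit line-of-sight vector
    aim = float(own_heading @ los)             # cos(angle off the nose)
    return w_aim * aim - w_time * dt

r = pursuit_reward(np.zeros(2), np.array([1.0, 0.0]),
                   np.array([100.0, 10.0]), dt=0.1)
print(round(r, 3))
```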

Reinforcement Learning for Minimizing Tardiness and Set-Up Change in Parallel Machine Scheduling Problems for Profile Shops in Shipyard (조선소 병렬 기계 공정에서의 납기 지연 및 셋업 변경 최소화를 위한 강화학습 기반의 생산라인 투입순서 결정)

  • So-Hyun Nam;Young-In Cho;Jong Hun Woo
    • Journal of the Society of Naval Architects of Korea / v.60 no.3 / pp.202-211 / 2023
  • The profile shops in shipyards produce the section steels required for block production of ships. Due to the limits of shipyards' production capacity, a considerable amount of this work is already outsourced, and the need to improve the productivity of the profile shops is growing because production volume is expected to increase with the recent boom in the shipbuilding industry. In this study, scheduling optimization was conducted for a parallel welding line of the profile process, with the objectives of minimizing tardiness and the number of set-up changes. In particular, a dynamic scheduling method was applied to determine the job sequence while considering the variability of processing times. A Markov decision process model was proposed for the job sequencing problem, taking into account the trade-off between the two objectives, and deep reinforcement learning was used to learn the optimal scheduling policy. The developed algorithm was evaluated by comparing its performance with priority rules (SSPT, ATCS, MDD, and COVERT) in test scenarios constructed from sampled data. The proposed scheduling algorithm outperformed the priority rules in terms of set-up ratio, tardiness, and makespan.
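
Of the four baseline priority rules named above, MDD (Modified Due Date) is easy to state compactly: pick the queued job minimizing max(due date, now + processing time). The sketch below shows that rule on hypothetical jobs; the Job fields are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    processing_time: float
    due_date: float

def mdd_pick(queue, now):
    """MDD rule: minimize max(due date, now + processing time)."""
    return min(queue, key=lambda j: max(j.due_date, now + j.processing_time))

queue = [Job("A", 4.0, 10.0), Job("B", 2.0, 5.0), Job("C", 6.0, 8.0)]
print(mdd_pick(queue, now=3.0).name)   # -> "B" (modified due date 5.0)
```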

Cooperative Multi-Agent Reinforcement Learning-Based Behavior Control of Grid Sortation Systems in Smart Factory (스마트 팩토리에서 그리드 분류 시스템의 협력적 다중 에이전트 강화 학습 기반 행동 제어)

  • Choi, HoBin;Kim, JuBong;Hwang, GyuYoung;Kim, KwiHoon;Hong, YongGeun;Han, YounHee
    • KIPS Transactions on Computer and Communication Systems / v.9 no.8 / pp.171-180 / 2020
  • A smart factory consists of digital automation solutions throughout the production process, including design, development, manufacturing, and distribution; it is an intelligent factory that installs IoT in its internal facilities and machines to collect and analyze process data in real time so that it can control itself. A smart factory's equipment operates as a physical combination of numerous hardware components, rather than as a single virtual character driven by one object, as in a game. In other words, for a specific common goal, multiple devices must perform individual actions simultaneously. By taking advantage of the smart factory's ability to collect process data in real time, reinforcement learning can be used instead of conventional machine learning to perform behavior control without pre-collected training data. In the real world, however, it is impossible to run tens of millions of learning iterations because of physical wear and time. This paper therefore uses simulators to develop grid sortation systems focused on transport facilities, one of the complex environments in the smart factory field, and designs cooperative multi-agent-based reinforcement learning to demonstrate efficient behavior control.
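
A minimal sketch of the cooperative setup this abstract describes: several independent tabular learners each update from one shared team reward, so every agent is driven by the system-level goal rather than an individual one. The sizes, toy dynamics, and "coordinated move" reward are assumptions, not the paper's design.

```python
import numpy as np

N_AGENTS, N_STATES, N_ACTIONS = 4, 8, 3
Q = np.zeros((N_AGENTS, N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(4)

def env_step(states, actions):
    """Toy transport dynamics returning one shared reward for the team."""
    next_states = (states + actions) % N_STATES
    team_reward = float(np.all(actions == 1))   # reward only a coordinated move
    return next_states, team_reward

states = rng.integers(N_STATES, size=N_AGENTS)
for t in range(10_000):
    greedy = Q[np.arange(N_AGENTS), states].argmax(axis=1)
    explore = rng.integers(N_ACTIONS, size=N_AGENTS)
    actions = np.where(rng.random(N_AGENTS) < eps, explore, greedy)
    nxt, r = env_step(states, actions)
    for i in range(N_AGENTS):       # every agent learns from the shared reward
        td = r + gamma * Q[i, nxt[i]].max() - Q[i, states[i], actions[i]]
        Q[i, states[i], actions[i]] += alpha * td
    states = nxt
```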

Reinforcement learning model for water distribution system design (상수도관망 설계에의 강화학습 적용방안 연구)

  • Jaehyun Kim;Donghwi Jung
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.229-229 / 2023
  • Reinforcement learning is a machine learning method in which an agent learns the optimal actions that maximize reward by interacting with its environment and changing its state. It has recently been widely used not only in games, as with AlphaGo, but also in autonomous driving, robot control, and other fields. In the water distribution network domain, it has been applied to pump operation, valve operation, and optimal sensor placement, but no previous study has applied reinforcement learning to design. For design, the algorithm's search space grows as the network becomes larger, which limits the use of conventional optimization algorithms. This study therefore proposes a design methodology that uses reinforcement learning to account for the complex interactions between the components of a water distribution network and environmental factors. The agent is built with deep reinforcement learning, which resolves the high-dimensionality problem arising from the large state and action spaces. The model's state and reward consider nodal pressures, demands, and design cost, so that it designs an economical network that supplies water of adequate quantity and pressure. The model's action, mirroring how an engineer designs, determines node by node whether to connect each node to another, thereby deciding the layout and pipe diameters of the network. The proposed methodology was validated on a large grid network, and despite the large number of variables to consider, it designed a network that met the objectives. During training, changes in the average episode length and reward were compared to evaluate and complement the model's learning ability. In the future, we expect reinforcement learning models to enable designs that also account for system performance such as reliability or resilience.
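
The action scheme described above, deciding node by node whether and where to connect, can be sketched compactly. The random choice below stands in for the learned policy, and the node count and edge representation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
N_NODES = 9                          # e.g., a small 3x3 grid network

def action_space(node, built_edges):
    """Connect `node` to any earlier node, or take no action (None)."""
    return [None] + [k for k in range(node) if (k, node) not in built_edges]

edges = set()
for node in range(1, N_NODES):       # visit nodes one at a time, like a designer
    options = action_space(node, edges)
    target = options[rng.integers(len(options))]  # stand-in for the learned policy
    if target is not None:
        edges.add((target, node))

print(sorted(edges))                 # one sampled layout
```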
