• Title/Abstract/Keywords: DDPG Algorithm

7 search results

Deep Deterministic Policy Gradient 알고리즘을 응용한 자전거의 자율 주행 제어 (Autonomous control of bicycle using Deep Deterministic Policy Gradient Algorithm)

  • 최승윤;레 팜 투옌;정태충
    • 융합보안논문지 / Vol. 18, No. 3 / pp. 3-9 / 2018
  • The DDPG (Deep Deterministic Policy Gradient) algorithm learns using artificial neural networks and reinforcement learning. Among the actively studied reinforcement learning methods, DDPG has the advantage that, because it learns off-policy, it prevents accumulated erroneous actions from affecting learning. In this study, we applied the DDPG algorithm to control a bicycle so that it rides autonomously. Simulations were run under various environment settings, and the experiments showed that the method operates stably in simulation.
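As a minimal illustration of the DDPG machinery this abstract refers to (a sketch, not the authors' code), off-policy actor-critic learning is typically stabilized by blending the learned network parameters slowly into separate target networks with a Polyak soft update:

```python
import numpy as np

# Polyak soft update used by DDPG: theta_target <- tau*theta + (1 - tau)*theta_target.
# Parameters are represented here as plain arrays for illustration.
def soft_update(target_params, source_params, tau=0.005):
    return [(1 - tau) * t + tau * s for t, s in zip(target_params, source_params)]

target = [np.zeros(3)]   # slowly moving target-network weights
source = [np.ones(3)]    # freshly trained network weights
updated = soft_update(target, source, tau=0.1)
```

Because tau is small, the targets drift slowly toward the trained networks, which keeps the bootstrapped value targets stable during off-policy updates.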


스마트 TMD 제어를 위한 강화학습 알고리즘 성능 검토 (Performance Evaluation of Reinforcement Learning Algorithm for Control of Smart TMD)

  • 강주원;김현수
    • 한국공간구조학회논문집 / Vol. 21, No. 2 / pp. 41-48 / 2021
  • A smart tuned mass damper (TMD) is widely studied for seismic response reduction of various structures. The control algorithm is the most important factor in the control performance of a smart TMD. This study used the Deep Deterministic Policy Gradient (DDPG) method, a reinforcement learning technique, to develop a control algorithm for a smart TMD. A magnetorheological (MR) damper was used to make the smart TMD. A single-mass model with the smart TMD was employed as the reinforcement learning environment. Time history analysis simulations of the example structure subjected to artificial seismic load were performed during the reinforcement learning process. An actor (policy) network and a critic (value) network were constructed for the DDPG agent. The action of the DDPG agent was the command voltage sent to the MR damper, and the reward was calculated from the displacement and velocity responses of the main mass. A groundhook control algorithm was used for comparison. After 10,000 episodes of training the DDPG agent with proper hyper-parameters, a semi-active control algorithm for the seismic responses of the example structure with the smart TMD was obtained. The simulation results show that the developed DDPG model provides an effective control algorithm for the smart TMD to reduce seismic responses.
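The abstract computes the agent's reward from the displacement and velocity of the main mass. A hedged sketch of such a shaping term might look like the following; the weights `w_d` and `w_v` are illustrative assumptions, not the paper's values:

```python
# Hypothetical reward for the smart-TMD agent: larger structural responses of
# the main mass produce a more negative reward (weights are illustrative).
def tmd_reward(displacement, velocity, w_d=1.0, w_v=0.1):
    return -(w_d * abs(displacement) + w_v * abs(velocity))
```

With a penalty of this shape, the agent's best strategy is a command voltage that drives both responses toward zero.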

저가 Redundant Manipulator의 최적 경로 생성을 위한 Deep Deterministic Policy Gradient(DDPG) 학습 (Learning Optimal Trajectory Generation for Low-Cost Redundant Manipulator using Deep Deterministic Policy Gradient(DDPG))

  • 이승현;진성호;황성현;이인호
    • 로봇학회논문지 / Vol. 17, No. 1 / pp. 58-67 / 2022
  • In this paper, we propose an approach that resolves workspace inaccuracy in low-cost redundant manipulators built with low-resolution encoders and low-stiffness links. When manipulators are manufactured with low-cost encoders and links, the robots can run into workspace inaccuracy issues, and trajectory generation based on conventional forward/inverse kinematics that ignores these issues risks end-effector fluctuations. Hence, we propose an optimized trajectory generation method based on the DDPG (Deep Deterministic Policy Gradient) algorithm for low-cost redundant manipulators reaching a target position in Euclidean space. We designed the DDPG reward to minimize the distance to the target along with the Jacobian condition number. The training environment is a simulator with real-world physics in which joint-space error rates are randomly generated; the test environment is a real robotic experiment that demonstrates our approach.
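The stated objective, minimizing target distance together with the Jacobian condition number, could be sketched as a reward of roughly this form; `np.linalg.cond` and the weights are our illustrative choices, not necessarily the authors' exact formulation:

```python
import numpy as np

# Hypothetical cost combining distance to the target with the condition number
# of the manipulator Jacobian: well-conditioned poses are rewarded.
def trajectory_reward(distance, jacobian, w_dist=1.0, w_cond=0.01):
    return -(w_dist * distance + w_cond * np.linalg.cond(jacobian))

J_well = np.eye(2)            # condition number 1: well-conditioned pose
J_ill = np.diag([1.0, 1e-3])  # condition number ~1000: near-singular pose
```

At equal distance, the agent prefers the well-conditioned pose, steering trajectories away from configurations where small joint errors blow up at the end effector.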

DDPG 알고리즘을 이용한 양팔 매니퓰레이터의 협동작업 경로상의 특이점 회피 경로 계획 (Singularity Avoidance Path Planning on Cooperative Task of Dual Manipulator Using DDPG Algorithm)

  • 이종학;김경수;김윤재;이장명
    • 로봇학회논문지 / Vol. 16, No. 2 / pp. 137-146 / 2021
  • When controlling a manipulator, a degree of freedom is lost at a singularity, so certain joint velocities do not propagate to the end effector; moreover, a control problem occurs because the Jacobian inverse cannot be calculated. To avoid singularities, we apply the Deep Deterministic Policy Gradient (DDPG) algorithm, a reinforcement learning method that rewards behavior according to actions and then selects high-reward actions in simulation. DDPG is off-policy: an ε-greedy policy selects the action of the current time step, while a greedy policy is used for the next step. In the simulation, learning is driven by a negative reward when moving near a singularity and a positive reward when moving away from the singularity and toward the target point. The reward equation consists of the distances to the target point and to the singularity, the manipulability, and an arrival flag. The dual-arm manipulators jointly hold a long rod and, in experiments, follow the simulated path to avoid the singularity. If, in future work, the object to be avoided is set as a region rather than a point, avoidance of obstacles should also become possible.
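The manipulability term in such a reward is conventionally Yoshikawa's measure w(q) = sqrt(det(J Jᵀ)), which drops to zero as the arm approaches a singularity. A small sketch, assuming this standard form rather than the paper's exact expression:

```python
import numpy as np

# Yoshikawa's manipulability measure: zero at a singularity, larger when the
# end effector can move freely in all directions.
def manipulability(J):
    # clamp tiny negative determinants from floating-point error before the sqrt
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))

J_regular = np.array([[1.0, 0.0], [0.0, 1.0]])   # full-rank Jacobian
J_singular = np.array([[1.0, 1.0], [1.0, 1.0]])  # rank-deficient: singular pose
```

Rewarding high manipulability gives the agent a smooth signal that grows as it moves away from singular configurations, rather than only a binary penalty at the singularity itself.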

가상환경과 DDPG 알고리즘을 이용한 자율 비행체의 소노부이 최적 배치 연구 (Research on Optimal Deployment of Sonobuoy for Autonomous Aerial Vehicles Using Virtual Environment and DDPG Algorithm)

  • 김종인;한민석
    • 한국정보전자통신기술학회논문지 / Vol. 15, No. 2 / pp. 152-163 / 2022
  • This paper presents a method that enables an unmanned aerial vehicle to drop sonobuoys, an essential element of anti-submarine warfare, in an optimal deployment. An environment simulating an acoustic detection performance distribution was built with the Unity game engine, and Unity ML-Agents allowed this environment to train by exchanging messages with an external reinforcement learning algorithm written in Python. In particular, reinforcement learning was introduced to prevent accumulated erroneous actions from affecting learning and to make the vehicle fly to the target point in the shortest time while the sonobuoys secure the maximum detection area; the Deep Deterministic Policy Gradient (DDPG) algorithm was applied to achieve the optimal sonobuoy deployment. As a result of training, the agent flew over the sea area passing only through those of the 70 candidate target points required for the optimal deployment, and the secured detection coverage shows that it flew along the shortest path with no overlapping areas. This means we implemented an autonomous vehicle that deploys sonobuoys with the shortest time and maximum detection area required for an optimal deployment.
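The two optimality criteria described, shortest flight time and maximum non-overlapping detection area, suggest a per-step reward of roughly the following shape. All names and weights here are hypothetical illustrations, not taken from the paper:

```python
# Hypothetical per-drop reward for the sonobuoy agent: reward newly covered
# detection area, penalize overlap with already-covered area, and apply a
# time penalty so shorter flights score higher (all weights illustrative).
def deployment_reward(new_area, overlap_area, flight_time, w_time=0.01):
    return new_area - overlap_area - w_time * flight_time
```

Under a reward of this shape, the highest-scoring policy drops buoys along a short path whose detection circles tile the area without overlapping, matching the behavior the abstract reports.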

Controller Learning Method of Self-driving Bicycle Using State-of-the-art Deep Reinforcement Learning Algorithms

  • Choi, Seung-Yoon;Le, Tuyen Pham;Chung, Tae-Choong
    • 한국컴퓨터정보학회논문지 / Vol. 23, No. 10 / pp. 23-31 / 2018
  • Recently, there have been many studies on machine learning, and among them reinforcement learning is being actively studied. In this study, we propose a controller for a bicycle using the DDPG (Deep Deterministic Policy Gradient) algorithm, a state-of-the-art deep reinforcement learning method. We redefine the reward function of the bicycle dynamics and the neural networks used to train the agent. When the proposed method is used for learning and control, the bicycle can be kept from falling over and can reach farther given destinations, which the existing method cannot do. For the performance evaluation, we verified that the proposed algorithm works in various environments, such as fixed or random speed and with or without a given target point. Finally, the results confirm that the proposed algorithm performs better than the conventional deep reinforcement learning algorithms NAF and PPO.
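Off-policy learners like the DDPG controller described here train from stored transitions rather than only the latest rollout. A minimal replay buffer sketch, not the authors' implementation:

```python
import random
from collections import deque

# Minimal experience replay buffer of the kind DDPG samples minibatches from.
class ReplayBuffer:
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions evicted first

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)  # uniform random minibatch

buf = ReplayBuffer(capacity=1000)
for step in range(100):                      # toy transitions for illustration
    buf.push(step, 0.0, 1.0, step + 1, False)
batch = buf.sample(32)
```

Sampling uniformly from old and new experience decorrelates consecutive transitions, which is part of why DDPG can reuse data that an on-policy method would discard.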

심층 결정론적 정책 경사법을 이용한 선박 충돌 회피 경로 결정 (Determination of Ship Collision Avoidance Path using Deep Deterministic Policy Gradient Algorithm)

  • 김동함;이성욱;남종호;요시타카 후루카와
    • 대한조선학회논문집 / Vol. 56, No. 1 / pp. 58-65 / 2019
  • The stability, reliability and efficiency of a smart ship are important issues as interest in autonomous ships has recently grown. An automatic collision avoidance system is an essential function of an autonomous ship; it detects the possibility of collision and automatically takes avoidance actions in consideration of economy and safety. In order to construct an automatic collision avoidance system using reinforcement learning, in this work the sequential decision problem of ship collision is mathematically formulated as a Markov Decision Process (MDP). A reinforcement learning environment is constructed based on the ship maneuvering equations, and the three key components of the MDP (state, action, and reward) are defined. The state uses parameters of the relationship between own-ship and target-ship, the action is the vertical distance away from the target course, and the reward is defined as a function considering safety and economics. To solve the sequential decision problem, the Deep Deterministic Policy Gradient (DDPG) algorithm, which can express a continuous action space and search for an optimal action policy, is utilized. The collision avoidance system is then tested assuming a $90^{\circ}$ crossing encounter situation and yields a satisfactory result.
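A reward "considering safety and economics" could, for illustration only, combine a penalty for closing inside a safe passing distance with a penalty for deviating from the planned course; `safe_dist` and `w_econ` here are hypothetical placeholders, not the paper's formulation:

```python
# Hypothetical safety/economy trade-off for the collision-avoidance MDP.
def collision_reward(cpa_distance, path_deviation, safe_dist=1.0, w_econ=0.1):
    # safety: penalize a closest-point-of-approach inside the safe distance
    safety = -max(0.0, safe_dist - cpa_distance)
    # economy: penalize deviating from the planned course
    economy = -w_econ * abs(path_deviation)
    return safety + economy
```

The two terms pull in opposite directions: deviating further from the course improves the safety term but worsens the economy term, so the learned policy settles on the smallest deviation that still passes safely.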