• Title/Abstract/Keywords: soft actor-critic

Search results: 6

Mapless Navigation with Distributional Reinforcement Learning

  • 짠 반 마잉;김곤우
    • Journal of Korea Robotics Society / Vol. 19, No. 1 / pp. 92-97 / 2024
  • This paper studies a distributional perspective on reinforcement learning for application to mobile robot navigation. Mapless navigation algorithms based on deep reinforcement learning have demonstrated promising performance and high applicability. Trial-and-error training is typically carried out in virtual environments because real-world interactions are expensive. Nevertheless, applying a deep reinforcement learning model to real tasks is challenging because the data collected in virtual simulation differ from those of the physical world, leading to high-risk behavior and high collision rates. In this paper, we present a distributional reinforcement learning architecture for mapless navigation of a mobile robot that adapts to the uncertainty of environmental change. The experimental results indicate the superior performance of distributional soft actor-critic compared to conventional methods.
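
  A distributional critic of the kind described above models a return distribution rather than a scalar value; one standard way to train such a critic is quantile regression. The following is a minimal illustrative sketch of a quantile Huber loss as used by distributional critics (e.g., QR-DQN-style heads in distributional SAC variants), not the paper's implementation:

  ```python
  import numpy as np

  def quantile_huber_loss(pred_quantiles, target_samples, kappa=1.0):
      """Quantile-regression Huber loss: fits predicted return quantiles
      to sampled Bellman targets, learning a return distribution."""
      n = len(pred_quantiles)
      taus = (np.arange(n) + 0.5) / n                        # quantile midpoints
      u = target_samples[None, :] - pred_quantiles[:, None]  # pairwise TD errors
      huber = np.where(np.abs(u) <= kappa,
                       0.5 * u ** 2,
                       kappa * (np.abs(u) - 0.5 * kappa))
      weight = np.abs(taus[:, None] - (u < 0.0))             # asymmetric quantile weight
      return float((weight * huber).mean())
  ```

  Minimizing this loss drives each predicted quantile toward the corresponding quantile of the target return distribution, which is what lets the critic represent environmental uncertainty instead of a single expected value.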

Deep reinforcement learning for a multi-objective operation in a nuclear power plant

  • Junyong Bae;Jae Min Kim;Seung Jun Lee
    • Nuclear Engineering and Technology / Vol. 55, No. 9 / pp. 3277-3290 / 2023
  • Nuclear power plant (NPP) operations with multiple objectives and devices are still performed manually by operators despite the potential for human error. These operations could be automated to reduce the burden on operators; however, classical approaches may not be suitable for such multi-objective tasks. An alternative is deep reinforcement learning (DRL), which has succeeded in automating various complex tasks and has been applied to automate certain operations in NPPs. Despite this progress, however, previous studies using DRL for NPP operations have been limited in their ability to handle complex multi-objective operations with multiple devices efficiently. This study proposes a novel DRL-based approach that addresses these limitations by employing a continuous action space and straightforward binary rewards, supported by the adoption of soft actor-critic and hindsight experience replay. The feasibility of the proposed approach was evaluated on controlling the pressure and volume of the reactor coolant while heating the coolant during NPP startup. The results show that the proposed approach can train an agent with a proper strategy for effectively achieving multiple objectives through the control of multiple devices. Moreover, hands-on testing demonstrates that the trained agent can handle untrained objectives, such as cooldown, with substantial success.
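
  Hindsight experience replay is what makes the straightforward binary rewards workable: failed transitions are relabeled with goals that were actually achieved later in the episode, so sparse failures still produce learning signal. A hypothetical minimal sketch with toy one-dimensional goals (the 'future' relabeling strategy; not the paper's code):

  ```python
  import random

  def binary_reward(achieved, goal, tol=0.05):
      """Straightforward binary reward: success iff the achieved value is near the goal."""
      return 0.0 if abs(achieved - goal) <= tol else -1.0

  def her_relabel(episode, k=2, seed=0):
      """HER 'future' strategy: store each transition once with the original
      goal, plus k copies relabeled with goals achieved later in the episode."""
      rng = random.Random(seed)
      out = []
      for t, (s, a, s2, achieved, goal) in enumerate(episode):
          out.append((s, a, s2, goal, binary_reward(achieved, goal)))
          future = episode[t:]
          for _ in range(k):
              # pretend a later achieved state was the goal all along
              hindsight_goal = rng.choice(future)[3]
              out.append((s, a, s2, hindsight_goal,
                          binary_reward(achieved, hindsight_goal)))
      return out
  ```

  The relabeled copies contain successes by construction, which is why a binary reward that would otherwise be almost always negative can still train the critic.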

Strategy to coordinate actions through a plant parameter prediction model during startup operation of a nuclear power plant

  • Jae Min Kim;Junyong Bae;Seung Jun Lee
    • Nuclear Engineering and Technology / Vol. 55, No. 3 / pp. 839-849 / 2023
  • The development of automation technology to reduce human error by minimizing human intervention is accelerating with artificial intelligence and big-data processing technology, even in the nuclear field. Among nuclear power plant operation modes, startup and shutdown operations are still performed manually and thus carry the potential for human error. As part of the development of an autonomous operation system for startup operation, this paper proposes an action coordination strategy for obtaining optimal actions. The lower level of the system consists of operating blocks, created by analyzing the operation tasks, that achieve local goals through soft actor-critic algorithms. However, when multiple agents try to perform conflicting actions, a method is needed to coordinate them; for this, an action coordination strategy was developed in this work as the upper level of the system. Three quantification methods were compared and evaluated based on the future plant states predicted by plant parameter prediction models built on long short-term memory networks. The results confirmed that the optimal action satisfying the limiting conditions for operation can be selected by coordinating the action sets. It is expected that this methodology can be generalized in future research.
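
  The coordination idea above — score each candidate action set by the plant states a prediction model forecasts, then pick the set that best respects the limiting conditions for operation (LCO) — can be sketched as follows. The prediction model here is a stand-in stub, not the paper's LSTM networks, and the margin-based score is only one of several plausible quantification methods:

  ```python
  def lco_margin(trajectory, limits):
      """Smallest distance of any predicted parameter value to its LCO bound;
      a negative margin means a predicted violation."""
      return min(min(hi - v, v - lo)
                 for step in trajectory
                 for v, (lo, hi) in zip(step, limits))

  def coordinate(candidates, predict, limits):
      """Pick the action set whose predicted future states keep the
      largest margin to the limiting conditions for operation."""
      return max(candidates, key=lambda acts: lco_margin(predict(acts), limits))
  ```

  For example, with one bounded plant parameter and a linear stub predictor, the action set whose forecast drifts least toward a bound is selected:

  ```python
  limits = [(0.0, 10.0)]                                      # one parameter's LCO bounds
  predict = lambda acts: [[5.0 + acts[0] * t] for t in range(3)]  # stub, not an LSTM
  best = coordinate([(2.0,), (0.5,), (-3.0,)], predict, limits)   # -> (0.5,)
  ```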

Visual Object Manipulation Based on Exploration Guided by Demonstration

  • 김두준;조현준;송재복
    • Journal of Korea Robotics Society / Vol. 17, No. 1 / pp. 40-47 / 2022
  • A reward function suited to the task is required to manipulate objects through reinforcement learning. However, designing such a reward function is difficult when ample information about the objects cannot be obtained. In this study, a demonstration-based object manipulation algorithm called stochastic exploration guided by demonstration (SEGD) is proposed to sidestep this reward-design problem. SEGD is a reinforcement learning algorithm in which a sparse reward explorer (SRE) and an interpolated policy using demonstration (IPD) are added to soft actor-critic (SAC). SRE secures the training of SAC's critic by collecting prior data, and IPD limits the exploration space by keeping SEGD's actions close to the expert's. Through these two components, SEGD can learn from only the sparse task reward, without a hand-designed reward function. To verify SEGD, experiments were conducted on three tasks, in which SEGD demonstrated its effectiveness with success rates above 96.5%.
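
  The IPD idea — keeping the learner's actions close to the expert's — can be illustrated by a simple interpolation between the policy action and the demonstrated action, with the expert's weight annealed over training. This is a hypothetical sketch; the linear annealing schedule is an assumption, not taken from the paper:

  ```python
  def interpolated_action(policy_action, expert_action, step, anneal_steps=10_000):
      """Blend policy and demonstration actions: rely on the expert early in
      training, then gradually hand control to the learned policy."""
      beta = max(0.0, 1.0 - step / anneal_steps)  # expert weight, annealed to 0
      return [(1.0 - beta) * p + beta * e
              for p, e in zip(policy_action, expert_action)]
  ```

  Early on the executed action is essentially the expert's, which constrains exploration to the neighborhood of the demonstration; once `beta` reaches zero, the policy acts on its own.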

On the Reward Function of Latent SAC Reinforcement Learning to Improve Longitudinal Driving Performance

  • 조성빈;정한유
    • Journal of IKEEE / Vol. 25, No. 4 / pp. 728-734 / 2021
  • Interest in end-to-end autonomous driving using deep reinforcement learning has grown considerably in recent years. This paper presents a reward function for latent-SAC-based deep reinforcement learning that improves the longitudinal driving performance of a vehicle. Whereas existing reinforcement learning reward functions severely degrade driving safety and efficiency, we show that the proposed reward function maintains an appropriate inter-vehicle distance while avoiding the risk of collision with the vehicle ahead.
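
  The gap-keeping behavior described above can be illustrated with a toy longitudinal reward: a hard penalty inside a collision-risk zone, otherwise a tracking error toward a time-headway-based target gap. All constants here are illustrative assumptions, not the paper's tuned reward:

  ```python
  def longitudinal_reward(gap_m, ego_speed_mps, desired_headway_s=2.0, min_gap_m=5.0):
      """Toy longitudinal reward: strong penalty when dangerously close to the
      lead vehicle, otherwise negative error to a time-headway target gap."""
      if gap_m <= min_gap_m:
          return -10.0                          # collision risk: strong penalty
      target = ego_speed_mps * desired_headway_s
      return -abs(gap_m - target) / max(target, 1.0)
  ```

  Shaping the reward around a time headway (rather than a fixed distance) is what lets the same function encourage both safety at high speed and efficiency at low speed.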

Enhancing Smart Grid Efficiency through SAC Reinforcement Learning: Renewable Energy Integration and Optimal Demand Response in the CityLearn Environment

  • 이자노브 알리벡 러스타모비치;성승제;임창균
    • The Journal of the Korea Institute of Electronic Communication Sciences / Vol. 19, No. 1 / pp. 93-104 / 2024
  • Demand response encourages customers to adjust their consumption patterns during peak-demand hours to increase grid reliability and minimize costs. Integrating renewable energy sources into the smart grid poses significant challenges because of their intermittent and unpredictable nature. Demand-response strategies combined with reinforcement learning techniques have emerged as an approach that can address these problems and optimize grid operation where conventional methods fail to meet such complex requirements. This study focuses on finding and applying reinforcement learning algorithms for demand response with renewable energy integration. Its key objectives are to optimize demand-side flexibility, improve renewable energy utilization, and strengthen grid stability. The results show that reinforcement-learning-based demand-response strategies are effective in improving grid flexibility and facilitating renewable energy integration.
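
  A demand-response agent of the kind studied above typically optimizes a cost signal with an extra penalty for peak consumption. A minimal sketch of such a reward; the threshold and weighting are illustrative assumptions, not CityLearn's actual reward function:

  ```python
  def dr_reward(net_load_kw, price_per_kwh, peak_kw=5.0, peak_weight=0.5):
      """Toy demand-response reward: pay for energy drawn from the grid and
      add a penalty for consumption above a peak threshold; exported
      renewable surplus (negative net load) incurs no cost."""
      grid_draw = max(net_load_kw, 0.0)
      cost = price_per_kwh * grid_draw
      peak_penalty = peak_weight * max(grid_draw - peak_kw, 0.0)
      return -(cost + peak_penalty)
  ```

  Because the peak term only activates above the threshold, maximizing this reward pushes the agent to shift flexible load (e.g., battery charging) away from peak hours, which is the demand-side flexibility the study targets.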
  • 수요 반응은 전력망의 신뢰성을 높이고 비용을 최소화하기 위해 수요가 가장 많은 시간대에 고객이 소비패턴을 조정하도록 유도한다. 재생 에너지원을 스마트 그리드에 통합하는 것은 간헐적이고 예측할 수 없는 특성으로 인해 상당한 도전 과제를 안고 있다. 강화 학습 기법과 결합된 수요 대응 전략은 이러한 문제를 해결하고 기존 방식에서는 이러한 종류의 복잡한 요구 사항을 충족하지 못하는 경우 그리드 운영을 최적화할 수 있는 접근 방식으로 부상하고 있다. 본 연구는 재생 에너지 통합을 위한 수요 반응에 강화 학습 알고리즘을 적용하는 방법을 찾아 적용하는데 중점을 둔다. 연구의 핵심 목표는 수요 측 유연성을 최적화하고 재생 에너지 활용도를 개선할 뿐 아니라 그리드 안정성을 강화하고자 한다. 연구 결과는 강화 학습을 기반으로 한 수요 반응 전략이 그리드 유연성을 향상시키고 재생 에너지 통합을 촉진하는 데 효과적이라것을 보여준다.