• Title/Summary/Keyword: Reinforcement Learning (RL)

Approximate Dynamic Programming Strategies and Their Applicability for Process Control: A Review and Future Directions

  • Lee, Jong-Min;Lee, Jay H.
    • International Journal of Control, Automation, and Systems
    • /
    • v.2 no.3
    • /
    • pp.263-278
    • /
    • 2004
  • This paper reviews dynamic programming (DP), surveys approximate solution methods for it, and considers their applicability to process control problems. Reinforcement Learning (RL) and Neuro-Dynamic Programming (NDP), which can be viewed as approximate DP techniques, are well-established approaches for solving difficult multi-stage decision problems in the fields of operations research, computer science, and robotics. Owing to the significant disparity in problem formulations and objectives, however, the algorithms and techniques available from these fields are not directly applicable to process control problems, and reformulations based on an accurate understanding of these techniques are needed. We categorize the currently available approximate solution techniques for dynamic programming and identify those most suitable for process control problems. Several open issues are also identified and discussed.
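
For orientation, the exact DP recursion that such approximate methods try to emulate can be written in a few lines; the sketch below is a generic tabular value iteration (illustrative variable names, not code from the paper):

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Exact DP (value iteration) on a small tabular MDP.

    P: transition tensor, shape (S, A, S); R: reward matrix, shape (S, A).
    Approximate DP / RL methods replace the exact expectation below with
    sampled transitions and a parametric approximation of V.
    """
    S, A, _ = P.shape
    V = np.zeros(S)
    while True:
        # Bellman optimality backup: Q(s, a) = R(s, a) + gamma * E[V(s')]
        Q = R + gamma * P @ V              # shape (S, A)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)  # optimal values and greedy policy
        V = V_new
```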

Evaluating SR-Based Reinforcement Learning Algorithm Under the Highly Uncertain Decision Task (불확실성이 높은 의사결정 환경에서 SR 기반 강화학습 알고리즘의 성능 분석)

  • Kim, So Hyeon;Lee, Jee Hang
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.8
    • /
    • pp.331-338
    • /
    • 2022
  • Successor representation (SR) is a model of human reinforcement learning (RL) that mimics the mechanism by which hippocampal cells construct cognitive maps. SR uses these learned features to respond adaptively to frequent reward changes. In this paper, we evaluated the performance of SR in a context where changes in the latent variables of the environment trigger changes in the reward structure. As a benchmark test, we adopted SR-Dyna, an integration of SR into the goal-driven Dyna RL algorithm, on the 2-stage Markov Decision Task (MDT), in which the latent variables (state-transition uncertainty and goal condition) can be manipulated intentionally. To precisely investigate the characteristics of SR, we conducted the experiments while controlling each latent variable that affects the changes in reward structure. The evaluation results showed that SR-Dyna could learn to respond to reward changes associated with changes in the latent variables, but could not do so rapidly. This points to the need for more robust RL models that can quickly adapt to environments in which the latent variables and the reward structure change at the same time.
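
As background, the core SR idea described above can be illustrated with a standard TD-style update of the successor matrix; the sketch below is generic and not the SR-Dyna implementation evaluated in the paper, and its hyperparameters are illustrative assumptions:

```python
import numpy as np

def sr_td_update(M, w, s, s_next, r, alpha=0.1, alpha_w=0.1, gamma=0.95):
    """One TD-style update of the SR matrix M and reward weights w.

    M[s, j] estimates the discounted expected future occupancy of state j
    when starting from state s; state values follow as V = M @ w, so a
    change in rewards only requires relearning w, not M.
    """
    one_hot = np.zeros(M.shape[1])
    one_hot[s] = 1.0
    # SR prediction error: occupancy now plus discounted successor occupancies
    td_error = one_hot + gamma * M[s_next] - M[s]
    M[s] += alpha * td_error
    # Reward weights track the immediate reward observed in the next state
    w[s_next] += alpha_w * (r - w[s_next])
    return M, w, M @ w   # updated SR, reward weights, and state values
```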

A Study of Reinforcement Learning-based Cyber Attack Prediction using Network Attack Simulator (NASim) (네트워크 공격 시뮬레이터를 이용한 강화학습 기반 사이버 공격 예측 연구)

  • Bum-Sok Kim;Jung-Hyun Kim;Min-Suk Kim
    • Journal of the Semiconductor & Display Technology
    • /
    • v.22 no.3
    • /
    • pp.112-118
    • /
    • 2023
  • As technology advances, enhanced preparedness against cyber-attacks becomes increasingly critical. It is therefore imperative to consider various circumstances and to prepare strategic technologies for responding to cyber-attacks. This paper proposes a method for addressing network security problems by applying reinforcement learning to cyber-security. Traditional static cyber-security methods have difficulty responding effectively to modern dynamic attack patterns. To address this, we implement cyber-attack scenarios such as 'Tiny Alpha' and 'Small Alpha' and evaluate the performance of various reinforcement learning methods using the Network Attack Simulator, a cyber-attack simulation environment based on the Gymnasium (formerly OpenAI Gym) interface. We experimented with different RL algorithms, including value-based methods (Q-Learning, Deep Q-Network, and Double Deep Q-Network) and policy-based methods (Actor-Critic). We observed that the value-based methods with discrete action spaces consistently outperformed the policy-based methods with continuous action spaces, with a performance difference ranging from a minimum of 20.9% to a maximum of 53.2%. These results not only suggest opportunities for enhancing cyber-security strategies, but also indicate potential applications in cyber-security education and system validation across domains such as the military, government, and corporate sectors.
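
For reference, the value-based methods compared above differ mainly in how the next-state action is chosen and evaluated; the following sketch contrasts the DQN and Double DQN targets under assumed array inputs (not the paper's implementation):

```python
import numpy as np

def dqn_target(reward, q_target_next, gamma=0.99, done=False):
    # DQN: bootstrap from the target network's own maximum Q-value
    return reward + (0.0 if done else gamma * np.max(q_target_next))

def double_dqn_target(reward, q_online_next, q_target_next,
                      gamma=0.99, done=False):
    # Double DQN: the online network selects the action,
    # the target network evaluates it (reduces overestimation bias)
    a_star = int(np.argmax(q_online_next))
    return reward + (0.0 if done else gamma * q_target_next[a_star])
```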

Stealthy Behavior Simulations Based on Cognitive Data (인지 데이터 기반의 스텔스 행동 시뮬레이션)

  • Choi, Taeyeong;Na, Hyeon-Suk
    • Journal of Korea Game Society
    • /
    • v.16 no.2
    • /
    • pp.27-40
    • /
    • 2016
  • Predicting stealthy behaviors plays an important role in designing stealth games. It is, however, difficult to automate this task because human players interact with dynamic environments in real time. In this paper, we present a reinforcement learning (RL) method for simulating stealthy movements in dynamic environments, in which an integrated model of Q-learning with Artificial Neural Networks (ANN) is exploited as an action classifier. Experimental results show that our simulation agent responds sensitively to dynamic situations and is thus useful for game level designers in determining various game parameters.
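
To illustrate the kind of agent described above, the sketch below shows epsilon-greedy Q-learning with a function approximator acting as the action classifier; a linear approximator stands in for the paper's ANN, and all names and parameters are illustrative:

```python
import numpy as np

class ApproxQAgent:
    def __init__(self, n_features, n_actions, alpha=0.01, gamma=0.95, epsilon=0.1):
        self.w = np.zeros((n_actions, n_features))   # one weight vector per action
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def q_values(self, features):
        return self.w @ features                     # Q(s, a) for every action

    def act(self, features):
        # Epsilon-greedy selection plays the role of the "action classifier"
        if np.random.rand() < self.epsilon:
            return np.random.randint(len(self.w))
        return int(np.argmax(self.q_values(features)))

    def update(self, features, action, reward, next_features, done):
        # Semi-gradient Q-learning update on the chosen action's weights
        target = reward + (0.0 if done else self.gamma * np.max(self.q_values(next_features)))
        td_error = target - self.q_values(features)[action]
        self.w[action] += self.alpha * td_error * features
```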

Opportunistic Spectrum Access Based on a Constrained Multi-Armed Bandit Formulation

  • Ai, Jing;Abouzeid, Alhussein A.
    • Journal of Communications and Networks
    • /
    • v.11 no.2
    • /
    • pp.134-147
    • /
    • 2009
  • Tracking and exploiting instantaneous spectrum opportunities are fundamental challenges in opportunistic spectrum access (OSA) in the presence of bursty primary-user traffic and the limited spectrum-sensing capability of secondary users. To take advantage of the history of spectrum sensing and access decisions, a sequential decision framework is widely used to design optimal policies. However, many existing schemes, based on a partially observable Markov decision process (POMDP) framework, reveal that the optimal policies are non-stationary in nature, which renders them difficult to compute and implement. This work therefore pursues stationary OSA policies, which are efficient yet low-complexity, while still incorporating many practical factors such as spectrum-sensing errors and a priori unknown statistical spectrum knowledge. First, with an approximation on channel evolution, OSA is formulated in a multi-armed bandit (MAB) framework. As a result, the optimal policy is specified by the well-known Gittins index rule, whereby the channel with the largest Gittins index is always selected. Then, closed-form formulas are derived for the Gittins indices with tunable approximation, and the design of a reinforcement learning algorithm is presented for calculating the Gittins indices, depending on whether the Markovian channel parameters are available a priori or not. Finally, the superiority of the scheme over other existing schemes, in terms of the quality and optimality of the policies, is demonstrated via extensive experiments.
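
As a rough illustration of the index-rule structure described above, the sketch below selects the channel with the largest index and updates its statistics from the sensing outcome; the simple Bayesian index used here is only a placeholder for the closed-form Gittins indices derived in the paper, and all names are illustrative:

```python
import numpy as np

class IndexRuleOSA:
    def __init__(self, n_channels):
        # Beta(1, 1) priors over each channel's idle probability
        self.idle = np.ones(n_channels)
        self.busy = np.ones(n_channels)

    def select_channel(self):
        # Index rule: sense the channel with the largest current index
        index = self.idle / (self.idle + self.busy)
        return int(np.argmax(index))

    def update(self, channel, observed_idle):
        # Learn channel statistics from the sensing outcome
        if observed_idle:
            self.idle[channel] += 1
        else:
            self.busy[channel] += 1
```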

Random Balance between Monte Carlo and Temporal Difference in off-policy Reinforcement Learning for Less Sample-Complexity (오프 폴리시 강화학습에서 몬테 칼로와 시간차 학습의 균형을 사용한 적은 샘플 복잡도)

  • Kim, Chayoung;Park, Seohee;Lee, Woosik
    • Journal of Internet Computing and Services
    • /
    • v.21 no.5
    • /
    • pp.1-7
    • /
    • 2020
  • Deep neural networks (DNN), used as approximation functions in reinforcement learning (RL), can in theory yield strong results. In empirical benchmark work, temporal-difference learning (TD) usually shows better results than Monte Carlo learning (MC). However, some previous works show that MC is better than TD when rewards are very sparse or delayed, and other recent research shows that when the information the agent observes from the environment is partial, as in complex control tasks, MC prediction is superior to TD-based methods. Most of these environments can be regarded as 5-step or 20-step Q-learning, in which experiments proceed without long roll-outs in order to alleviate performance degradation. In other words, in noisy environments, regardless of how the roll-outs are controlled, it is better to learn with MC, which is robust to noisy rewards, than with TD, or TD performs at best almost identically to MC. These studies break with the conventional view that TD is better than MC, and suggest that combining MC and TD can work better in practice than relying on either one alone. Therefore, in this study, building on these previous results, we attempt a random balance between TD and MC in off-policy RL, without the complicated reward-based formulas used in those studies. Comparing a DQN that uses the random mixture of MC and TD with the well-known DQN that uses only TD-based learning, we demonstrate through experiments in OpenAI Gym that the random mixture of TD and MC matches or improves upon well-performing TD-only learning.
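
The random MC/TD balance described above can be sketched as a blended learning target; the mixing rule and parameters below are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def mixed_target(rewards, q_next_max, gamma=0.99, rng=np.random):
    """Blend a Monte Carlo return with a one-step TD target.

    rewards:     list of rewards from time t until the end of the episode
    q_next_max:  max_a Q(s_{t+1}, a) from the (target) network
    """
    # Monte Carlo target: full discounted return observed from time t
    g_mc = sum((gamma ** k) * r for k, r in enumerate(rewards))
    # TD(0) target: immediate reward plus bootstrapped estimate
    g_td = rewards[0] + gamma * q_next_max
    # Random balance: a fresh mixing weight for every update
    beta = rng.uniform(0.0, 1.0)
    return beta * g_mc + (1.0 - beta) * g_td
```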

Robot locomotion via IRPO based Actor-Critic Learning Method (IRPO 기반 Actor-Critic 학습 기법을 이용한 로봇이동)

  • Kim, Jong-Ho;Kang, Dae-Sung;Park, Joo-Young
    • Proceedings of the KIEE Conference
    • /
    • 2005.07d
    • /
    • pp.2933-2935
    • /
    • 2005
  • The IRPO (Intensive Randomized Policy Optimizer) algorithm is a recently developed tool in the area of reinforcement learning, and it has been shown to be very successful in several application problems. Compared with general RL methods, IRPO differs in that its policy utilizes the entire history of agent-environment interaction: the policy is derived from the history directly, not through any kind of model of the environment. In this paper, we consider a robot-control problem using an IRPO-based actor-critic learning method. We also developed a MATLAB-based animation program, with which the effectiveness of the training algorithms was observed.
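
For context, a standard one-step actor-critic update looks as follows; this is a generic textbook sketch, not the IRPO-based method of the paper, and all names and step sizes are illustrative:

```python
import numpy as np

def actor_critic_step(theta, v, s, a, r, s_next, done,
                      alpha_actor=0.01, alpha_critic=0.1, gamma=0.99):
    """One tabular actor-critic update.

    theta: policy preferences, shape (S, A); v: state-value estimates, shape (S,).
    """
    # Critic: one-step TD error serves as the advantage estimate
    target = r + (0.0 if done else gamma * v[s_next])
    td_error = target - v[s]
    v[s] += alpha_critic * td_error

    # Actor: softmax policy gradient, reinforced by the TD error
    prefs = theta[s] - theta[s].max()
    pi = np.exp(prefs) / np.exp(prefs).sum()
    grad_log = -pi
    grad_log[a] += 1.0
    theta[s] += alpha_actor * td_error * grad_log
    return theta, v
```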

Solving POMDP problem using Self-organizing state RL (상태 조직화 강화학습을 사용한 POMDP 문제 해결)

  • 이승준;장병탁
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2001.05a
    • /
    • pp.73-77
    • /
    • 2001
  • In this paper, we propose a state-organizing reinforcement learning model that learns a stochastic action policy in a partially observable environment without prior model information. Conventional reinforcement learning requires an environment model in advance and full observation of the state, so the problem must be understood before learning. Moreover, while it works well for small problems, it cannot be applied directly to real-world problems in which the number of states is very large and observations are often only partial. To address these two limitations, we propose a reinforcement learning model that learns states and an action policy simultaneously from partial observations without prior model information, and we apply the proposed method to a partially observable maze-navigation problem.
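
As a point of comparison only, one generic way to act under partial observability is to treat a short window of recent observations as the state and run tabular Q-learning over it; the sketch below illustrates that baseline and is not the self-organizing state method proposed in the paper:

```python
from collections import defaultdict
import random

class HistoryQLearner:
    def __init__(self, n_actions, history_len=3,
                 alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(lambda: [0.0] * n_actions)
        self.n_actions = n_actions
        self.history_len = history_len
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.history = ()

    def observe(self, obs):
        # The "state" is the tuple of the last few partial observations
        self.history = (self.history + (obs,))[-self.history_len:]
        return self.history

    def act(self, state):
        # Epsilon-greedy action selection over the history-based state
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        values = self.q[state]
        return values.index(max(values))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning backup over history-based states
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])
```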

Challenges of diet planning for children using artificial intelligence

  • Changhun, Lee;Soohyeok, Kim;Jayun, Kim;Chiehyeon, Lim;Minyoung, Jung
    • Nutrition Research and Practice
    • /
    • v.16 no.6
    • /
    • pp.801-812
    • /
    • 2022
  • BACKGROUND/OBJECTIVES: Diet planning in childcare centers is difficult because of the required knowledge of nutrition and development as well as the high design complexity associated with large numbers of food items. Artificial intelligence (AI) is expected to provide diet-planning solutions via automatic and effective application of professional knowledge, addressing the complexity of optimal diet design. This study presents the results of the evaluation of the utility of AI-generated diets for children and provides related implications. MATERIALS/METHODS: We developed 2 AI solutions for children aged 3-5 yrs using a generative adversarial network (GAN) model and a reinforcement learning (RL) framework. After training these solutions to produce daily diet plans, experts evaluated the human- and AI-generated diets in 2 steps. RESULTS: In the evaluation of adequacy of nutrition, where experts were provided only with nutrient information and no food names, the proportion of strong positive responses to RL-generated diets was higher than that of the human- and GAN-generated diets (P < 0.001). In contrast, in terms of diet composition, the experts' responses to human-designed diets were more positive when experts were provided with food name information (i.e., composition information). CONCLUSIONS: To the best of our knowledge, this is the first study to demonstrate the development and evaluation of AI to support dietary planning for children. This study demonstrates the possibility of developing AI-assisted diet planning methods for children and highlights the importance of composition compliance in diet planning. Further integrative cooperation in the fields of nutrition, engineering, and medicine is needed to improve the suitability of our proposed AI solutions and benefit children's well-being by providing high-quality diet planning in terms of both compositional and nutritional criteria.

Grasping a Target Object in Clutter with an Anthropomorphic Robot Hand via RGB-D Vision Intelligence, Target Path Planning and Deep Reinforcement Learning (RGB-D 환경인식 시각 지능, 목표 사물 경로 탐색 및 심층 강화학습에 기반한 사람형 로봇손의 목표 사물 파지)

  • Ryu, Ga Hyeon;Oh, Ji-Heon;Jeong, Jin Gyun;Jung, Hwanseok;Lee, Jin Hyuk;Lopez, Patricio Rivera;Kim, Tae-Seong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.9
    • /
    • pp.363-370
    • /
    • 2022
  • Grasping a target object among clutter objects without collision requires machine intelligence. This machine intelligence includes environment recognition, target and obstacle recognition, collision-free path planning, and the object-grasping intelligence of robot hands. In this work, we implement such a system in simulation and hardware to grasp a target object without collision. We use an RGB-D image sensor to recognize the environment and objects. Various path-finding algorithms have been implemented and tested to find collision-free paths. Finally, for an anthropomorphic robot hand, object-grasping intelligence is learned through deep reinforcement learning. In our simulation environment, grasping a target out of five clutter objects showed an average success rate of 78.8% and a collision rate of 34% without path planning, whereas our system combined with path planning showed an average success rate of 94% and an average collision rate of 20%. In our hardware environment, grasping a target out of three clutter objects showed an average success rate of 30% and a collision rate of 97% without path planning, whereas our system combined with path planning showed an average success rate of 90% and an average collision rate of 23%. Our results show that grasping a target object in clutter is feasible with vision intelligence, path planning, and deep RL.