• Title/Summary/Keyword: Value-based reinforcement

Search results: 165

A Study of Reinforcement Learning-based Cyber Attack Prediction using Network Attack Simulator (NASim) (네트워크 공격 시뮬레이터를 이용한 강화학습 기반 사이버 공격 예측 연구)

  • Bum-Sok Kim; Jung-Hyun Kim; Min-Suk Kim
    • Journal of the Semiconductor & Display Technology / v.22 no.3 / pp.112-118 / 2023
  • As technology advances, preparedness against cyber-attacks becomes an increasingly critical problem. It is therefore imperative to consider various circumstances and to prepare strategic technologies against cyber-attacks. This paper proposes a method for solving network security problems by applying reinforcement learning to cyber-security. Traditional static cyber-security methods have difficulty responding effectively to modern dynamic attack patterns. To address this, we implement cyber-attack scenarios such as 'Tiny Alpha' and 'Small Alpha' and evaluate the performance of various reinforcement learning methods using Network Attack Simulator, a cyber-attack simulation environment based on the Gymnasium (formerly OpenAI Gym) interface. We experimented with different RL algorithms: value-based methods (Q-Learning, Deep Q-Network, and Double Deep Q-Network) and a policy-based method (Actor-Critic). We observed that the value-based methods with discrete action spaces consistently outperformed the policy-based method with continuous action spaces, with a performance difference ranging from 20.9% to 53.2%. This result not only suggests opportunities for enhancing cyber-security strategies, but also indicates potential applications in cyber-security education and system validation across domains such as the military, government, and corporate sectors. (An illustrative sketch follows.)

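As a rough illustration of the value-based family this abstract compares (Q-Learning, DQN, Double DQN), here is a minimal tabular Q-learning sketch against a gymnasium-style environment of the kind NASim exposes. The hyperparameters and the assumption of hashable observations are mine, not the paper's.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning; assumes a discrete action space and hashable observations."""
    Q = defaultdict(lambda: [0.0] * env.action_space.n)   # Q[state][action]
    for _ in range(episodes):
        state, _ = env.reset()
        done = False
        while not done:
            # epsilon-greedy exploration over the discrete action space
            if random.random() < epsilon:
                action = env.action_space.sample()
            else:
                action = max(range(env.action_space.n), key=lambda a: Q[state][a])
            next_state, reward, terminated, truncated, _ = env.step(action)
            done = terminated or truncated
            # TD(0) update toward the greedy bootstrap target
            target = reward + (0.0 if done else gamma * max(Q[next_state]))
            Q[state][action] += alpha * (target - Q[state][action])
            state = next_state
    return Q
```

Deep Q-Network replaces the table with a neural network over the observation; Double DQN additionally decouples action selection from action evaluation in the bootstrap target.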

A Motivation-Based Action-Selection-Mechanism Involving Reinforcement Learning

  • Lee, Sang-Hoon; Suh, Il-Hong; Kwon, Woo-Young
    • International Journal of Control, Automation, and Systems / v.6 no.6 / pp.904-914 / 2008
  • An action-selection mechanism (ASM) has been proposed that works as a fully connected finite state machine to handle sequential behaviors and to allow any state in the task program to migrate to any other state in the task; a primitive node associated with a state and its transition conditions can be easily inserted or deleted. Such a primitive node can also be learned by a shortest-path-finding-based reinforcement learning technique. Specifically, we define a behavioral motivation, which has a state-dependent value, as a primitive node for action selection, and then sequentially construct a network of behavioral motivations in such a way that the value of a parent node is allowed to flow into a child node by a releasing mechanism. A vertical path in the network represents a behavioral sequence. Such a tree for our proposed ASM can be newly generated and/or updated whenever a new behavior sequence is learned. To show the validity of the proposed ASM, experimental results of a mobile robot performing a pushing-a-box-into-a-goal (PBIG) task are illustrated. (An illustrative sketch follows.)
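
A toy rendering of the behavioral-motivation network described above: each node carries a state-dependent value, and a parent's value flows into a child only when the child's releasing condition holds, so a vertical path encodes a behavior sequence. The class and function names below are invented for illustration; the paper's actual ASM is a learned finite state machine.

```python
class MotivationNode:
    def __init__(self, name, value_fn, releaser=lambda s: True):
        self.name, self.value_fn, self.releaser = name, value_fn, releaser
        self.children = []

    def activation(self, state, inflow=0.0):
        v = self.value_fn(state) + inflow          # own motivation + released parent value
        if not self.children:                      # a leaf is an executable behavior
            return [(self.name, v)]
        acts = []
        for child in self.children:
            # the releasing mechanism gates how much value flows downward
            released = v if child.releaser(state) else 0.0
            acts.extend(child.activation(state, released))
        return acts

def select_behavior(root, state):
    # choose the leaf behavior with the highest accumulated activation
    return max(root.activation(state), key=lambda pair: pair[1])[0]
```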

Prediction Technique of Energy Consumption based on Reinforcement Learning in Microgrids (마이크로그리드에서 강화학습 기반 에너지 사용량 예측 기법)

  • Sun, Young-Ghyu; Lee, Jiyoung; Kim, Soo-Hyun; Kim, Soohwan; Lee, Heung-Jae; Kim, Jin-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.3 / pp.175-181 / 2021
  • This paper analyzes artificial intelligence-based approaches to short-term energy consumption prediction. We employ reinforcement learning algorithms to overcome the limitations of the supervised learning algorithms usually applied to short-term energy-consumption prediction. Supervised learning-based approaches have high complexity because they require contextual information as well as energy consumption data for sufficient performance. We propose a multi-agent deep reinforcement learning algorithm that predicts energy consumption from consumption data alone, reducing the complexity of both the data and the learning models. The proposed scheme is simulated using public energy consumption data, and its performance is confirmed: it predicts values close to the actual ones except for outlier data. (An illustrative sketch follows.)
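
One hedged way to read the core idea is to cast one-step-ahead prediction as reinforcement learning in which the action is a discretized prediction and the reward is the negative prediction error. The bin count, the coarse state encoding, and the single-agent simplification below are my assumptions; the paper uses a multi-agent design.

```python
import numpy as np

def train_predictor(series, n_bins=20, alpha=0.1, eps=0.1):
    """series: 1-D array of consumption readings; returns a Q-table and prediction bins."""
    bins = np.linspace(series.min(), series.max(), n_bins)   # candidate predictions
    Q = {}
    state_of = lambda t: int(np.digitize(series[t], bins))   # coarse state: current bin
    for t in range(len(series) - 1):
        s = state_of(t)
        q = Q.setdefault(s, np.zeros(n_bins))
        a = np.random.randint(n_bins) if np.random.rand() < eps else int(q.argmax())
        reward = -abs(bins[a] - series[t + 1])               # negative prediction error
        q[a] += alpha * (reward - q[a])                      # bandit-style value update
        # a multi-agent variant would run one such learner per meter/agent
    return Q, bins
```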

Methodology To Prevent Local Optima And Improve Optimization Performance For Time-Cost Optimization Of Reinforcement-Learning Based Construction Schedule Simulation

  • Jeseop Rhie; Minseo Jang; Do Hyoung Shin; Hyungseo Han; Seungwoo Lee
    • International Conference on Construction Engineering and Project Management / 2024.07a / pp.769-774 / 2024
  • The availability of PMTs (Project Management Tools) in the market has been increasing rapidly in recent years, and significant advancements have been made in what project managers can use for planning, monitoring, and control. Recently, a growing number of studies have applied reinforcement-learning-based construction schedule simulation to construction project planning and management. When reinforcement learning is applied, the agent observes the current state and learns to select the action that maximizes the reward among the selectable actions. However, if the globally optimal action is not selected during simulation, a locally optimal resource may keep receiving positive rewards, so the search may fail to reach the global optimum. In addition, optimization can take a long time because numerous iterations are required to reach the global optimum. Therefore, this study presents a method that improves optimization performance by increasing the probability that a resource with high productivity and low unit cost is selected, preventing local optima, and reducing the number of iterations required to reach the global optimum. In the performance evaluation, we demonstrate that this method approximates the optimal value more closely with fewer iterations. (An illustrative sketch follows.)
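
The key mechanism, as I read the abstract, is to bias action sampling toward resources with high productivity and low unit cost so the agent escapes locally optimal resources in fewer iterations. A minimal softmax-preference sketch follows; the score function, resource schema, and temperature are placeholders, not the authors' exact formulation.

```python
import math, random

def pick_resource(resources, temperature=1.0):
    """resources: dicts with 'productivity' and 'unit_cost' keys (illustrative schema)."""
    scores = [r["productivity"] / r["unit_cost"] for r in resources]
    weights = [math.exp(s / temperature) for s in scores]   # softmax preference
    return random.choices(resources, weights=weights, k=1)[0]

crews = [{"productivity": 8.0, "unit_cost": 3.0}, {"productivity": 5.0, "unit_cost": 1.0}]
print(pick_resource(crews))  # the higher productivity-per-cost crew is sampled more often
```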

Barycentric Approximator for Reinforcement Learning Control

  • Whang Cho
    • International Journal of Precision Engineering and Manufacturing / v.3 no.1 / pp.33-42 / 2002
  • Recently, various experiments applying reinforcement learning to the self-learning intelligent control of continuous dynamic systems have been reported in the machine learning research community. The reports show mixed results of some successes and some failures, and indicate that the success of reinforcement learning in the intelligent control of continuous systems depends on the ability to combine a proper function approximation method with temporal difference methods such as Q-learning and value iteration. One of the difficulties in using a function approximator with a temporal difference method is the absence of a convergence guarantee for the algorithm. This paper provides a proof of convergence for a particular function approximation method based on the "barycentric interpolator", which is known to be computationally more efficient than multilinear interpolation. (An illustrative sketch follows.)
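
For orientation, a barycentric interpolator in 2D evaluates a query point's value as a convex combination of the values stored at the vertices of the enclosing simplex. The sketch below shows the general technique, not the paper's code; the fact that the weights are non-negative and sum to one inside the triangle is the usual intuition for why value iteration through such an averaging approximator remains a contraction.

```python
def barycentric_weights(p, a, b, c):
    """Solve p = w0*a + w1*b + w2*c with w0 + w1 + w2 = 1 (2D points as (x, y))."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    w0 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    w1 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return w0, w1, 1.0 - w0 - w1

def interpolate_value(p, vertices, values):
    """vertices: three (x, y) grid points; values: the state values stored at them."""
    w = barycentric_weights(p, *vertices)
    return sum(wi * vi for wi, vi in zip(w, values))
```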

Cloud Task Scheduling Based on Proximal Policy Optimization Algorithm for Lowering Energy Consumption of Data Center

  • Yang, Yongquan; He, Cuihua; Yin, Bo; Wei, Zhiqiang; Hong, Bowei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.6 / pp.1877-1891 / 2022
  • As a part of cloud computing technology, cloud task scheduling algorithms have an important influence on data centers. In our earlier work, we proposed DeepEnergyJS, designed on the basis of the original policy gradient reinforcement learning algorithm, and verified its effectiveness through simulation experiments. In this study, we use the Proximal Policy Optimization (PPO) algorithm to update DeepEnergyJS to DeepEnergyJSV2.0. First, we verify the convergence of the PPO algorithm on the Alibaba Cluster Data V2018 dataset. Then we compare it with the original reinforcement learning algorithm in terms of convergence rate, converged value, and stability. The results indicate that PPO performs better on the training and test data sets than the original reinforcement learning algorithm, as well as general heuristic algorithms such as First Fit, Random, and Tetris. DeepEnergyJSV2.0 achieves about 7.814% better energy efficiency than DeepEnergyJS. (An illustrative sketch follows.)
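
For reference, the heart of the PPO upgrade over a vanilla policy gradient is the clipped surrogate objective, sketched below in PyTorch. The clip range of 0.2 is a common default, assumed rather than taken from the paper.

```python
import torch

def ppo_policy_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    """Clipped surrogate loss over a batch of (log-prob, advantage) samples."""
    ratio = torch.exp(new_log_probs - old_log_probs)        # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    # the pessimistic minimum keeps each update close to the old policy
    return -torch.min(unclipped, clipped).mean()
```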

Solving Continuous Action/State Problem in Q-Learning Using Extended Rule Based Fuzzy Inference System

  • Kim, Min-Soeng; Lee, Ju-Jang
    • Transactions on Control, Automation and Systems Engineering / v.3 no.3 / pp.170-175 / 2001
  • Q-learning is a kind of reinforcement learning in which the agent solves the given task based on rewards received from the environment. Most research in the field of Q-learning has focused on discrete domains, although the environment with which the agent must interact is generally continuous. Thus we need methods that make Q-learning applicable to continuous problem domains. In this paper, an extended fuzzy rule is proposed that can incorporate Q-learning. The interpolation technique widely used in memory-based learning is adopted to represent the appropriate Q-value for the current state-action pair in each extended fuzzy rule. The resulting structure, based on a fuzzy inference system, is capable of handling continuous states of the environment. The effectiveness of the proposed structure is shown through simulation on the cart-pole system. (An illustrative sketch follows.)

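A hedged sketch of the general technique: each fuzzy rule holds per-action q-values, the global Q-value is the firing-strength-weighted average across rules, and TD errors are credited back to rules in proportion to how strongly they fired. Gaussian memberships and this particular update are common choices, assumed here rather than drawn from the paper.

```python
import numpy as np

class FuzzyQ:
    def __init__(self, centers, sigma, n_actions):
        self.centers = np.asarray(centers, dtype=float)   # shape (n_rules, state_dim)
        self.sigma = sigma
        self.q = np.zeros((len(self.centers), n_actions))  # per-rule q-values

    def firing(self, s):
        s = np.asarray(s, dtype=float)
        mu = np.exp(-((self.centers - s) ** 2).sum(axis=1) / (2 * self.sigma ** 2))
        return mu / mu.sum()                              # normalized firing strengths

    def value(self, s):
        return self.firing(s) @ self.q                    # Q(s, .), one entry per action

    def update(self, s, a, td_error, alpha=0.1):
        # credit each rule in proportion to how strongly it fired
        self.q[:, a] += alpha * td_error * self.firing(s)
```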

Neural-Fuzzy Controller Based on Reinforcement Learning (강화 학습에 기반한 뉴럴-퍼지 제어기)

  • 박영철; 김대수; 심귀보
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2000.05a / pp.245-248 / 2000
  • In this paper we improve the performance of an autonomous mobile robot by introducing the concept of reinforcement learning. The system is divided into two parts: a neural-fuzzy controller and a dynamic recurrent neural network. The neural-fuzzy controller determines the robot's next action, and it is tuned toward the optimal action by an internal reinforcement signal produced by the dynamic recurrent neural network. The dynamic recurrent neural network, in turn, evaluates the actions of the neural-fuzzy controller using the external reinforcement signal from the environment, and its weights, which determine the internal reinforcement signal, are evolved by genetic algorithms. The proposed architecture is applied in computer simulations to the control of an autonomous mobile robot. (An illustrative sketch follows.)

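A rough sketch of the evolutionary component: the weights of the evaluation network that emits the internal reinforcement signal are evolved by a genetic algorithm. The population size, mutation scale, and elitist scheme below are placeholders; in the paper, fitness would be derived from the external reinforcement signal.

```python
import numpy as np

def evolve_weights(fitness_fn, n_weights, pop=30, gens=50, sigma=0.1, elite=5):
    """Elitist GA over flat weight vectors; fitness_fn maps a vector to a score."""
    population = np.random.randn(pop, n_weights)
    for _ in range(gens):
        scores = np.array([fitness_fn(w) for w in population])
        parents = population[np.argsort(scores)[-elite:]]       # keep the best
        children = parents[np.random.randint(elite, size=pop - elite)]
        children = children + sigma * np.random.randn(pop - elite, n_weights)
        population = np.vstack([parents, children])             # elitism + mutation
    return population[np.argmax([fitness_fn(w) for w in population])]
```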

Bending and Shear Capacity of Reinforced Concrete Protective Wall (휨과 전단을 고려한 철근콘크리트 방호벽 성능에 관한 연구)

  • Young Beom Kwon; Jong Yil Park
    • Journal of the Korean Society of Safety / v.38 no.2 / pp.44-51 / 2023
  • With the recent increase in gas energy use, risk management for explosion accidents has been emphasized. Protective walls can be used to reduce damage from explosions. The KOSHA GUIDE D-65-2018 suggests the minimum thickness and height of protective walls, the minimum reinforcement diameter, and the maximum spacing of reinforcement for structural safety, but no supporting evidence has been presented. In this study, the blast-load-carrying capacity of the protective wall was analyzed with pressure-impulse diagrams while varying the yield strength of the reinforcement, the concrete compressive strength, the reinforcement ratio, and the height and thickness of the wall, to check the adequacy of the KOSHA GUIDE. The results show that failure may occur even with designs based on the criteria presented in the KOSHA GUIDE. To achieve structural safety of protective walls, additional criteria for the minimum reinforcement yield strength and the maximum wall height are suggested for inclusion in the KOSHA GUIDE. Moreover, the existing values for the minimum reinforcement ratio and wall thickness should be increased. (An illustrative sketch follows.)
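
As a loose illustration of the analysis behind a pressure-impulse (P-I) diagram, the sketch below integrates an equivalent elastic SDOF model of a wall under a triangular blast pulse and flags failure when the peak displacement exceeds an allowable value. The mass, stiffness, and displacement limit are placeholder numbers, not the paper's calibrated wall properties.

```python
def sdof_peak_response(peak_pressure, duration, m=500.0, k=2.0e6,
                       area=1.0, dt=1e-5, t_end=0.5):
    """Peak displacement of an elastic SDOF wall model under a triangular pulse."""
    u, v, t, u_max = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        p = peak_pressure * area * max(0.0, 1.0 - t / duration)  # triangular pulse
        a = (p - k * u) / m                                      # elastic restoring force
        v += a * dt                                              # semi-implicit Euler step
        u += v * dt
        u_max = max(u_max, abs(u))
        t += dt
    return u_max

def fails(peak_pressure, duration, u_allow=0.02):
    # a (P, I) pair lies above the P-I failure curve if the response exceeds the limit
    return sdof_peak_response(peak_pressure, duration) > u_allow
```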

A Comparative Study on the Impermeability-reinforcement Performance of Old Reservoir from Injection and Deep Mixing Method through Laboratory Model Test (실내모형시험을 통한 지반혼합 및 주입공법의 노후저수지 차수 보강성능 비교 연구)

  • Song, Sang-Huwon
    • Journal of the Korean Institute of Rural Architecture / v.24 no.2 / pp.45-52 / 2022
  • Of the 17,106 domestic reservoirs (as of December 2020), 14,611 are more than 50 years old, and the number of such aging reservoirs will gradually increase over time. The injection grouting method is the one most commonly applied to reinforce aging reservoirs; however, its uniformity and reinforced area are difficult to control precisely. A laboratory model test was conducted to evaluate the applicability of the deep mixing method, which compensates for these shortcomings, as a reservoir reinforcement method. Calculating the hydraulic conductivity for each method from the model test results, the injection grouting method showed a hydraulic conductivity about 7.5 times higher than that of the deep mixing method. Measurements of the water-level change in the laboratory model test showed that both the injection method and the deep mixing method reduced the water-level change compared with the non-reinforced case. In addition, at 40 hours the deep mixing method showed a water-level change of only about 15% of that of the injection method, indicating that its seepage-reduction effect was superior.