• Title/Summary/Keyword: Value-based reinforcement


L-CAA : An Architecture for Behavior-Based Reinforcement Learning (L-CAA : 행위 기반 강화학습 에이전트 구조)

  • Hwang, Jong-Geun; Kim, In-Cheol
    • Journal of Intelligence and Information Systems, v.14 no.3, pp.59-76, 2008
  • In this paper, we propose an agent architecture called L-CAA that is effective in real-time dynamic environments. L-CAA is an extension of CAA, a behavior-based agent architecture also developed by our research group, extended with reinforcement learning capability to improve adaptability to the changing environment. To obtain stable performance, however, behavior selection and execution in the L-CAA architecture do not rely entirely on learning; learning serves merely as a complementary means for behavior selection and execution. The behavior selection mechanism consists of two phases. In the first phase, candidate behaviors are extracted from the behavior library by checking the user-defined applicable conditions and the utility of each behavior. If multiple behaviors are extracted, a single behavior is selected for execution in the second phase with the help of reinforcement learning: the behavior with the highest expected reward is chosen by comparing the Q values of the individual behaviors, which are updated through reinforcement learning. L-CAA can monitor the maintainable conditions of the executing behavior and immediately stop that behavior when some condition fails due to a dynamic change in the environment. Additionally, L-CAA can suspend and later resume the current behavior whenever it encounters a behavior of higher utility. To analyze the effectiveness of the L-CAA architecture, we implement an L-CAA-enabled agent that plays autonomously in Unreal Tournament, a well-known dynamic virtual environment, and conduct several experiments with it.
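
The two-phase selection described in the abstract can be condensed into a short sketch. This is a minimal illustration under assumed names (Behavior, select_behavior, and update_q are ours, not from the paper):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Behavior:
    name: str
    utility: float
    applicable: Callable    # user-defined applicable condition on the state
    maintainable: Callable  # condition monitored while the behavior runs

q_values: dict = {}         # behavior name -> learned Q estimate

def select_behavior(library, state):
    # Phase 1: extract behaviors whose applicable conditions hold,
    # keeping only those with positive utility.
    candidates = [b for b in library if b.applicable(state) and b.utility > 0]
    if not candidates:
        return None
    # Phase 2: resolve the remaining choice with reinforcement learning --
    # pick the candidate with the highest expected reward (Q value).
    return max(candidates, key=lambda b: q_values.get(b.name, 0.0))

def update_q(name, reward, alpha=0.1):
    # Running-average update of a behavior's expected reward.
    q = q_values.get(name, 0.0)
    q_values[name] = q + alpha * (reward - q)
```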


Card Battle Game Agent Based on Reinforcement Learning with Play Level Control (플레이 수준 조절이 가능한 강화학습 기반 카드형 대전 게임 에이전트)

  • Yong Cheol Lee; Chill woo Lee
    • Smart Media Journal, v.13 no.2, pp.32-43, 2024
  • Game agents, the behavioral agents that play the game, are a crucial component of player satisfaction. However, it takes a lot of time and effort to create game agents for various game levels, environments, and players. In addition, when the game environment changes, such as when content is added or characters are updated, new game agents need to be developed, and the development difficulty gradually increases. It is also important to have a game agent that can be tailored to players of different levels: an agent that can play at various levels is more useful, and can satisfy more players, than a single high-level agent. In this paper, we propose a method for training game agents, and for controlling their level of play, that can be rapidly developed and fine-tuned for various game environments and changes. For reinforcement learning we apply IMPALA, a policy-based distributed reinforcement learning method, for flexible handling of various behavioral structures and fast learning. Once reinforcement learning is complete, actions are chosen by sampling with the softmax-temperature method. The results show that the agent's play level decreases as the temperature value increases, demonstrating that the play level can be easily controlled.
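
The softmax-temperature sampling step is simple to state in code. A minimal sketch (the scores below stand in for the trained IMPALA policy's outputs):

```python
import numpy as np

def sample_action(scores, temperature):
    """Softmax-temperature sampling: a higher temperature flattens the
    distribution, so the agent plays less greedily (a lower play level)."""
    logits = np.asarray(scores, dtype=float) / temperature
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return np.random.choice(len(probs), p=probs)

# The same trained outputs, sampled at two play levels.
scores = [2.0, 1.0, 0.1]
strong = sample_action(scores, temperature=0.1)  # near-greedy, strong play
weak = sample_action(scores, temperature=5.0)    # near-uniform, weak play
```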

Differentially Responsible Adaptive Critic Learning (DRACL) for the Self-Learning Control of Multiple-Input System (多入力 시스템의 자율학습제어를 위한 차등책임 적응비평학습)

  • Kim, Hyong-Suk
    • Journal of the Korean Institute of Telematics and Electronics S, v.36S no.2, pp.28-37, 1999
  • A Differentially Responsible Adaptive Critic Learning (DRACL) technique is proposed for learning control with multiple control inputs, as in robot systems, using reinforcement learning. Reinforcement learning is a self-learning technique that learns a control skill from critic information obtained only after a long series of control actions. Adaptive Critic Learning (ACL) is the representative reinforcement learning structure: it maximizes learning performance using two learning modules, the action module and the critic module, which exploit an external critic value obtained only rarely. A drawback of ACL is that its application is limited to single-input systems. In the proposed differentially responsible, action-dependent adaptive critic structure, the critic function is constructed as a function of the control input elements, and the responsibility of each individual control action element is computed from the partial derivative of the critic function with respect to that element. The learning structure was built with CMAC neural networks, and simulations were performed on the two-dimensional cart-pole system and a robot squatting problem; the simulation results are included.
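
The responsibility assignment can be sketched with finite differences standing in for the gradients of the CMAC critic (the function names and the toy critic are illustrative, not the paper's code):

```python
import numpy as np

def responsibilities(critic, u, eps=1e-4):
    """Assign each control input element a share of the critic signal in
    proportion to the partial derivative of the critic with respect to
    that element (central differences stand in for network gradients)."""
    u = np.asarray(u, dtype=float)
    grads = np.empty_like(u)
    for i in range(u.size):
        du = np.zeros_like(u)
        du[i] = eps
        grads[i] = (critic(u + du) - critic(u - du)) / (2 * eps)
    total = np.abs(grads).sum()
    return np.abs(grads) / total if total > 0 else np.full_like(u, 1.0 / u.size)

# Toy critic over two control inputs: the first input dominates the critic
# value, so it receives most of the responsibility.
critic = lambda u: 3.0 * u[0] ** 2 + 0.5 * u[1] ** 2
print(responsibilities(critic, [1.0, 1.0]))   # ~[0.857, 0.143]
```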


Analysis and study of Deep Reinforcement Learning based Resource Allocation for Renewable Powered 5G Ultra-Dense Networks

  • Hamza Ali Alshawabkeh
    • International Journal of Computer Science & Network Security, v.24 no.1, pp.226-234, 2024
  • The frequent handover problem and ping-pong effects in 5G (5th Generation) ultra-dense networking cannot be effectively resolved by conventional handover decision methods, which rely on handover thresholds and measurement reports. Millimetre-wave LANs, broadband wireless access techniques, and 5G/6G networks are examples of next-generation systems that demand greater security, lower latency, and dependable standards and communication capacity. Effective congestion management is considered one of the critical parts of 5G and 6G technology: with improved service quality, it enables an operator to run many network simulations over a single connection. To guarantee load balancing, prevent network-slice failure, and provide alternative slices in the event of congestion or slice failure, a sophisticated decision-making framework for handling arriving network data is required. Our goal is to balance the strain on BSs while optimizing the value of the information transferred from satellites to BSs. However, due to their irregular flight characteristics, some satellites frequently cannot establish a connection with Base Stations (BSs), which further complicates joint satellite-BS connection and channel allocation. SF redistribution techniques based on Deep Reinforcement Learning (DRL) have been devised, taking into account the randomness of the data received by the terminal. In this study, a hybrid deep learning algorithm is used to predict the best capacity improvements in the wireless instruments of 5G and 6G IoT networks. To control the level of congestion within a 5G/6G network, the suggested approach is applied to a training set. The suggested method produced encouraging results, with an accuracy of 0.933 and a miss rate of 0.067.

Optimized AntNet-Based Routing for Network Processors (네트워크 프로세서에 적합한 개선된 AntNet기반 라우팅 최적화기법)

  • Park Hyuntae; Bae Sung-il; Ahn Jin-Ho; Kang Sungho
    • Journal of the Institute of Electronics Engineers of Korea TC, v.42 no.5 s.335, pp.29-38, 2005
  • In this paper, a modified and optimized AntNet algorithm that can be implemented efficiently on a network processor is proposed. AntNet, which mimics the activities of social insects, is an adaptive agent-based routing algorithm, but it requires complex arithmetic for its calculations. Since network processors have only simple arithmetic units intended for packet processing, it is very difficult to implement the original AntNet algorithm on them. The proposed AntNet algorithm solves this problem by reducing the arithmetic execution cycles needed to calculate the reinforcement value, without loss of adaptive performance. Simulation results show that the proposed algorithm is more suitable and efficient for commercial network processors than the original AntNet algorithm.
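
For reference, the core AntNet routing-table update that consumes the reinforcement value r looks like the sketch below; the paper's contribution is computing r itself with cheaper arithmetic, which is not reproduced here:

```python
def update_routing_row(probs, chosen, r):
    """Standard AntNet probability update for one destination row:
    reinforce the neighbor the backward ant arrived from and decay the
    rest, keeping the row normalized. In the original algorithm r is a
    nonlinear function of observed trip-time statistics; the paper
    replaces that computation with cheaper arithmetic suited to the
    simple ALUs of network processors."""
    for n in probs:
        if n == chosen:
            probs[n] += r * (1.0 - probs[n])
        else:
            probs[n] -= r * probs[n]
    return probs

row = {"A": 0.5, "B": 0.3, "C": 0.2}
print(update_routing_row(row, "B", r=0.2))  # B reinforced, A and C decayed
```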

A Study on Cathodic Protection Rectifier Control of City Gas Pipes using Deep Learning (딥러닝을 활용한 도시가스배관의 전기방식(Cathodic Protection) 정류기 제어에 관한 연구)

  • Hyung-Min Lee; Gun-Tek Lim; Guy-Sun Cho
    • Journal of the Korean Institute of Gas, v.27 no.2, pp.49-56, 2023
  • As AI (Artificial Intelligence)-related technologies advance with the 4th industrial revolution, cases of applying AI in various fields are increasing. The main reasons are that there are practical limits to directly processing and analyzing the exponentially growing volume of data as information and communication technology develops, and that applying new technologies can reduce the risk of human error. In this study, an AI model was trained on the data received from remote potential measurement terminals (T/B, Test Box) paired with the output of the remote rectifier at the time of each measurement. The training data were augmented through regression analysis of the initially collected data, and the learning model applied the value-based Q-Learning model among deep reinforcement learning (DRL) algorithms. The trained AI was then deployed in an actual city gas supply area to verify that it responds appropriately to the received remote T/B data; through this, we aim to verify that AI can serve as a suitable means of cathodic protection management in the future.
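
A minimal sketch of the value-based Q-Learning loop implied by the abstract, with an assumed discretization (binned T/B potentials as states, rectifier output adjustments as actions; the paper's actual encoding is not given here):

```python
import numpy as np

# Hypothetical state/action spaces: 10 potential bins x {lower, hold, raise}.
N_STATES, N_ACTIONS = 10, 3
Q = np.zeros((N_STATES, N_ACTIONS))

def q_learning_step(s, a, reward, s_next, alpha=0.1, gamma=0.9):
    """One value-based Q-Learning update from a measured transition."""
    td_target = reward + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

def act(s, epsilon=0.1):
    """Epsilon-greedy choice of rectifier adjustment for state s."""
    if np.random.rand() < epsilon:
        return np.random.randint(N_ACTIONS)
    return int(Q[s].argmax())
```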

Static Behavior of Reinforced Railway Roadbed by Geotextile Bag (지오텍스타일 백으로 보강된 철도노반의 정적거동 분석)

  • Lee, Dong-Hyun; Shin, Eun-Chul
    • Journal of the Korean Society for Railway, v.9 no.2 s.33, pp.180-186, 2006
  • In this study, a large-scale laboratory model test and 2-D and 3-D numerical analyses were conducted to verify the reinforcement effect of geotextile bags on a railway roadbed. Static loading simulating a train load was applied to both a geotextile bag-reinforced railway roadbed and an unreinforced railway roadbed. The FEM program Pentagon was used for the numerical analysis. Based on the results of the laboratory test and the 2-D and 3-D numerical analyses, the load distribution and settlement reduction effects were found to depend on the geotextile characteristics, the tensile strength of the geotextile, and the interface friction angle between geotextile bags. In general, the 2-D and 3-D numerical analyses gave lower values than the laboratory test, and the 3-D analyses gave lower values than the 2-D analyses because of the stress transfer effect.

FE analysis of RC structures using DSC model with yield surfaces for tension and compression

  • Akhaveissy, A.H.; Desai, C.S.; Mostofinejad, D.; Vafai, A.
    • Computers and Concrete, v.11 no.2, pp.123-148, 2013
  • A nonlinear finite element method, with eight-noded isoparametric quadrilateral elements for concrete and two-noded elements for the reinforcement, is used to predict the behavior of reinforced concrete structures. The disturbed state concept (DSC), including the hierarchical single surface (HISS) plasticity model with an associated flow rule and modifications, is used to characterize the constitutive behavior of concrete in both compression and tension; the combined model is named DSC/HISS-CT. The HISS model captures the plastic behavior of concrete, and the DSC handles microcracking, fracture, and softening. The DSC expresses the behavior of a material element as a mixture of two interacting components and can include both softening and stiffening, whereas the classical damage approach assumes that cracks (damage) induced in a material act as voids with no strength. The DSC/HISS-CT is thus a unified model that expresses the observed behavior in terms of the interacting behavior of its components; its mechanism is quite different from that of the damage model, which treats physical cracks as having no strength and no interaction with the undamaged part. This is the first time the DSC/HISS-CT model, which can account for both compression and tension yields, has been applied to concrete materials. The DSC model also allows the characterization of non-associative behavior through the use of disturbance. Elastic, perfectly plastic behavior is assumed for the steel reinforcement. The DSC model is validated at two levels: (1) the specimen level, where predictions are obtained by integrating the incremental constitutive relations, and (2) practical boundary value problems, where the FE procedure with the DSC/HISS-CT model is used to obtain predictions. Comparisons between the DSC/HISS-CT predictions, test data, and ANSYS software predictions show that the model provides highly satisfactory predictions. The model allows the computation of microcracking during deformation leading to fracture and failure; in the model, the critical disturbance, Dc, identifies fracture and failure.
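
The mixture idea behind the DSC can be stated compactly in its standard form (notation from the general DSC literature; the paper's specific parameters may differ):

```latex
% Observed response as a disturbance-weighted mixture of the relatively
% intact (RI) and fully adjusted (FA) components:
\sigma^{a} = (1 - D)\,\sigma^{i} + D\,\sigma^{c},
\qquad
D = D_{u}\left(1 - e^{-A\,\xi_{D}^{\,Z}}\right)
```

Here the disturbance D grows with the accumulated deviatoric plastic strain trajectory ξ_D, and reaching the critical disturbance Dc marks fracture and failure.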

Study for Feature Selection Based on Multi-Agent Reinforcement Learning (다중 에이전트 강화학습 기반 특징 선택에 대한 연구)

  • Kim, Miin-Woo; Bae, Jin-Hee; Wang, Bo-Hyun; Lim, Joon-Shik
    • Journal of Digital Convergence, v.19 no.12, pp.347-352, 2021
  • In this paper, we propose a method for finding feature subsets that are effective for classification in an input dataset by using multi-agent reinforcement learning. In machine learning, finding features suitable for classification is crucial: a dataset may have numerous features, and while some are effective for classification or prediction, others may have little or even a negative effect on the results. Feature selection for increasing classification or prediction accuracy is therefore a critical problem, and we propose a reinforcement learning-based feature selection method to solve it. Each feature is assigned one agent, which determines whether that feature is selected. After rewards are obtained for the subset of features selected by the agents and for the subset that was not selected, the Q-value of each agent is updated by comparing the two rewards; this comparison tells the agents whether their actions were right. These steps are repeated for a set number of episodes, after which the final features are selected. Applying this method to the Wisconsin Breast Cancer, Spambase, Musk, and Colon Cancer datasets yielded accuracy improvements of 0.0385, 0.0904, 0.1252, and 0.2055, respectively, with final classification accuracies of 0.9789, 0.9311, 0.9691, and 0.9474. This demonstrates that the proposed method can properly select features that are effective for classification and increase classification accuracy.
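
A compact sketch of the scheme as we read it: one agent per feature, each holding Q values for drop/select, rewarded by the cross-validated accuracy of whichever subset its action contributed to (the classifier and the reward pairing are assumptions, not the authors' code):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
n = X.shape[1]
Q = np.zeros((n, 2))                      # per-feature agents: 0=drop, 1=select

def accuracy(mask):
    """Cross-validated accuracy of a classifier on a feature subset."""
    if not mask.any():
        return 0.0
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

alpha, epsilon = 0.1, 0.2
for episode in range(20):
    # Each agent acts epsilon-greedily on its own feature.
    greedy = Q.argmax(axis=1)
    explore = np.random.rand(n) < epsilon
    actions = np.where(explore, np.random.randint(2, size=n), greedy)
    selected = actions.astype(bool)
    # Rewards for the selected subset and its complement.
    r_sel, r_unsel = accuracy(selected), accuracy(~selected)
    for i in range(n):
        r = r_sel if actions[i] == 1 else r_unsel
        Q[i, actions[i]] += alpha * (r - Q[i, actions[i]])

best = Q.argmax(axis=1).astype(bool)
print(f"{best.sum()} features selected, accuracy {accuracy(best):.4f}")
```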

Development of Convolutional Network-based Denoising Technique using Deep Reinforcement Learning in Computed Tomography (심층강화학습을 이용한 Convolutional Network 기반 전산화단층영상 잡음 저감 기술 개발)

  • Cho, Jenonghyo; Yim, Dobin; Nam, Kibok; Lee, Dahye; Lee, Seungwan
    • Journal of the Korean Society of Radiology, v.14 no.7, pp.991-1001, 2020
  • Supervised deep learning technologies for improving the image quality of computed tomography (CT) need a large amount of training data, and when input images have characteristics different from the training images, they cause structural distortion in the output images. In this study, an imaging model based on deep reinforcement learning (DRL) was developed to overcome these drawbacks and reduce noise in CT images. The DRL model consisted of shared, value, and policy networks, and the networks included convolutional layers, rectified linear units (ReLU), dilation factors, and gated recurrent units (GRU) in order to extract noise features from CT images and improve the performance of the model. The quality of the CT images obtained with the DRL model was compared to that obtained with a supervised deep learning model. The results showed that image accuracy was higher, and image noise smaller, for the DRL model than for the supervised deep learning model. The DRL model also reduced the noise of CT images whose characteristics differed from those of the training images. Therefore, the DRL model can reduce image noise while maintaining the structural information of CT images.
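
A rough PyTorch sketch of the shared/policy/value layout named in the abstract; the channel widths, dilation rates, and per-pixel action count are assumptions, and the GRU component is omitted for brevity:

```python
import torch
import torch.nn as nn

class DenoisingActorCritic(nn.Module):
    """Shared convolutional trunk with separate policy and value heads,
    as described in the abstract (configuration details assumed)."""
    def __init__(self, n_actions=9):
        super().__init__()
        self.shared = nn.Sequential(                 # shared feature trunk
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=4, dilation=4), nn.ReLU(),
        )
        self.policy = nn.Sequential(                 # per-pixel action logits
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_actions, 3, padding=1),
        )
        self.value = nn.Sequential(                  # per-pixel state value
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, x):
        h = self.shared(x)
        return self.policy(h), self.value(h)

net = DenoisingActorCritic()
logits, value = net(torch.randn(1, 1, 64, 64))   # CT patch in, maps out
```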