• Title/Summary/Keyword: reinforcement algorithms

Pacman Game Reinforcement Learning Using Artificial Neural-network and Genetic Algorithm

  • Park, Jin-Soo;Lee, Ho-Jeong;Hwang, Doo-Yeon;Cho, Soosun
    • IEMEK Journal of Embedded Systems and Applications, v.15 no.5, pp.261-268, 2020
  • Genetic algorithms find optimal solutions by mimicking the evolution of natural organisms. In this study, a genetic algorithm was used to train a Pac-Man agent through reinforcement learning, and a simulator was implemented to observe the evolutionary process. The purpose of the paper is to train the simulator's Pac-Man AI, using a genetic algorithm combined with an artificial neural network as the method. In particular, by building a small, low-power artificial neural network and evolving it with the genetic algorithm, the authors aimed to make the approach implementable on low-power embedded systems.
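
As a rough illustration of the setup this abstract describes, the sketch below evolves the weights of a tiny feedforward network with a genetic algorithm. It is a minimal sketch, not the paper's code: the network sizes, the placeholder fitness function, and all GA settings are assumptions.

```python
# Neuroevolution sketch: a GA evolves flattened weight vectors of a small MLP.
import numpy as np

IN, HID, OUT = 8, 6, 4          # assumed: 8 sensor inputs, 4 move directions
GENOME = IN * HID + HID * OUT   # one genome = both weight matrices, flattened

def act(genome, obs):
    """Forward pass of the small network; returns a move index."""
    w1 = genome[:IN * HID].reshape(IN, HID)
    w2 = genome[IN * HID:].reshape(HID, OUT)
    return int(np.argmax(np.tanh(obs @ w1) @ w2))

def fitness(genome, episodes=3):
    """Placeholder only: a real simulator would return the Pac-Man score."""
    rng = np.random.default_rng(0)
    return sum(act(genome, rng.normal(size=IN)) for _ in range(episodes))

def evolve(pop_size=50, generations=100, mut_std=0.1, seed=42):
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(pop_size, GENOME))
    for _ in range(generations):
        scores = np.array([fitness(g) for g in pop])
        elite = pop[np.argsort(scores)[-pop_size // 2:]]          # selection
        mates = elite[rng.integers(len(elite), size=(pop_size, 2))]
        cut = rng.integers(GENOME, size=pop_size)[:, None]
        mask = np.arange(GENOME)[None, :] < cut                   # one-point crossover
        pop = np.where(mask, mates[:, 0], mates[:, 1])
        pop += rng.normal(scale=mut_std, size=pop.shape)          # mutation
    return pop[np.argmax([fitness(g) for g in pop])]
```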

Research Trends on Deep Reinforcement Learning

  • Jang, S.Y.;Yoon, H.J.;Park, N.S.;Yun, J.K.;Son, Y.S.
    • Electronics and Telecommunications Trends, v.34 no.4, pp.1-14, 2019
  • Recent trends in deep reinforcement learning (DRL) show considerable improvements to DRL algorithms in terms of performance, learning stability, and computational efficiency. DRL has also vastly expanded the range of scenarios it covers (e.g., partial observability; cooperation, competition, coexistence, and communication among multiple agents; multi-task learning; decentralized intelligence), which has cultivated multi-agent reinforcement learning research. DRL applications are likewise expanding beyond robotics, natural language processing, and computer vision into a wide array of fields such as finance, healthcare, chemistry, and even art. In this report, we briefly summarize various DRL techniques and research directions.

Visual Object Manipulation Based on Exploration Guided by Demonstration

  • Kim, Doo-Jun;Jo, HyunJun;Song, Jae-Bok
    • The Journal of Korea Robotics Society, v.17 no.1, pp.40-47, 2022
  • A reward function suited to the task is required to manipulate objects through reinforcement learning, but such a function is difficult to design when ample information about the objects cannot be obtained. In this study, a demonstration-based object manipulation algorithm called stochastic exploration guided by demonstration (SEGD) is proposed to sidestep this reward-design problem. SEGD is a reinforcement learning algorithm that adds a sparse reward explorer (SRE) and an interpolated policy using demonstration (IPD) to soft actor-critic (SAC). SRE supports the training of SAC's critic by collecting prior data, and IPD limits the exploration space by keeping SEGD's actions close to the expert's. Through these two components, SEGD can learn from only the task's sparse reward, without a hand-designed reward function. To verify SEGD, experiments were conducted on three tasks, in which it achieved success rates above 96.5%.
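
The interpolation idea attributed to IPD can be sketched as blending the expert's demonstrated action with the policy's action, with the expert's influence annealed over training. A minimal sketch, assuming a linear schedule; the function names and schedule are assumptions, not the authors' SAC-based implementation.

```python
import numpy as np

def ipd_action(policy_action, expert_action, step, anneal_steps=10_000):
    """Interpolate between expert and policy actions; the expert's weight
    decays from 1 to 0 as training progresses (assumed linear schedule)."""
    beta = max(0.0, 1.0 - step / anneal_steps)   # expert weight in [0, 1]
    return beta * np.asarray(expert_action) + (1.0 - beta) * np.asarray(policy_action)

# Example: early in training, the executed action stays near the demonstration.
print(ipd_action(policy_action=[0.8, -0.2], expert_action=[0.1, 0.4], step=1000))
```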

Reinforcement Learning-Based Intelligent Decision-Making for Communication Parameters

  • Xie, Xia;Dou, Zheng;Zhang, Yabin
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.9, pp.2942-2960, 2022
  • The core of cognitive radio is intelligent decision-making for communication parameters: finding the parameter configuration that optimizes transmission performance. Current algorithms suffer from heavy dependence on prior knowledge, a large amount of computation, and high complexity. We propose a new decision-making model that exploits the interactivity of reinforcement learning (RL) by applying the Q-learning algorithm. By simplifying the decision-making process, we avoid large-scale RL, reduce complexity, and improve timeliness. The proposed model finds the optimal waveform parameter configuration for the communication system in complex channels without prior knowledge, and it is more flexible than previous decision-making models. Simulation results demonstrate its effectiveness: it not only outperforms the traditional method in AWGN channels but also makes reasonable decisions in fading channels.
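
To make the Q-learning-based decision model concrete, here is a minimal tabular sketch in which states stand in for channel conditions and actions for waveform parameter configurations. The state/action counts and the placeholder reward are assumptions, not the paper's setup; in the paper's setting the reward would come from measured transmission performance.

```python
import numpy as np

N_STATES, N_ACTIONS = 10, 16            # assumed channel states / parameter sets
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1       # learning rate, discount, exploration
rng = np.random.default_rng(0)

def step(state, action):
    """Placeholder environment: returns (reward, next_state).
    A real system would measure e.g. throughput or BER here."""
    return -abs(action - state), int(rng.integers(N_STATES))

state = 0
for _ in range(5000):
    # Epsilon-greedy action selection over parameter configurations.
    action = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(Q[state].argmax())
    reward, nxt = step(state, action)
    # Standard Q-learning update toward the bootstrapped target.
    Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
    state = nxt
```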

Prediction of the shear capacity of reinforced concrete slender beams without stirrups by applying artificial intelligence algorithms in a big database of beams generated by 3D nonlinear finite element analysis

  • Markou, George;Bakas, Nikolaos P.
    • Computers and Concrete, v.28 no.6, pp.533-547, 2021
  • Calculating the shear capacity of slender reinforced concrete beams without shear reinforcement has been the subject of numerous studies, yet the long-standing problem of developing a single relationship that predicts the expected shear capacity remains open. Extrapolating formulae from experimental results has so far been the main approach, while over the last two decades various studies have attempted to use artificial intelligence algorithms and available datasets of experimentally tested beams to develop models with improved predictive capability. Given the limited number of available experimental databases, those studies were numerically constrained and unable to address the problem holistically. In this manuscript, a new approach is proposed: a numerically generated database is used to train machine-learning algorithms and develop an improved model for predicting the shear capacity of slender concrete beams reinforced only with longitudinal rebars. The proposed model was then validated against an available ACI database built from experimental results on physical reinforced concrete beam specimens without shear or compressive reinforcement. For the first time, a numerically generated database was used to train a model for computing the shear capacity of slender concrete beams without stirrups, and the model showed better predictive ability than the corresponding ACI equations. The analysis also indicates that the numerically generated database should be further enriched with additional data to improve training and extrapolation. Future work foresees the study of beams with stirrups and of deep beams to develop improved predictive models.
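
The workflow reads as: fit a regressor on the FEA-generated database, then validate against the experimental ACI database. A hedged sketch under assumed choices (gradient boosting, five synthetic features standing in for beam geometry and material properties); the paper's actual features and model are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Placeholder for the FEA-generated database; real columns might be beam
# width, depth, span-to-depth ratio, reinforcement ratio, concrete strength.
X_fea = rng.uniform(size=(5000, 5))
y_fea = X_fea @ np.array([1.0, 2.0, -0.5, 3.0, 1.5]) + rng.normal(0, 0.1, 5000)

# Train on the numerically generated data.
model = GradientBoostingRegressor().fit(X_fea, y_fea)

# Validation would use the experimental (e.g., ACI) database instead of
# this random placeholder.
X_exp, y_exp = rng.uniform(size=(200, 5)), rng.uniform(size=200)
print("MAE on validation data:", mean_absolute_error(y_exp, model.predict(X_exp)))
```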

Dynamic Positioning of Robot Soccer Simulation Game Agents using Reinforcement Learning

  • Kwon, Ki-Duk;Cho, Soo-Sin;Kim, In-Cheol
    • Proceedings of the Korea Intelligent Information Systems Society Conference, 2001.01a, pp.59-64, 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. It therefore differs from supervised learning in that there is no presentation of input-output pairs as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment, yet can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not scale up to more complex environments due to the intractably large space of states. To address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement over the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning each module a weight according to its contribution to rewards. In addition to handling large state spaces effectively, AMMQL therefore shows higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents details of its application to the dynamic positioning of robot soccer agents.
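
A hedged sketch of the combination rule described above: each module keeps its own Q-table over a small sub-state, and module outputs are mixed with weights that adapt to each module's contribution. The weight update below (an exponential average of a TD-error score) is an assumption, not the authors' exact formula.

```python
import numpy as np

class AMMQL:
    def __init__(self, n_modules, n_substates, n_actions, lr=0.1, gamma=0.9):
        self.q = np.zeros((n_modules, n_substates, n_actions))
        self.w = np.ones(n_modules) / n_modules   # adaptive mediation weights
        self.lr, self.gamma = lr, gamma

    def act(self, substates):
        # Weighted sum of module Q-values, instead of MQL's fixed combination.
        combined = sum(w * self.q[m, s]
                       for m, (w, s) in enumerate(zip(self.w, substates)))
        return int(np.argmax(combined))

    def update(self, substates, action, reward, next_substates):
        for m, (s, ns) in enumerate(zip(substates, next_substates)):
            td = reward + self.gamma * self.q[m, ns].max() - self.q[m, s, action]
            self.q[m, s, action] += self.lr * td
            # Assumed rule: raise a module's weight when its prediction
            # tracked the observed reward (small TD error).
            self.w[m] = 0.99 * self.w[m] + 0.01 * np.exp(-abs(td))
        self.w /= self.w.sum()                    # keep weights normalized
```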

Reinforcement Learning Approach to Agents Dynamic Positioning in Robot Soccer Simulation Games

  • Kwon, Ki-Duk;Kim, In-Cheol
    • Proceedings of the Korea Society for Simulation Conference, 2001.10a, pp.321-324, 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is machine learning in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. It therefore differs from supervised learning in that there is no presentation of input-output pairs as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment, yet can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not scale up to more complex environments due to the intractably large space of states. To address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement over the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning each module a weight according to its contribution to rewards. In addition to handling large state spaces effectively, AMMQL therefore shows higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents details of its application to the dynamic positioning of robot soccer agents.

Development of an Actor-Critic Deep Reinforcement Learning Platform for Robotic Grasping in Real World

  • Kim, Taewon;Park, Yeseong;Kim, Jong Bok;Park, Youngbin;Suh, Il Hong
    • The Journal of Korea Robotics Society, v.15 no.2, pp.197-204, 2020
  • In this paper, we present a learning platform for robotic grasping in the real world, in which actor-critic deep reinforcement learning is employed to learn the grasping skill directly from raw image pixels and rarely observed rewards. This is a challenging task because existing deep reinforcement learning algorithms require an extensive amount of training data or massive computational cost, making them unaffordable in real-world settings. To address these problems, the proposed learning platform consists of two training phases: a learning phase in the simulator and subsequent learning in the real world. The main processing blocks in the platform are the extraction of a latent vector based on state representation learning and disentanglement of the raw image, the generation of adapted synthetic images using generative adversarial networks, and object detection and arm segmentation for the disentanglement. We demonstrate the effectiveness of this approach in a real environment.
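
For readers unfamiliar with the actor-critic rule such a platform builds on, here is a minimal tabular sketch with a rarely observed reward. It is a toy stand-in under assumed sizes and dynamics; the paper's actual agent is a deep network operating on latent image vectors, which this sketch does not attempt to reproduce.

```python
import numpy as np

n_states, n_actions = 5, 2
theta = np.zeros((n_states, n_actions))   # actor: action preferences
V = np.zeros(n_states)                    # critic: state-value estimates
alpha_a, alpha_c, gamma = 0.1, 0.2, 0.95
rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

state = 0
for _ in range(2000):
    probs = softmax(theta[state])
    action = int(rng.choice(n_actions, p=probs))
    # Placeholder dynamics with a sparse, rarely observed reward.
    nxt = int(rng.integers(n_states))
    reward = 1.0 if (nxt == n_states - 1 and action == 1) else 0.0
    td = reward + gamma * V[nxt] - V[state]   # critic's TD error
    V[state] += alpha_c * td                  # critic update
    grad = -probs
    grad[action] += 1.0                       # grad of log pi(a|s)
    theta[state] += alpha_a * td * grad       # actor update
    state = nxt
```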

A Dynamic Channel Assignment Method in Cellular Networks Using Reinforcement Learning Method that Combines Supervised Knowledge

  • Kim, Sung-Wan;Chang, Hyeong-Soo
    • Journal of KIISE: Computing Practices and Letters, v.14 no.5, pp.502-506, 2008
  • The recently proposed "potential-based" reinforcement learning (RL) method makes it possible to combine multiple learners and expert advice as supervised knowledge within an RL framework. The effectiveness of the approach has been established by a theoretical guarantee of convergence to an optimal policy. In this paper, the potential-based RL method is applied to the dynamic channel assignment (DCA) problem in cellular networks. It is shown empirically that potential-based RL assigns channels more efficiently than fixed channel assignment, Maxavail, and Q-learning-based DCA, and that it converges to an optimal policy more rapidly than the other RL algorithms SARSA(0) and PRQ-learning.
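
Potential-based shaping, the mechanism underlying the cited method, injects knowledge through a potential function over states while provably preserving the optimal policy (Ng et al., 1999). A minimal sketch, with a hypothetical potential for DCA; the paper's exact way of folding supervised knowledge into the potential is not reproduced here.

```python
def shaped_reward(reward, state, next_state, phi, gamma=0.9):
    """r' = r + gamma*phi(s') - phi(s); this additive form provably leaves
    the optimal policy unchanged (Ng et al., 1999)."""
    return reward + gamma * phi(next_state) - phi(state)

# Hypothetical potential for dynamic channel assignment: prefer states that
# keep more channels available in the cell.
phi = lambda state: len(state["free_channels"])
```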

Development of Humanoid Robot HUMIC and Reinforcement Learning-based Robot Behavior Intelligence using Gazebo Simulator

  • Kim, Young-Gi;Han, Ji-Hyeong
    • The Journal of Korea Robotics Society, v.16 no.3, pp.260-269, 2021
  • Verifying performance or conducting experiments with real robots incurs substantial costs in robot hardware, experimental space, and time, so a simulation environment is an essential tool in robotics research. In this paper, we develop the HUMIC simulator using ROS and Gazebo. HUMIC is a humanoid robot developed by HCIR Lab. for human-robot interaction; its upper body resembles a human's, with a head, body, waist, arms, and hands. Gazebo is an open-source three-dimensional robot simulator that simulates robots accurately and efficiently, along with indoor and outdoor environments. We develop a GUI so that users can easily run and manipulate the HUMIC simulator, and we release both the simulator and the GUI for other robotics researchers to use. We successfully test the developed simulator on object detection and reinforcement learning-based navigation tasks. As a further study, we plan to develop robot behavior intelligence based on reinforcement learning algorithms using the developed simulator and then apply it to the real robot.
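
As an illustration of the kind of control loop an RL navigation policy would drive in a ROS/Gazebo setup like this, here is a minimal rospy sketch. The /cmd_vel topic and the constant action are assumptions; the HUMIC simulator's actual interfaces are not reproduced here.

```python
import rospy
from geometry_msgs.msg import Twist

def send_action(pub, linear, angular):
    """Translate a policy's action into a velocity command."""
    cmd = Twist()
    cmd.linear.x = linear
    cmd.angular.z = angular
    pub.publish(cmd)

if __name__ == "__main__":
    # Requires a running ROS master and a robot or simulator that
    # subscribes to the (assumed) /cmd_vel topic.
    rospy.init_node("rl_nav_demo")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        send_action(pub, 0.2, 0.0)   # a trained policy would choose these values
        rate.sleep()
```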