• Title/Summary/Keyword: reinforcement learning agent (강화 학습 에이전트)

Search results: 132

Design of Rotary Inverted Pendulum System Using Reinforcement Learning (강화학습을 이용한 회전식 도립진자 시스템 설계)

  • Kim, Ju-Bong;Kwon, Do-Hyung;Hong, Yong-Geun;Kim, Min-Suk;Han, Youn-Hee
    • Proceedings of the Korea Information Processing Society Conference / 2018.10a / pp.705-707 / 2018
  • The rotary inverted pendulum has often been used in the control field to illustrate nonlinear control systems. This paper introduces the rotary inverted pendulum as an environment for a reinforcement learning agent and thereby shows that reinforcement learning can solve complex real-world problems. To map the agent's virtual environment onto the real environment, the MQTT protocol is used over an Ethernet connection, highlighting the usefulness of reinforcement learning in the lightweight IoT domain. A sketch of such a bridge follows below.
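
A minimal sketch of such a virtual-to-real bridge, assuming the Python paho-mqtt client and hypothetical topic names and broker address (none of which appear in the paper), might look like this:

```python
# Hedged sketch: bridging an RL agent to a real pendulum over MQTT.
# The broker address and topic names are illustrative assumptions.
import json
import paho.mqtt.client as mqtt

BROKER = "192.168.0.10"  # hypothetical gateway for the pendulum hardware

def policy(state):
    # Placeholder: a trained agent would map the observation to a command.
    return 0.0

def on_connect(client, userdata, flags, rc):
    client.subscribe("pendulum/state")  # hardware publishes observations here

def on_message(client, userdata, msg):
    state = json.loads(msg.payload)     # e.g. {"theta": ..., "alpha": ...}
    action = policy(state)
    client.publish("pendulum/action", json.dumps({"torque": action}))

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()
```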

An Automatic Cooperative Coordination Model for the Multiagent System using Reinforcement Learning (강화학습을 이용한 멀티 에이전트 시스템의 자동 협력 조정 모델)

  • 정보윤;윤소정;오경환
    • Korean Journal of Cognitive Science / v.10 no.1 / pp.1-11 / 1999
  • Agent-based systems technology has generated much excitement in recent years because of its promise as a new paradigm for conceptualizing, designing, and implementing software systems. In particular, multi-agent systems have been studied intensively because their characteristics fit distributed and open Internet environments. In a multi-agent system, agents must cooperate with each other through a coordination procedure when conflicts between agents arise, which happens because each agent acts toward its own purpose without coordination. However, previous coordination methods for multi-agent systems cannot correctly solve the cooperation problem between agents that have different goals in a dynamic environment. In this paper, we solve the cooperation problem of multiple agents with multiple goals in a dynamic environment using an automatic cooperative coordination model based on reinforcement learning (the tabular update such agents build on is sketched below). We present two pursuit problems that extend a traditional problem in the multi-agent systems area to model the restriction of multiple goals in a dynamic environment, and we verify the validity of the proposed model experimentally.
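
The coordination model itself is not reproduced here, but the tabular Q-learning update that such pursuit agents build on is standard; a minimal sketch, with illustrative hyperparameters and action names, is:

```python
# Minimal tabular Q-learning skeleton of the kind pursuit agents build on.
# Hyperparameters and the action set are illustrative assumptions.
import random
from collections import defaultdict

ACTIONS = ["up", "down", "left", "right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
Q = defaultdict(float)  # keyed by (state, action)

def choose_action(state):
    if random.random() < EPSILON:                     # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```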


Intelligent Robot Design: Intelligent Agent Based Approach (지능로봇: 지능 에이전트를 기초로 한 접근방법)

  • Kang, Jin-Shig
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.4 / pp.457-467 / 2004
  • In this paper, a robot is considered as an agent, and a robot structure is presented that consists of multiple sub-agents with the diverse capacities required of a robot, such as perception, intelligence, and action. The sub-agents in turn consist of micro-agents (μ-agents), each in charge of an elementary action. The robot control structure has two sub-agents: a behavior-based reactive controller and an action-selection sub-agent. The action-selection sub-agent selects an action based on high-level goals and performance, and it has a learning mechanism based on reinforcement learning (a hypothetical sketch of this selection idea follows below). The presented structure makes it easy to give intelligence to each element of action and suggests a new approach to multi-robot control. The presented robot is simulated for two goals, chaotic exploration and obstacle avoidance, fabricated using an 8-bit microcontroller, and tested experimentally.
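
As a rough illustration of the action-selection sub-agent described above, one can picture a selector that keeps a learned value per behavior sub-agent; the behavior names and the update rule below are hypothetical, not the paper's design:

```python
# Hypothetical sketch of an action-selection sub-agent choosing among
# behavior sub-agents by a learned value; names are illustrative only.
BEHAVIORS = ["chaotic_exploration", "obstacle_avoidance"]

class ActionSelector:
    def __init__(self, alpha=0.1):
        self.values = {b: 0.0 for b in BEHAVIORS}
        self.alpha = alpha

    def select(self):
        # Pick the behavior sub-agent with the highest learned value.
        return max(self.values, key=self.values.get)

    def reinforce(self, behavior, reward):
        # Move the behavior's value toward the observed reward.
        self.values[behavior] += self.alpha * (reward - self.values[behavior])
```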

Implementation of the Agent using Universal On-line Q-learning by Balancing Exploration and Exploitation in Reinforcement Learning (강화 학습에서의 탐색과 이용의 균형을 통한 범용적 온라인 Q-학습이 적용된 에이전트의 구현)

  • 박찬건;양성봉
    • Journal of KIISE: Software and Applications / v.30 no.7/8 / pp.672-680 / 2003
  • A shopbot is a software agent whose goal is to maximize the buyer's satisfaction by automatically gathering price and quality information on goods, as well as information on services, from on-line sellers. In response to shopbots' activities, sellers on the Internet need agents, called pricebots, that can help them maximize their own profits. In this paper we adopt Q-learning, one of the model-free reinforcement learning methods, as the price-setting algorithm of pricebots. A Q-learning agent increases profitability and eliminates cyclic price wars when compared with agents using the myopically optimal (myoptimal) pricing strategy. Q-learning must select a sequence of state-action pairs for convergence. When state-action pairs are selected uniformly at random, the number of accesses to the Q-table required to obtain the optimal Q-values is quite large, so this approach is not appropriate for universal on-line learning in a real-world environment; uniform random selection reflects the uncertainty of exploitation of the optimal policy. We therefore propose a Mixed Nonstationary Policy (MNP), consisting of an auxiliary Markov process together with the original Markov process, which tries to keep the balance between exploration and exploitation in reinforcement learning (a generic form of this balance is sketched below). Our experimental results show that a Q-learning agent using MNP converges to the optimal Q-values about 2.6 times faster on average than uniform random selection.
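
MNP itself is the paper's contribution and is not reproduced here; as background, the exploration-exploitation balance it targets is commonly handled with a decaying exploration rate, as in this generic sketch (not MNP), where Q is assumed to be a dict keyed by (state, price):

```python
# Generic decaying-epsilon price selection (not the paper's MNP), shown
# only to illustrate the exploration/exploitation balance at issue.
import random

def select_price(Q, state, prices, episode,
                 eps_start=1.0, eps_min=0.05, decay=0.995):
    eps = max(eps_min, eps_start * decay ** episode)   # anneal exploration
    if random.random() < eps:
        return random.choice(prices)                   # explore a price
    return max(prices, key=lambda p: Q.get((state, p), 0.0))  # exploit
```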

Combining Imitation Learning and Reinforcement Learning for Visual-Language Navigation Agents (시각-언어 이동 에이전트를 위한 모방 학습과 강화 학습의 결합)

  • Oh, Suntaek;Kim, Incheol
    • Proceedings of the Korea Information Processing Society Conference / 2020.05a / pp.559-562 / 2020
  • The visual-language navigation problem is a compound intelligence problem that requires both visual understanding and language understanding. This paper proposes a new learning model for visual-language navigation agents. The model adopts a hybrid scheme combining imitation learning based on demonstration data with reinforcement learning based on action rewards, so the two complement each other: imitation learning's tendency to be biased toward the demonstration data and reinforcement learning's relatively low data efficiency are mutually relieved. The model also includes loss normalization to handle the learning imbalance that can arise between the two kinds of learning (a rough sketch of such a combined loss follows below). In addition, the proposed model identifies a problem with the goal-based reward functions used in prior work and instead uses a new optimal-path-based reward function designed to resolve it. Various experiments with the Matterport3D simulation environment and the R2R benchmark dataset demonstrate the high performance of the proposed model.
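
A rough sketch of such a combined objective with loss normalization, written in PyTorch where every symbol and the weighting scheme are illustrative assumptions rather than the paper's definitions:

```python
# Hedged sketch of an imitation + RL loss with normalization to balance
# the two terms; the paper's actual losses and weights may differ.
import torch
import torch.nn.functional as F

def combined_loss(logits, expert_actions, log_probs, advantages, beta=0.5):
    il_loss = F.cross_entropy(logits, expert_actions)   # imitation term
    rl_loss = -(log_probs * advantages).mean()          # policy-gradient term
    # Normalize each term by its detached magnitude so neither dominates.
    il_norm = il_loss / (il_loss.detach().abs() + 1e-8)
    rl_norm = rl_loss / (rl_loss.detach().abs() + 1e-8)
    return beta * il_norm + (1.0 - beta) * rl_norm
```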

A Performance Improvement Technique for Nash Q-learning using Macro-Actions (매크로 행동을 이용한 내시 Q-학습의 성능 향상 기법)

  • Sung, Yun-Sik;Cho, Kyun-Geun;Um, Ky-Hyun
    • Journal of Korea Multimedia Society / v.11 no.3 / pp.353-363 / 2008
  • A multi-agent system has a longer learning period and a larger state space than a single-agent system. In this paper, we suggest a new method to reduce the learning time of Nash Q-learning in a multi-agent environment: we apply macro-actions to Nash Q-learning to improve the learning speed. In this scheme, when agents select macro-actions, rewards are accumulated over the macro-action (as sketched below). In the experiments, we compare Nash Q-learning using macro-actions with plain Nash Q-learning. First, we observed how often the agents achieve their goals: agents using Nash Q-learning with 4 macro-actions performed 9.46% better than agents using Nash Q-learning with only 4 primitive actions. Second, when agents use macro-actions, Q-values are accumulated 2.6 times more. Finally, agents using macro-actions select about 44% fewer actions. As a result, agents select fewer actions, the macro-actions improve the Q-value updates, and the agents' learning speed improves.
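
The reward accumulation for a macro-action follows ordinary n-step discounting, independent of the Nash Q-learning machinery; a minimal sketch with a hypothetical environment interface:

```python
# Accumulating discounted rewards over a macro-action (a fixed sequence
# of primitive actions); env.step is a hypothetical interface.
def run_macro_action(env, state, primitive_actions, gamma=0.9):
    total, discount = 0.0, 1.0
    for action in primitive_actions:
        state, reward, done = env.step(state, action)
        total += discount * reward    # rewards accumulate over the macro
        discount *= gamma
        if done:
            break
    return state, total, discount    # discount = gamma**k for the Q update
```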


A Study on the Influence of the Central Manager in a Reinforcement Learning Based Real-Time Reactive Quest Generation System (강화학습 기반 실시간 반응형 퀘스트 생성 시스템 중앙 관리자 영향력 연구)

  • Kim, Tae-Hun;Kim, Chang-Jae
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.499-500 / 2023
  • We propose a system that receives the real-time situation of a server through a reinforcement-learning-based multi-agent system and generates quests appropriate to that situation. By separating out a dedicated agent to play the central-manager role of CTDE, which acts as a learning guide, the system sets the direction of the generated quests (a skeletal CTDE arrangement is sketched below).
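
As context for the CTDE (centralized training, decentralized execution) central-manager role mentioned above, a skeletal arrangement might look like the following; the class names, observations, and guidance signal are purely illustrative assumptions:

```python
# Skeletal CTDE arrangement: a central manager sees the joint state during
# training while each quest agent acts on local observations. Illustrative.
class CentralManager:
    def guide(self, joint_state):
        # Centralized side: observe everything and set the quest direction.
        return {"quest_focus": joint_state.get("hotspot", "default")}

class QuestAgent:
    def act(self, local_obs, guidance):
        # Decentralized side: act on a local observation plus the guidance.
        return {"quest": "respond", "focus": guidance["quest_focus"]}
```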

FuzzyQ-Learning to Process the Vague Goals of Intelligent Agent (지능형 에이전트의 모호한 목적을 처리하기 위한 FuzzyQ-Learning)

  • 서호섭;윤소정;오경환
    • Proceedings of the Korean Information Science Society Conference / 2000.04b / pp.271-273 / 2000
  • In general, an intelligent agent must be able to find the optimal action on its own from the user's goal and the surrounding environment. If the agent's goal or its environment contains uncertainty, the agent has difficulty selecting an appropriate action. However, no previous work has addressed the case where the user's goal is expressed as a linguistic value that carries the uncertainty of human knowledge. In this paper, we propose a method that represents the user's vague intention as a fuzzy goal and the uncertain environment perceived by the agent as fuzzy states. Using a fuzzy reinforcement function extended with the fuzzy goal and states, we extend Q-learning, one of the existing reinforcement learning algorithms, to FuzzyQ-Learning and verify its validity (the general fuzzy-state idea is sketched below).
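
A minimal sketch of the general fuzzy-Q idea, where a continuous observation activates several fuzzy states and each is updated in proportion to its membership; the membership shapes, state labels, and action set below are assumptions, not the paper's design:

```python
# Sketch of fuzzy-state Q-learning: memberships weight the tabular update.
def triangular(x, left, center, right):
    # Standard triangular membership function.
    if x <= left or x >= right:
        return 0.0
    if x <= center:
        return (x - left) / (center - left)
    return (right - x) / (right - center)

FUZZY_STATES = {"near": (-5, 0, 5), "mid": (0, 5, 10), "far": (5, 10, 15)}
ACTIONS = ["forward", "turn"]

def fuzzy_update(Q, x, action, reward, next_x, alpha=0.1, gamma=0.9):
    # Value of the next observation: membership-weighted best action value.
    next_v = sum(
        triangular(next_x, *p) * max(Q.get((s, a), 0.0) for a in ACTIONS)
        for s, p in FUZZY_STATES.items()
    )
    target = reward + gamma * next_v
    for s, p in FUZZY_STATES.items():
        mu = triangular(x, *p)
        if mu > 0.0:
            key = (s, action)
            Q[key] = Q.get(key, 0.0) + alpha * mu * (target - Q.get(key, 0.0))
```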


A slide reinforcement learning for the consensus of a multi-agent system (다중 에이전트 시스템의 컨센서스를 위한 슬라이딩 기법 강화학습)

  • Yang, Janghoon
    • Journal of Advanced Navigation Technology / v.26 no.4 / pp.226-234 / 2022
  • With advances in autonomous vehicles and networked control, there is growing interest in consensus control of a multi-agent system, which controls multiple agents in a distributed fashion, beyond the control of a single agent. Since consensus control is a form of distributed control, it is bound to suffer delay in a practical system. In addition, it is often difficult to obtain a very accurate mathematical model of a system. Although reinforcement learning (RL) methods have been developed to deal with these issues, they often converge slowly in the presence of large uncertainties. Thus, we propose slide RL, which combines sliding mode control with RL to be robust to the uncertainties: the structure of a sliding mode control is introduced into the action, while an auxiliary sliding variable is included in the state information (both ingredients are sketched schematically below). Numerical simulation results show that slide RL provides performance comparable to model-based consensus control in the presence of unknown time-varying delay and disturbance, while outperforming existing state-of-the-art RL-based consensus algorithms.
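
A schematic sketch of the two ingredients described, an auxiliary sliding variable appended to the state and a sliding-mode-style term added to the action; the gains, error definition, and smoothing choice are illustrative assumptions:

```python
import numpy as np

# Schematic of the slide-RL ingredients described above; lam and k are
# illustrative gains, and rl_policy stands in for the learned policy.
def sliding_variable(error, error_rate, lam=1.0):
    # Classic sliding surface s = de/dt + lambda * e.
    return error_rate + lam * error

def slide_rl_action(obs, error, error_rate, rl_policy, k=0.5):
    s = sliding_variable(error, error_rate)
    augmented_obs = np.append(obs, s)      # sliding variable joins the state
    u_learned = rl_policy(augmented_obs)   # output of the RL policy
    u_switch = -k * np.tanh(s)             # smoothed sliding-mode term
    return u_learned + u_switch
```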

A Study about the Usefulness of Reinforcement Learning in Business Simulation Games using PPO Algorithm (경영 시뮬레이션 게임에서 PPO 알고리즘을 적용한 강화학습의 유용성에 관한 연구)

  • Liang, Yi-Hong;Kang, Sin-Jin;Cho, Sung Hyun
    • Journal of Korea Game Society / v.19 no.6 / pp.61-70 / 2019
  • In this paper, we apply reinforcement learning to business simulation games to check whether game agents can autonomously achieve a given goal. In this system, we apply the PPO (Proximal Policy Optimization) algorithm in the Unity Machine Learning (ML) Agents environment, and the game agent is designed to automatically find a way to play. Five game-scenario simulation experiments were conducted to verify its usefulness. As a result, it was confirmed that the game agent achieves the goal through learning despite changes to the environment variables in the game (the clipped objective that PPO optimizes is sketched below).
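
The objective behind a PPO trainer such as the one in ML-Agents is the standard clipped surrogate; a minimal PyTorch sketch of that loss term (not the Unity configuration itself) is:

```python
import torch

# Standard PPO clipped surrogate loss; epsilon is the usual clip range.
def ppo_clip_loss(new_log_probs, old_log_probs, advantages, epsilon=0.2):
    ratio = torch.exp(new_log_probs - old_log_probs)        # pi_new / pi_old
    clipped = torch.clamp(ratio, 1.0 - epsilon, 1.0 + epsilon)
    # Take the pessimistic (minimum) surrogate, negated for gradient descent.
    return -torch.min(ratio * advantages, clipped * advantages).mean()
```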