• Title/Summary/Keyword: Learning Agent

448 search results

Intelligent Agent System by Self Organizing Neural Network

  • Cho, Young-Im
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2005.06a
    • /
    • pp.1468-1473
    • /
    • 2005
  • In this paper, I propose INTAS, an intelligent agent system based on Kohonen's self-organizing neural network. INTAS builds each user's profile from the available information and, based on that profile, automatically groups learners into learning communities suited to each individual using an unsupervised learning algorithm. In INTAS, grouping and learning are performed automatically in real time by multi-agents, regardless of the number of learners. A new framework for generating multi-agents is proposed, together with a new negotiation model between multi-agents that lets them execute efficiently.

  • PDF
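
The grouping step described in the abstract above — clustering learner profiles with a Kohonen self-organizing map and assigning learners that map to the same unit to one community — can be sketched as follows. This is a minimal illustration; the profile features, map size, and decay schedules are assumptions, not the paper's implementation.

```python
import numpy as np

def train_som(profiles, grid=(4, 4), epochs=100, lr0=0.5, sigma0=2.0):
    """Train a small Kohonen SOM; returns one weight vector per map unit."""
    rng = np.random.default_rng(0)
    n_units, dim = grid[0] * grid[1], profiles.shape[1]
    weights = rng.random((n_units, dim))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5    # shrinking neighborhood
        for x in profiles:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best matching unit
            d = np.linalg.norm(coords - coords[bmu], axis=1)
            h = np.exp(-d**2 / (2 * sigma**2))     # neighborhood function
            weights += lr * h[:, None] * (x - weights)
    return weights

# Hypothetical learner profiles, one row each; learners that share a
# best-matching unit form one learning community.
profiles = np.random.default_rng(1).random((20, 5))
w = train_som(profiles)
communities = [int(np.argmin(np.linalg.norm(w - p, axis=1))) for p in profiles]
print(communities)
```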

Research of Foresight Knowledge by CMAC based Q-learning in Inhomogeneous Multi-Agent System

  • Hoshino, Yukinobu;Sakakura, Akira;Kamei, Katsuari
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2003.09a
    • /
    • pp.280-283
    • /
    • 2003
  • The purpose of our research is the acquisition of cooperative behaviors in an inhomogeneous multi-agent system. We use the fire panic problem as the experimental environment: a fire exists in the environment and, at each step of the agents' behavior, spreads according to a fixed law, intensifying every few steps. The purpose of each agent is to reach an established goal without touching the fire. Because the fire is uncertain from the agent's point of view, the agent has to avoid the spreading fire while acquiring behavior that reaches the goal. In this paper, we observe how agents escape from the fire by cooperating with other agents, and for this problem we propose a unique CMAC-based Q-learning system for inhomogeneous multi-agent systems.

  • PDF
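
A CMAC approximates the Q function with several overlapping coarse tilings of the state space, which is what the abstract's "CMAC based Q-learning" refers to. The sketch below shows the general technique under assumed tiling parameters and a 2D state in [0, 1]^2; the fire panic environment itself is not reproduced.

```python
import numpy as np

class CMACQ:
    """Q-learning with a CMAC (overlapping tile codings) as the Q approximator."""
    def __init__(self, n_tilings=4, tiles=8, n_actions=4, lr=0.1, gamma=0.95):
        self.n_tilings, self.tiles, self.n_actions = n_tilings, tiles, n_actions
        self.lr, self.gamma = lr / n_tilings, gamma     # share the step size
        self.w = np.zeros((n_tilings, tiles, tiles, n_actions))

    def _cells(self, state):
        # Each tiling is shifted slightly so the cells overlap.
        for k in range(self.n_tilings):
            off = k / (self.n_tilings * self.tiles)
            x = min(int((state[0] + off) * self.tiles), self.tiles - 1)
            y = min(int((state[1] + off) * self.tiles), self.tiles - 1)
            yield k, x, y

    def q(self, state):
        return sum(self.w[k, x, y] for k, x, y in self._cells(state))

    def update(self, s, a, r, s_next, done):
        target = r if done else r + self.gamma * self.q(s_next).max()
        delta = target - self.q(s)[a]
        for k, x, y in self._cells(s):
            self.w[k, x, y, a] += self.lr * delta

agent = CMACQ()
agent.update((0.3, 0.7), a=2, r=1.0, s_next=(0.35, 0.7), done=False)
```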

A Naive Bayesian-based Model of the Opponent's Policy for Efficient Multiagent Reinforcement Learning

  • Kwon, Ki-Duk
    • Journal of Internet Computing and Services
    • /
    • v.9 no.6
    • /
    • pp.165-177
    • /
    • 2008
  • An important issue in multiagent reinforcement learning is how an agent should learn its optimal policy in a dynamic environment where other agents can influence its performance. Most previous work on multiagent reinforcement learning either applies single-agent reinforcement learning techniques without any extension or requires unrealistic assumptions even when it uses explicit models of the other agents. In this paper, a naive Bayesian policy model of the opponent agent is introduced, and a multiagent reinforcement learning method using this model is explained. Unlike previous work, the proposed method uses a naive Bayesian model of the opponent's policy rather than a model of the opponent's Q function. This also improves learning efficiency, since the model is simpler than richer but time-consuming policy models such as finite state machines (FSMs) and Markov chains. The Cat and Mouse game is introduced as an adversarial multiagent environment, and the effectiveness of the proposed naive Bayesian policy model is analyzed through experiments using this game as a test bed.

  • PDF
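
The opponent model in the abstract is, in essence, a conditional distribution P(opponent action | state features) estimated with naive Bayes from observed moves. A minimal sketch with add-one smoothing follows; the discrete features (here, the relative offset to the mouse) are a hypothetical choice, not the paper's encoding.

```python
from collections import defaultdict

class NaiveBayesOpponentModel:
    """Estimates P(a | features) proportional to P(a) * prod_i P(f_i | a)."""
    def __init__(self, actions):
        self.actions = actions
        self.action_counts = {a: 1 for a in actions}   # add-one smoothing
        self.feature_counts = defaultdict(lambda: 1)   # (i, value, action) -> count

    def observe(self, features, action):
        self.action_counts[action] += 1
        for i, v in enumerate(features):
            self.feature_counts[(i, v, action)] += 1

    def predict(self, features):
        total = sum(self.action_counts.values())
        probs = {}
        for a in self.actions:
            p = self.action_counts[a] / total          # prior P(a)
            for i, v in enumerate(features):
                p *= self.feature_counts[(i, v, a)] / self.action_counts[a]
            probs[a] = p
        z = sum(probs.values())
        return {a: p / z for a, p in probs.items()}

model = NaiveBayesOpponentModel(["up", "down", "left", "right"])
model.observe((1, 0), "right")                         # (dx, dy) to the mouse
print(model.predict((1, 0)))
```

The predicted distribution can then stand in for the opponent's unknown policy when the learning agent computes its expected return, which is the substitution the abstract describes.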

Multi-Agent Control Strategy using Reinforcement Learning

  • Lee, Hyung-Il
    • Journal of Korea Multimedia Society
    • /
    • v.6 no.5
    • /
    • pp.937-944
    • /
    • 2003
  • The most important problems in a multi-agent system are to accomplish a goal through the efficient coordination of several agents and to prevent collisions with other agents. In this paper, we propose a new control strategy for efficiently achieving the goal of the prey-pursuit problem. Our control method uses reinforcement learning to control the multi-agent system and considers the distance as well as the spatial relationships among the agents in the state space of the prey-pursuit problem.

  • PDF
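
A sketch of the tabular Q-learning core for such a pursuit task, with a state that encodes distance and spatial relationships as relative offsets; the encoding, reward, and grid dynamics are illustrative assumptions.

```python
import random
from collections import defaultdict

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # N, S, E, W moves
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = defaultdict(float)                          # (state, action) -> value

def state(hunter, prey, other_hunter):
    # Relative offsets capture both distance and spatial relationship.
    return (prey[0] - hunter[0], prey[1] - hunter[1],
            other_hunter[0] - hunter[0], other_hunter[1] - hunter[1])

def choose(s):
    if random.random() < EPS:                   # epsilon-greedy exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

def update(s, a, reward, s_next):
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])

s = state(hunter=(0, 0), prey=(2, 1), other_hunter=(1, 1))
a = choose(s)
update(s, a, 0.0, state((0, 1), (2, 1), (1, 1)))
```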

A Study on a Web-Based Intelligent Tutoring System for Collaborative Learning: A Case of a Scheduling Agent System for Figure Learning

  • Han, Seon-Kwan;Kim, Se-Hyung;Jo, Geun-Sik
    • Proceedings of the Korea Intelligent Information Systems Society Conference
    • /
    • 1999.10a
    • /
    • pp.269-279
    • /
    • 1999
  • This study proposes the design and implementation of a scheduling agent that recruits learners by level for remote collaborative learning on the Web. The system consists of a remote teacher module, multiple learners, scheduling agents that connect them, and a diagnosis agent that assesses each learner. As computing has moved to distributed environments, change in education has accelerated, and the sharing of knowledge and information through remote collaborative learning has become essential. Remote collaborative learning requires a setting in which several children interested in the same subject and topic can study at the same time, and their prior knowledge must be at a similar level for learning on a shared topic to be effective. To identify such learners, the diagnosis agent assesses each learner and adds the result to the scheduling agents' learner knowledge; the scheduling agents then reason over each learner's basic information and requirements to connect learners of similar levels. The teacher module consists of the teaching/learning module and expert module of a traditional ITS architecture, so it can conduct instruction. For several learners connected in this way to learn collaboratively, their requirements, knowledge levels, and available study times must coincide, so we model the problem as dynamic resource scheduling with time as the resource. We evaluated the implemented intelligent scheduling agent for remote collaborative learning through experiments based on figure learning.

  • PDF
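
The matching step the abstract describes — connecting learners whose diagnosed level is the same and whose available times overlap, with time treated as the scheduled resource — can be sketched as below. The data model is a deliberate simplification of the paper's dynamic resource scheduling, not its implementation.

```python
from dataclasses import dataclass

@dataclass
class Learner:
    name: str
    level: int        # knowledge level assigned by the diagnosis agent
    slots: set        # available time slots (time as the resource)

def match_groups(learners, min_size=2):
    """Group learners that share a knowledge level and a common time slot."""
    by_level = {}
    for l in learners:
        by_level.setdefault(l.level, []).append(l)
    groups = []
    for level, group in by_level.items():
        common = set.intersection(*(l.slots for l in group))
        if len(group) >= min_size and common:
            groups.append((level, [l.name for l in group], min(common)))
    return groups

learners = [Learner("A", 2, {9, 10}), Learner("B", 2, {10, 11}),
            Learner("C", 1, {9})]
print(match_groups(learners))   # -> [(2, ['A', 'B'], 10)]
```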

The Automatic Coordination Model for Multi-Agent System Using Learning Method

  • Lee, Mal-Rye;Kim, Sang-Geun
    • The KIPS Transactions: Part B
    • /
    • v.8B no.6
    • /
    • pp.587-594
    • /
    • 2001
  • Multi-agent systems fit distributed and open Internet environments. In a multi-agent system, agents must cooperate with each other through a coordination procedure when conflicts arise between them; such conflicts occur because each agent acts toward its own purpose without coordination. Previous research on coordination methods in multi-agent systems, however, cannot properly solve the cooperation problem between agents that have different goals in a dynamic environment. In this paper, we suggest an automatic coordination model for multi-agent systems that uses a neural network and reinforcement learning in a dynamic environment. We run competitive experiments between multi-agents in a complex environment with diverse activities, and we analyze and evaluate the effects of the agents' activities. The results show that the proposed method is effective.

  • PDF
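
As a concrete reading of "neural network and reinforcement learning," the sketch below trains a one-hidden-layer Q-network with a semi-gradient TD(0) update. The architecture, sizes, and conflict-state encoding are assumptions; the abstract does not specify the paper's actual network.

```python
import numpy as np

class NeuralQ:
    """One-hidden-layer Q-network trained with a semi-gradient TD(0) update."""
    def __init__(self, n_in, n_hidden, n_actions, lr=0.01, gamma=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_actions))
        self.lr, self.gamma = lr, gamma

    def forward(self, x):
        h = np.tanh(x @ self.W1)
        return h, h @ self.W2        # hidden activations, Q-values

    def update(self, s, a, r, s_next, done):
        h, q = self.forward(s)
        _, q_next = self.forward(s_next)
        target = r if done else r + self.gamma * q_next.max()
        delta = target - q[a]
        grad_out = np.zeros_like(q)  # TD error flows through the chosen action
        grad_out[a] = delta
        grad_h = (self.W2 @ grad_out) * (1 - h**2)   # tanh derivative
        self.W2 += self.lr * np.outer(h, grad_out)
        self.W1 += self.lr * np.outer(s, grad_h)

net = NeuralQ(n_in=4, n_hidden=16, n_actions=3)
s = np.random.default_rng(1).random(4)   # hypothetical conflict-state features
net.update(s, a=0, r=1.0, s_next=s, done=False)
```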

Comparison of Reinforcement Learning Algorithms for a 2D Racing Game Learning Agent

  • Lee, Dongcheul
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.1
    • /
    • pp.171-176
    • /
    • 2020
  • Reinforcement learning is a well-known method for training an artificial software agent to play a video game. Even though many reinforcement learning algorithms have been proposed, their performance varies depending on the application area. This paper compares the performance of these algorithms when training a reinforcement learning agent for a 2D racing game. We defined performance metrics to analyze the results and plotted them in various graphs. As a result, we found that ACER (Actor Critic with Experience Replay) achieved higher rewards than the other algorithms, with a 157% gap between ACER and the worst algorithm.
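
A comparison like this reduces to training each algorithm's agent and scoring them with a shared reward metric. The sketch below shows only such an evaluation harness, with stand-in agents and a Gym-style environment; it does not reimplement ACER or the paper's racing game.

```python
import random

class DummyEnv:
    """Stand-in environment with a Gym-style reset/step interface."""
    def reset(self):
        self.t = 0
        return 0
    def step(self, action):
        self.t += 1
        return self.t, float(action), self.t >= 10, {}

class RandomAgent:
    def act(self, obs):
        return random.choice([0, 1])

def evaluate(agent, env, episodes=100):
    """Mean episodic reward for one trained agent."""
    total = 0.0
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            obs, reward, done, _ = env.step(agent.act(obs))
            total += reward
    return total / episodes

def compare(agents, env):
    """Rank agents by mean reward and report the best-to-worst gap in percent."""
    scores = {name: evaluate(agent, env) for name, agent in agents.items()}
    best, worst = max(scores.values()), min(scores.values())
    gap = (best - worst) / abs(worst) * 100 if worst else float("inf")
    return scores, gap

print(compare({"agent-a": RandomAgent(), "agent-b": RandomAgent()}, DummyEnv()))
```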

Solving Continuous Action/State Problem in Q-Learning Using Extended Rule Based Fuzzy Inference System

  • Kim, Min-Soeng;Lee, Ju-Jang
    • Transactions on Control, Automation and Systems Engineering
    • /
    • v.3 no.3
    • /
    • pp.170-175
    • /
    • 2001
  • Q-learning is a kind of reinforcement learning in which the agent solves the given task based on rewards received from the environment. Most research in the field of Q-learning has focused on discrete domains, although the environment with which the agent must interact is generally continuous, so methods are needed that make Q-learning applicable to continuous problem domains. In this paper, an extended fuzzy rule is proposed that can incorporate Q-learning. The interpolation technique widely used in memory-based learning is adopted to represent the appropriate Q-value for the current state-action pair in each extended fuzzy rule. The resulting structure, based on a fuzzy inference system, is capable of handling continuous states of the environment. The effectiveness of the proposed structure is shown through simulation on the cart-pole system.

  • PDF
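
A minimal sketch of a fuzzy-rule Q-learner in this spirit: each rule stores one Q-value per action, the Q-value for a continuous state is the firing-strength-weighted interpolation over rules, and the TD error is distributed back to the rules in proportion to their strengths. The Gaussian memberships and 2D normalized state are assumptions rather than the paper's rule base.

```python
import numpy as np

class FuzzyQ:
    """Q(s, a) = sum_i w_i(s) q[i, a] with normalized firing strengths w_i."""
    def __init__(self, centers, n_actions, width=0.5, lr=0.1, gamma=0.95):
        self.centers = np.asarray(centers)       # one fuzzy rule per center
        self.width, self.lr, self.gamma = width, lr, gamma
        self.q = np.zeros((len(centers), n_actions))

    def strengths(self, s):
        d = np.linalg.norm(self.centers - s, axis=1)
        w = np.exp(-(d / self.width) ** 2)       # Gaussian membership
        return w / w.sum()

    def q_values(self, s):
        return self.strengths(s) @ self.q        # interpolated Q for continuous s

    def update(self, s, a, r, s_next, done):
        target = r if done else r + self.gamma * self.q_values(s_next).max()
        delta = target - self.q_values(s)[a]
        self.q[:, a] += self.lr * self.strengths(s) * delta  # share by strength

# Rule centers tile a hypothetical normalized 2D state space.
centers = [(x, y) for x in np.linspace(0, 1, 3) for y in np.linspace(0, 1, 3)]
agent = FuzzyQ(centers, n_actions=2)
s = np.array([0.2, 0.4])
agent.update(s, a=1, r=1.0, s_next=np.array([0.3, 0.4]), done=False)
print(agent.q_values(s))
```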

A Study on Environmental Adaptation and Extensibility of Intelligent Agents

  • Baek, Hae-Jung;Park, Young-Tack
    • The KIPS Transactions: Part B
    • /
    • v.10B no.7
    • /
    • pp.795-802
    • /
    • 2003
  • To live autonomously, intelligent agents such as robots or virtual characters need the ability to recognize their environment and to learn and choose adaptive actions, so we propose an action selection/learning mechanism for intelligent agents. The proposed mechanism employs a hybrid system that integrates a behavior-based method using reinforcement learning and a cognition-based method using symbolic learning. Its characteristics are as follows. First, because it learns actions adapted to the environment through reinforcement learning, our agents are flexible with respect to environmental changes. Second, because it learns environmental factors relevant to the agent's goals using inductive machine learning and association rules, the agent learns and selects appropriate actions faster in a given surrounding and more efficiently in extended surroundings. Third, in implementing the intelligent agents, we consider only the states recognized by a state detector rather than all possible states. Because this method considers only the necessary states, it reduces memory usage, and because it represents and processes new states dynamically, it copes with environmental changes spontaneously.
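
The memory-saving idea in the third point — storing only the states a state detector actually reports, and registering new states dynamically — can be sketched with a Q-table that grows on demand. The detector and state names below are placeholders, not the paper's representation.

```python
from collections import defaultdict

class SparseQTable:
    """Q-table that allocates entries only for states the detector has seen."""
    def __init__(self, actions):
        self.actions = actions
        self.table = defaultdict(lambda: {a: 0.0 for a in actions})

    def best_action(self, state):
        q = self.table[state]          # created on first encounter
        return max(q, key=q.get)

    def update(self, state, action, td_target, lr=0.1):
        q = self.table[state]
        q[action] += lr * (td_target - q[action])

qt = SparseQTable(["left", "right"])
qt.update("near_obstacle", "left", td_target=1.0)     # hypothetical detected state
print(qt.best_action("near_obstacle"), len(qt.table))  # only one state is stored
```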

Comparison of Deep Learning Activation Functions for Performance Improvement of a 2D Shooting Game Learning Agent

  • Lee, Dongcheul;Park, Byungjoo
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.19 no.2
    • /
    • pp.135-141
    • /
    • 2019
  • Recently, there has been active research on building artificial intelligence agents that learn how to play a game using reinforcement learning. Learning performance can vary according to which deep learning activation functions are used when training the agent. This paper compares activation functions when training our agent to play a 2D shooting game using reinforcement learning. We defined performance metrics to analyze the results and plotted them over training time. As a result, we found that ELU (Exponential Linear Unit) with parameter 1.0 achieved higher rewards than the other activation functions, with a 23.6% gap between the best and the worst activation function.
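
The compared functions have simple closed forms; for reference, here are definitions of ELU with alpha = 1.0 (the best performer reported above) alongside two common alternatives. This is a minimal illustration, not the authors' training code.

```python
import numpy as np

def elu(x, alpha=1.0):
    """ELU: x for x >= 0, alpha * (exp(x) - 1) below zero."""
    return np.where(x >= 0, x, alpha * (np.exp(x) - 1))

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, slope=0.01):
    return np.where(x >= 0, x, slope * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(elu(x), relu(x), leaky_relu(x))
```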