• Title/Summary/Keyword: adaptive agent

Search Results: 121

An Adaptive Information Filtering Agent based on User's Combined Behaviors (사용자의 결합된 행동을 이용한 적응형 정보여과 에이전트)

  • 송용수;홍언주;오경환
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2002.04b
    • /
    • pp.268-270
    • /
    • 2002
  • This paper presents the design and implementation of an information filtering agent that filters online news articles and selectively presents only those relevant to the user. It describes how to construct an accurate user profile, the core of information filtering, more precisely by using feedback that combines the user's explicit and implicit relevance responses to the presented information. Experiments showed that an information filtering agent based on the user's combined relevance-feedback behaviors achieves better accuracy and adaptability than one using a single form of feedback.

  • PDF
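
The combined-feedback idea in the abstract above can be sketched as a weighted profile update. This is a minimal illustrative sketch, not the paper's actual formulas: the weights `ALPHA`/`BETA`, the dwell-time heuristic, and all function names are assumptions.

```python
# Hypothetical sketch: combine explicit and implicit relevance feedback
# into one relevance score, then nudge a term-weight user profile toward
# the article (a Rocchio-style update). Weights are illustrative only.

ALPHA = 0.7  # trust placed in explicit ratings (assumed)
BETA = 0.3   # trust placed in implicit signals (assumed)

def implicit_score(read_seconds, clicked):
    """Map implicit behaviors (dwell time, click) to a relevance guess in [0, 1]."""
    score = min(read_seconds / 60.0, 1.0)  # cap dwell time at one minute
    return min(score + (0.2 if clicked else 0.0), 1.0)

def update_profile(profile, article_terms, explicit, read_seconds, clicked, lr=0.1):
    """Move profile term weights toward the article, scaled by combined feedback."""
    relevance = ALPHA * explicit + BETA * implicit_score(read_seconds, clicked)
    for term in article_terms:
        profile[term] = profile.get(term, 0.0) + lr * relevance
    return profile

profile = update_profile({}, ["economy", "policy"], explicit=1.0,
                         read_seconds=90, clicked=True)
```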

Adaptive Response in CHO Cells by Bleomycin, Mitomycin C and Cadmium (Bleomycin, Mitomycin C 및 Cadmium에 의한 CHO 세포의 적응반응)

  • 김양지;한정호;정해원
    • Journal of Environmental Health Sciences
    • /
    • v.18 no.2
    • /
    • pp.117-124
    • /
    • 1992
  • Pretreatment with a low concentration of Bleomycin or Cadmium rendered Chinese Hamster Ovary (CHO) cells more resistant to the induction of chromosome aberrations by a subsequent high concentration of the same agent; Mitomycin C, however, did not act in this way. Cells pre-exposed to a low dose of Cadmium did not show cross-resistance to a challenge dose of Mitomycin C for the induction of chromosome aberrations, but cells pre-exposed to Bleomycin did. Likewise, cells pre-exposed to a low dose of Mitomycin C showed cross-resistance to a Bleomycin challenge, whereas cells pre-exposed to Cadmium did not.

  • PDF

Dynamic Positioning of Robot Soccer Simulation Game Agents using Reinforcement learning

  • Kwon, Ki-Duk;Cho, Soo-Sin;Kim, In-Cheol
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2001.01a
    • /
    • pp.59-64
    • /
    • 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is the machine learning paradigm in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. It therefore differs from supervised learning in that there is no presentation of input-output pairs as training examples. Furthermore, model-free reinforcement learning algorithms like Q-learning do not require defining or learning any model of the surrounding environment; nevertheless, they can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not scale up to more complex environments because of the intractably large state space. To address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning each module a different weight according to its contribution to rewards. Therefore, in addition to handling the large state space effectively, AMMQL shows higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents details of its application to dynamic positioning of robot soccer agents.

  • PDF
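
The mediation idea in the abstract above can be sketched as modular Q-learners whose outputs are combined with adaptive weights. This is a sketch in the spirit of AMMQL, not the paper's exact algorithm: the weighting rule, learning rates, and class names are assumptions.

```python
# Illustrative sketch of mediation-based modular Q-learning: each module
# keeps its own Q-table; the mediator combines their Q-values with weights
# that shift toward modules contributing more reward (assumed rule).
from collections import defaultdict

class Module:
    def __init__(self, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)   # (state, action) -> value
        self.alpha, self.gamma = alpha, gamma

    def update(self, s, a, r, s2, actions):
        """Standard one-step Q-learning update for this module."""
        best = max(self.q[(s2, a2)] for a2 in actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best - self.q[(s, a)])

class Mediator:
    def __init__(self, modules):
        self.modules = modules
        self.weights = [1.0 / len(modules)] * len(modules)

    def choose(self, s, actions):
        # Greedy action over the weighted sum of module Q-values.
        def combined(a):
            return sum(w * m.q[(s, a)] for w, m in zip(self.weights, self.modules))
        return max(actions, key=combined)

    def adapt(self, rewards):
        # Shift weight toward modules whose recent reward was larger
        # (an assumed smoothing rule, not the paper's formula).
        total = sum(rewards) or 1.0
        self.weights = [0.9 * w + 0.1 * (r / total)
                        for w, r in zip(self.weights, rewards)]
```

Unlike plain MQL's fixed combination, `adapt` lets the mediator re-weight modules as the environment changes.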

A Multi-Agent Platform Capable of Handling Ad Hoc Conversation Policies (Ad Hoc한 대화 정책을 지원하는 멀티 에이전트 플랫폼에 관한 연구)

  • Ahn, Hyung-Jun
    • The KIPS Transactions: Part D
    • /
    • v.11D no.5
    • /
    • pp.1177-1188
    • /
    • 2004
  • Multi-agent systems have been developed to support intelligent collaboration among distributed, independent software entities and are being widely used for various applications. For collaboration among agents, conversation policies (or interaction protocols) mutually agreed upon by the agents are used. In today's dynamic electronic market environment, conversation policies can change frequently as transaction methods in the market change, so the importance of ad hoc conversation policies is increasing. Existing agent platforms allow only a few standard or fixed conversation policies, which makes re-implementation inevitable for ad hoc conversation policies and leads to inefficiency and intricacy. This paper designs an agent platform that supports ad hoc conversation policies and presents a prototype implementation. The suggested system includes an exchangeable, interpretable conversation-policy model, a meta-conversation procedure for exchanging new conversation policies, and a mechanism for adaptively performing actual transactions with the exchanged conversation policies at run time.
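
An "exchangeable and interpretable" conversation policy, as described in the abstract above, can be sketched as a protocol represented as plain data and interpreted at run time instead of being hard-coded. The state and performative names below are made up for illustration; the paper's actual model is not reproduced.

```python
# Sketch: a conversation policy as a finite-state machine over message
# performatives, exchangeable as data and interpreted at run time.
# State/performative names are hypothetical.

policy = {  # a minimal request/response protocol, received from a peer agent
    "start":   {"request": "waiting"},
    "waiting": {"agree": "done", "refuse": "failed"},
}
TERMINAL = {"done", "failed"}

def step(state, performative):
    """Interpret one incoming message against the exchanged policy."""
    transitions = policy.get(state, {})
    if performative not in transitions:
        raise ValueError(f"message '{performative}' not allowed in state '{state}'")
    return transitions[performative]

s = step("start", "request")
s = step(s, "agree")
```

Because the policy is data, a new ad hoc protocol can be adopted by replacing the `policy` dictionary rather than re-implementing the agent.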

Reinforcement Learning Approach to Agents Dynamic Positioning in Robot Soccer Simulation Games

  • Kwon, Ki-Duk;Kim, In-Cheol
    • Proceedings of the Korea Society for Simulation Conference
    • /
    • 2001.10a
    • /
    • pp.321-324
    • /
    • 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is the machine learning paradigm in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. It therefore differs from supervised learning in that there is no presentation of input-output pairs as training examples. Furthermore, model-free reinforcement learning algorithms like Q-learning do not require defining or learning any model of the surrounding environment; nevertheless, they can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not scale up to more complex environments because of the intractably large state space. To address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning each module a different weight according to its contribution to rewards. Therefore, in addition to handling the large state space effectively, AMMQL shows higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents details of its application to dynamic positioning of robot soccer agents.

  • PDF

Adaptive Network Monitoring Strategy for SNMP-Based Network Management (SNMP 기반 네트워크관리를 위한 적응형 네트워크 모니터링 방법)

  • Cheon, Jin-young;Cheong, Jin-ha;Yoon, Wan-oh;Park, Sang-bang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.12C
    • /
    • pp.1265-1275
    • /
    • 2002
  • In network management systems there are two approaches: the centralized approach based on SNMP and the distributed approach based on mobile agents. Some management information changes with time, and the manager needs to monitor its value in real time. In such cases, polling is generally used in SNMP because the manager can query agents periodically. However, the polling scheme needs both a request and a response message for the management information every time, which increases network traffic. In this paper, we suggest an adaptive network monitoring method that reduces network traffic for SNMP-based network management. In the proposed strategy, each agent first decides its own monitoring period. The manager then collects these periods and either approves each agent's period without modification or adjusts it based on the total traffic generated by monitoring messages. After receiving the response message containing its monitoring period from the manager, each agent sends management information periodically without further requests from the manager. To evaluate the proposed method, we implemented it and compared its network traffic and monitoring quality with those of the general polling method.
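
The period-negotiation step described above can be sketched as the manager either approving the agents' proposed periods or stretching them to fit a traffic budget. This is a sketch under assumptions: the function name, the one-message-per-period model, and the uniform scaling rule are illustrative, not the paper's scheme.

```python
# Sketch: agents propose monitoring periods; the manager approves them
# unchanged if the total message rate fits the budget, otherwise stretches
# every period by a common factor so the rate meets the budget exactly.

def negotiate_periods(proposed, max_msgs_per_sec):
    """proposed: dict of agent -> period in seconds. Returns approved periods."""
    rate = sum(1.0 / p for p in proposed.values())  # one message per period
    if rate <= max_msgs_per_sec:
        return dict(proposed)  # approve without modification
    scale = rate / max_msgs_per_sec  # stretch all periods uniformly
    return {agent: p * scale for agent, p in proposed.items()}

# Proposed total rate is 1/1.0 + 1/2.0 = 1.5 msg/s, over a 1.0 msg/s budget,
# so both periods are stretched by a factor of 1.5.
approved = negotiate_periods({"a1": 1.0, "a2": 2.0}, max_msgs_per_sec=1.0)
```

After approval, each agent pushes its data on its period, removing the per-sample request message that plain polling would require.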

Design and implementation of Robot Soccer Agent Based on Reinforcement Learning (강화 학습에 기초한 로봇 축구 에이전트의 설계 및 구현)

  • Kim, In-Cheol
    • The KIPS Transactions: Part B
    • /
    • v.9B no.2
    • /
    • pp.139-146
    • /
    • 2002
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is the machine learning paradigm in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. It therefore differs from supervised learning in that there is no presentation of input-output pairs as training examples. Furthermore, model-free reinforcement learning algorithms like Q-learning do not require defining or learning any model of the surrounding environment; nevertheless, these algorithms can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem of monolithic reinforcement learning is that its straightforward applications do not scale up to more complex environments because of the intractably large state space. To address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning each module a different weight according to its contribution to rewards. Therefore, in addition to handling the large state space effectively, AMMQL shows higher adaptability to environmental changes than pure MQL. In this paper we use the AMMQL algorithm as a learning method for dynamic positioning of the robot soccer agent and implement a robot soccer agent system called Cogitoniks.

A Mobility Management Scheme by Considering User Mobility in Internet (인터넷에서 사용자 이동성을 고려한 이동성 제어 방식)

  • Woo, Mi-Ae
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.2C
    • /
    • pp.123-130
    • /
    • 2002
  • To cope with Internet host mobility in a cellular network environment, we propose an adaptive mobility management scheme that compensates for the drawbacks of Mobile IP, together with a protocol that supports it. The proposed scheme determines foreign-agent care-of addresses adaptively according to user mobility; it thus differs from other micro-mobility proposals, which statically assign the gateway of the domain as the foreign agent. With this scheme it is possible to effectively meet users' demands for different service qualities in the various environments found in cellular networks and to reduce the signaling overhead caused by the frequent handovers that occur in Mobile IP. The performance of the proposed scheme is examined by simulation; the results show that it can provide relatively stable points of attachment to the mobile node.

Finite-Time Sliding Mode Controller Design for Formation Control of Multi-Agent Mobile Robots (다중 에이전트 모바일 로봇 대형제어를 위한 유한시간 슬라이딩 모드 제어기 설계)

  • Park, Dong-Ju;Moon, Jeong-Whan;Han, Seong-Ik
    • The Journal of Korea Robotics Society
    • /
    • v.12 no.3
    • /
    • pp.339-349
    • /
    • 2017
  • In this paper, we present a finite-time sliding mode control (FSMC) with an integral finite-time sliding surface that applies concepts from graph theory to a distributed wheeled mobile robot (WMR) system. The kinematic and dynamic properties of the WMR system are considered simultaneously in designing the finite-time sliding mode controller. Consensus and formation control laws for distributed WMR systems are then derived using graph theory. The kinematic and dynamic controllers are applied simultaneously to compensate for the dynamic effects of the WMR system. Compared to conventional sliding mode control (SMC), fast convergence is assured, and a finite-time performance index is derived using an extended Lyapunov function with an adaptive law to describe the uncertainty. Numerical simulation results of formation control for WMR systems show the efficacy of the proposed controller.
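
The graph-based formation idea in the abstract above can be illustrated at the kinematic level with a discrete-time consensus update over a communication graph; the paper's finite-time sliding mode controller is not reproduced here, and the graph, gain, and desired offsets below are assumed values.

```python
# Illustrative consensus-based formation step (1-D, kinematic only):
# each agent moves toward its neighbors' positions shifted by the desired
# formation offsets, so x[i] - offsets[i] converges to a common value.

adjacency = {0: [1], 1: [0, 2], 2: [1]}   # assumed communication graph (a path)
offsets = {0: 0.0, 1: 1.0, 2: 2.0}        # assumed desired formation positions
K, STEPS = 0.2, 200                        # consensus gain and iteration count

x = [0.0, 5.0, 1.0]                        # initial agent positions
for _ in range(STEPS):
    x = [xi + K * sum((x[j] - offsets[j]) - (xi - offsets[i])
                      for j in adjacency[i])
         for i, xi in enumerate(x)]
# On convergence, the formation errors x[i] - offsets[i] agree across agents.
```

The sliding mode controller in the paper would drive each robot's dynamics to track such consensus references in finite time; this sketch only shows the consensus layer.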

An Intelligent Agent Based Supply Chain Operation Architecture under Adaptive Relationship between Multiple Suppliers and Customers (다수 수요자-공급자간 적응적 협력관계하의 지능형 에이전트 기반 공급망운영 구조)

  • 윤한성
    • Journal of Intelligence and Information Systems
    • /
    • v.9 no.1
    • /
    • pp.109-123
    • /
    • 2003
  • The relationship between suppliers and customers is treated as important not only in traditional business-to-business (BtoB) commerce but also in today's Internet environments. However, most Internet-based BtoB commerce services, such as customer-centric e-procurement, supplier-centric e-sales, and intermediary-centric e-marketplaces, focus mainly on selecting partners through bidding, auctions, and the like, and may therefore overlook the relationships between suppliers and customers. To overcome this problem, this paper proposes and appraises an intelligent agent-based supply chain operation architecture that takes the relationship and its adaptation into account.

  • PDF