• Title/Summary/Keyword: Multiple agent

Search results: 481

Adaptive and optimized agent placement scheme for parallel agent-based simulation

  • Jin, Ki-Sung;Lee, Sang-Min;Kim, Young-Chul
    • ETRI Journal
    • /
    • v.44 no.2
    • /
    • pp.313-326
    • /
    • 2022
  • This study presents a novel scheme for distributed and parallel simulations with optimized agent placement across simulation instances. Traditional parallel simulation often fails to deliver sufficient performance even when multiple resources are used, mainly because supporting parallelism incurs additional costs on top of the base simulation cost. We present a comprehensive study of parallel simulation architectures, execution flows, and characteristics, and identify critical challenges in optimizing large simulations for parallel instances. Based on a cost-benefit analysis, we propose a novel approach to overcome the performance constraints of agent-based parallel simulations, together with a solution that eliminates the synchronization cost among local instances. Our method ensures balanced performance through optimal deployment of agents to local instances and an adaptive agent placement scheme that follows the simulation load. Additionally, our empirical evaluation reveals that the proposed model achieves better performance than conventional methods under several conditions.
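The balanced-placement idea in this abstract can be illustrated with a simple greedy load balancer. This is a hypothetical toy sketch, not the paper's actual scheme; the agent names, loads, and instance count are invented for illustration:

```python
# Toy sketch of load-balanced agent placement across parallel simulation
# instances: heaviest agents are assigned first, each to the currently
# least-loaded instance (greedy longest-processing-time heuristic).
# All loads and names are hypothetical.

def place_agents(agent_loads, n_instances):
    """Return (per-instance agent lists, per-instance total loads)."""
    instances = [[] for _ in range(n_instances)]
    totals = [0.0] * n_instances
    for agent, load in sorted(agent_loads.items(), key=lambda kv: -kv[1]):
        i = totals.index(min(totals))   # pick the least-loaded instance
        instances[i].append(agent)
        totals[i] += load
    return instances, totals

loads = {"a1": 5.0, "a2": 3.0, "a3": 3.0, "a4": 2.0, "a5": 2.0, "a6": 1.0}
placement, totals = place_agents(loads, 2)
print(totals)  # both instances end up with load 8.0
```

An adaptive variant, as the abstract suggests, would re-run such a placement whenever the measured per-agent load drifts.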

Connection Management Scheme using Mobile Agent System

  • Lim, Hee-Kyoung;Bae, Sang-Hyun;Lee, Kwang-Ok
    • Journal of Integrative Natural Science
    • /
    • v.11 no.4
    • /
    • pp.192-196
    • /
    • 2018
  • The mobile agent paradigm can be exploited in a variety of ways, ranging from low-level system administration tasks to middleware to user-level applications. Mobile agents can be useful in building middleware services such as active mail systems and distributed collaboration systems. An active mail message is a program that interacts with its recipient through a multimedia interface and adapts the interaction session based on the recipient's responses. The mobile agent paradigm is well suited to this type of application, since it can carry a sender-defined session protocol along with the multimedia message. Mobile agent communication is possible via method invocation on virtual references; agents can make synchronous, one-way, or future-reply invocations. Multicasting is possible, since agents can be aggregated hierarchically into groups. A simple checkpointing facility has also been implemented. Another proposed solution is to use multi-agent computer systems to access, filter, evaluate, and integrate this information. We present the overall architectural framework, our agent design commitments, and the agent architecture that enables the above characteristics. In addition, the various kinds of information a mobile agent system needs, such as text, graphics, images, audio, and video, are organized into a large-capacity multimedia database system. However, such systems have problems in establishing connections over multiple subnetworks: no end-to-end connections, transmission delay due to ATM address resolution, and no QoS protocols. In this thesis, we propose a new connection management scheme to improve the connection management of mobile agent systems.

Using Potential Field for Modeling of the Work-environment and Task-sharing on the Multi-agent Cooperative Work

  • Makino, Tsutomu;Naruse, Keitarou;Yokoi, Hiroshi;Kakazu, Yikinori
    • Proceedings of the Korea Intelligent Information Systems Society Conference
    • /
    • 2001.01a
    • /
    • pp.37-44
    • /
    • 2001
  • This paper describes the modeling of a work environment for extracting abstract operation rules for cooperative work by multiple agents. We propose a modeling method using a potential field and apply it to a box-pushing problem, in which multiple agents must move a box from a start to a goal. The agents follow the potential values as they move and work in the environment, which is represented as a grid space. The potential field is generated by a Genetic Algorithm (GA) for each agent: the GA explores the positions of potential peaks in the grid space, and the potential values are then spread across the grid by a potential diffusion function in each cell. Because it is difficult to find a suitable placement of peak potential values by hand coding, we use an evolutionary computation approach, which can explore the large search space. We conduct experiments on environment modeling with the proposed method, verify the performance of the GA exploration, classify the acquired environment models into several types, and extract abstract operation rules. As a result, we identify several types of environment models and operation rules by observation, and the performance of the GA exploration is almost the same as that of a hand-coded setting, as the two show nearly equal performance in terms of the agents' energy consumption and the number of work steps from the start point to the goal point.
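The potential-following behavior described in this abstract can be sketched as hill climbing on a grid. This is an illustrative toy under assumed details: the diffusion function (exponential decay), grid size, and peak placement are hypothetical, and the GA that would place the peaks is not shown; the peak here is placed by hand near the goal:

```python
import math

SIZE = 10  # hypothetical grid size

def potential(cell, peaks):
    """Diffuse each peak over the grid; assumed exponential-decay diffusion."""
    x, y = cell
    return sum(p * math.exp(-math.hypot(x - px, y - py))
               for (px, py), p in peaks.items())

def step(agent, peaks):
    """Move the agent to the 4-neighbor (or stay put) with the highest potential."""
    x, y = agent
    neighbors = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx < SIZE and 0 <= y + dy < SIZE]
    return max(neighbors + [agent], key=lambda c: potential(c, peaks))

peaks = {(7, 7): 5.0}  # a peak a GA run might have placed near the goal
agent = (0, 0)
for _ in range(20):
    agent = step(agent, peaks)
print(agent)  # the agent climbs the field and settles on the peak at (7, 7)
```

With one peak per agent, each agent is steered independently; the GA's job in the paper is to find peak positions that make the resulting joint motion accomplish the box-pushing task.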


Effect of coloring agent on the color of zirconia

  • Kim, Kwanghyun;Noh, Kwantae;Pae, Ahran;Woo, Yi-Hyung;Kim, Hyeong-Seob
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.55 no.1
    • /
    • pp.18-25
    • /
    • 2017
  • Purpose: The aim of this study was to evaluate the effect of two types of coloring agents and the number of applications on the color of zirconia. Materials and methods: Monolithic zirconia specimens (15.7 mm × 15.7 mm × 2.0 mm) (n = 33) were prepared and divided into 11 groups. Each experimental group was coded a1-a5 or w1-w5 according to the type of coloring agent and the number of applications. Specimens with no coloring agent applied served as the control group. The color of each specimen was measured with a double-beam spectrophotometer, and the color difference (ΔE*ab) and translucency parameter (TP) were calculated. All data were analyzed with two-way ANOVA, the Scheffé multiple comparison test, Pearson correlation, and linear regression analysis. Results: As the number of applications increased, CIE L* values decreased while CIE b* values increased for both coloring agents. However, there was no significant difference in translucency parameter values. The color difference of each group ranged from 0.87 ΔE*ab to 9.43 ΔE*ab. Conclusion: In this study, the type of coloring agent and the number of applications did not affect the color difference of zirconia.
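The ΔE*ab metric reported in this abstract is the standard CIE76 Euclidean distance between two CIELAB colors. A minimal sketch (the example color values are hypothetical, not the study's measurements):

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference ΔE*ab between two CIELAB colors (L*, a*, b*)."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Hypothetical example: a control specimen vs. a colored one
print(round(delta_e_ab((70.0, 1.0, 15.0), (65.0, 2.0, 20.0)), 2))
```

Values below roughly 1 ΔE*ab are generally considered imperceptible, which puts the study's reported range of 0.87 to 9.43 in context.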

Localization and a Distributed Local Optimal Solution Algorithm for a Class of Multi-Agent Markov Decision Processes

  • Chang, Hyeong-Soo
    • International Journal of Control, Automation, and Systems
    • /
    • v.1 no.3
    • /
    • pp.358-367
    • /
    • 2003
  • We consider discrete-time factorial Markov Decision Processes (MDPs) in a multiple-decision-maker environment under the infinite-horizon average reward criterion, with a general joint reward structure but a factorial joint state transition structure. We introduce the "localization" concept, whereby a global MDP is localized for each agent so that each agent need only consider a local MDP defined over its own state and action spaces. Based on this, we present a gradient-ascent-like iterative distributed algorithm that converges to a local optimal solution of the global MDP. The solution is an autonomous joint policy in that each agent's decision is based only on its local state.
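The flavor of such a distributed local-improvement scheme can be shown with a toy coordinate-ascent loop: each agent, in turn, improves its own choice while the others hold theirs fixed, climbing to a local optimum of a shared objective. This is a hypothetical illustration of the general idea, not the paper's algorithm (there is no MDP dynamics here, and the reward function is invented):

```python
import random

random.seed(0)
n_agents, n_actions = 3, 4

def joint_reward(joint):
    """Hypothetical shared objective: reward agreement among agents."""
    return sum(1 for a in joint for b in joint if a == b)

joint = [random.randrange(n_actions) for _ in range(n_agents)]
improved = True
while improved:
    improved = False
    for i in range(n_agents):  # each agent optimizes its own component locally
        best = max(range(n_actions),
                   key=lambda a: joint_reward(joint[:i] + [a] + joint[i + 1:]))
        if joint_reward(joint[:i] + [best] + joint[i + 1:]) > joint_reward(joint):
            joint[i] = best
            improved = True

print(joint, joint_reward(joint))  # converges to all agents agreeing
```

As in the paper's setting, no agent ever reasons over the full joint space; each only evaluates its own alternatives, yet the process converges to a (local) optimum of the global objective.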

Learning soccer robot using genetic programming

  • Wang, Xiaoshu;Sugisaka, Masanori
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1999.10a
    • /
    • pp.292-297
    • /
    • 1999
  • Evolving an artificial agent is an extremely difficult, yet challenging, task. Most studies to date have centered on the single-agent learning problem; in our case, we use simulated soccer to investigate multi-agent cooperative learning. Considering the fundamental differences in learning mechanisms, existing reinforcement learning algorithms can be roughly classified into two types: those based on evaluation functions and those that search the policy space directly. Genetic Programming, developed from Genetic Algorithms, is one of the best-known approaches of the latter type. In this paper, we first give a detailed algorithm description and the data constructions necessary for learning single-agent strategies. We then extend the developed methods to the multiple-robot domain. We investigate and contrast two different methods, simple team learning and sub-group learning, and conclude the paper with some experimental results.


Location Analysis for Emergency Medical Service Vehicle in Sub District Area

  • Nanthasamroeng, Natthapong
    • Industrial Engineering and Management Systems
    • /
    • v.11 no.4
    • /
    • pp.339-345
    • /
    • 2012
  • This research aims to formulate a mathematical model and develop an algorithm for solving a location problem in emergency medical service vehicle parking. To find an optimal parking location with the least risk score, or risk priority number, calculated from severity, occurrence, detection, and the distance from the parking location to emergency patients, data were collected from the Pratoom sub-district Disaster Prevention and Mitigation Center from October 2010 to April 2011. The risk evaluation criteria were adapted from the Automotive Industry Action Group's criteria. An adaptive simulated annealing algorithm with multiple cooling schedules, called multi-agent simulated quenching (MASQ), is proposed for solving the problem in two algorithm schemes: dual-agent and triple-agent quenching. The results showed that the solutions obtained from both MASQ schemes were better than the traditional solution. The best locations, obtained from the MASQ dual-agent quenching scheme, were nodes #5 and #133. The risk score was reduced by 61%, from 6,022 to 2,371 points.
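The multi-agent quenching idea in this abstract can be sketched as several simulated-annealing runs with different, deliberately fast cooling schedules, keeping the best solution any agent finds. This is a hypothetical toy, not the paper's model: the node set, risk scores, temperatures, and cooling rates are all invented:

```python
import math
import random

random.seed(1)
nodes = list(range(20))
# Hypothetical per-node risk scores standing in for the paper's risk priority numbers
risk = {n: (n * 7) % 13 + random.random() for n in nodes}

def quench(t0, alpha, steps=200):
    """One annealing agent with an aggressive (quenching) cooling schedule."""
    cur = best = random.choice(nodes)
    t = t0
    for _ in range(steps):
        cand = random.choice(nodes)
        d = risk[cand] - risk[cur]
        # accept improvements always, worse moves with Boltzmann probability
        if d < 0 or random.random() < math.exp(-d / max(t, 1e-9)):
            cur = cand
        if risk[cur] < risk[best]:
            best = cur
        t *= alpha  # geometric cooling; small alpha = quenching
    return best

# Dual-agent scheme: two agents with different cooling rates, keep the better result
results = [quench(10.0, 0.90), quench(10.0, 0.80)]
best_node = min(results, key=lambda n: risk[n])
print(best_node, round(risk[best_node], 3))
```

Running agents with distinct schedules hedges against any single fast schedule freezing in a poor local minimum, which is the motivation for the dual- and triple-agent schemes.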

An Information Filtering Agent in a Flexible Message System

  • JUN, Youngcook;SHIRATORI, Norio
    • Educational Technology International
    • /
    • v.6 no.1
    • /
    • pp.65-79
    • /
    • 2005
  • In a widely distributed environment, many occasions arise when people need to filter information with email clients. Existing information agents such as Maxims and Message Assistant can filter email messages either autonomously or by user-defined rules. FlexMA, a variation of FAMES (Flexible Asynchronous Messaging System), is proposed as an information filtering agent. Agents in our system can be scaled up to meet users' various demands by controlling messages delivered among heterogeneous email clients. Several functionalities are split across individual agents through component configuration, with the addition of cooperation and negotiation among multiple agents. User-defined rules are collected and executed by these agents in a semi-autonomous manner. This paper demonstrates that this design is feasible in a flexible message system.
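The user-defined-rule mechanism described above can be sketched as a first-match rule engine. The rule format, message fields, and folder names here are hypothetical, not FlexMA's actual interfaces:

```python
# Toy sketch of rule-based message filtering: user-defined rules are collected
# and applied to each incoming message by a filtering agent; the first matching
# rule decides where the message goes.

def make_rule(field, keyword, action):
    return {"field": field, "keyword": keyword, "action": action}

def filter_message(message, rules, default="inbox"):
    """Apply the first matching rule; fall back to the default folder."""
    for rule in rules:
        if rule["keyword"] in message.get(rule["field"], ""):
            return rule["action"]
    return default

rules = [make_rule("subject", "invoice", "billing"),
         make_rule("sender", "noreply@", "bulk")]
msg = {"sender": "noreply@example.com", "subject": "Weekly digest"}
print(filter_message(msg, rules))  # routed to "bulk"
```

In a multi-agent setup like the one described, different agents could each hold a subset of such rules and negotiate which one handles a given message.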

Deep reinforcement learning for a multi-objective operation in a nuclear power plant

  • Junyong Bae;Jae Min Kim;Seung Jun Lee
    • Nuclear Engineering and Technology
    • /
    • v.55 no.9
    • /
    • pp.3277-3290
    • /
    • 2023
  • Nuclear power plant (NPP) operations involving multiple objectives and devices are still performed manually by operators despite the potential for human error. These operations could be automated to reduce the burden on operators; however, classical approaches may not be suitable for such multi-objective tasks. An alternative approach is deep reinforcement learning (DRL), which has been successful in automating various complex tasks and has been applied to automating certain NPP operations. Despite this recent progress, previous studies using DRL for NPP operations have been limited in handling complex multi-objective operations with multiple devices efficiently. This study proposes a novel DRL-based approach that addresses these limitations by employing a continuous action space and straightforward binary rewards, supported by the adoption of a soft actor-critic and hindsight experience replay. The feasibility of the proposed approach was evaluated by controlling the pressure and volume of the reactor coolant while heating the coolant during NPP startup. The results show that the proposed approach can train an agent with a proper strategy for effectively achieving multiple objectives through the control of multiple devices. Moreover, hands-on testing demonstrates that the trained agent can handle untrained objectives, such as cooldown, with substantial success.
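The combination of binary rewards with hindsight experience replay mentioned in this abstract can be illustrated in a few lines: failed episodes are relabeled with the goal the agent actually reached, so sparse binary rewards still yield learning signal. This is a generic HER sketch, not the paper's implementation; the transition format, state values, and tolerance are hypothetical:

```python
# Toy sketch of hindsight experience replay (HER) relabeling with a binary
# goal-conditioned reward, as used in goal-conditioned DRL.

def binary_reward(achieved, goal, tol=0.05):
    """1 if the achieved state is within tolerance of the goal, else 0."""
    return 1.0 if abs(achieved - goal) <= tol else 0.0

def her_relabel(episode):
    """Relabel each transition with the episode's final achieved state as the
    goal, turning a failed episode into successful training examples."""
    final = episode[-1]["achieved"]
    return [dict(t, goal=final, reward=binary_reward(t["achieved"], final))
            for t in episode]

# Hypothetical episode that failed to reach its original goal of 1.0
episode = [{"achieved": 0.2, "goal": 1.0, "reward": 0.0},
           {"achieved": 0.5, "goal": 1.0, "reward": 0.0}]
relabeled = her_relabel(episode)
print(relabeled[-1]["reward"])  # the final step now counts as a success
```

In the paper's setting the "achieved" quantities would be measured plant variables such as coolant pressure and volume, and the relabeled transitions would feed a soft actor-critic learner.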

Task Reallocation in Multi-agent Systems Based on Vickrey Auctioning

  • Kim, In-Cheol
    • The KIPS Transactions:PartB
    • /
    • v.8B no.6
    • /
    • pp.601-608
    • /
    • 2001
  • The automated assignment of multiple tasks to executing agents is a key problem in the area of multi-agent systems. In many domains, significant savings can be achieved by reallocating tasks among agents with different task-handling costs. Automating task reallocation among self-interested agents requires that the individual agents use a common negotiation protocol prescribing how they must interact in order to reach an agreement on "who does what". In this paper, we introduce the multi-agent Traveling Salesman Problem (TSP) as an example of the task reallocation problem and suggest the Vickrey auction as an inter-agent negotiation protocol for solving it. In general, auction-based protocols have several advantageous features: they are easy to implement, they enforce an efficient assignment process, and they guarantee an agreement even in scenarios where the agents possess very little domain-specific knowledge. Vickrey auctions have the additional advantage that each interested agent bids only once and that the dominant strategy is to bid one's true valuation. To apply this market-based protocol to task reallocation among self-interested agents, we define the profit of each agent, the goal of negotiation, the tasks to be traded through auctions, the bidding strategy, and the sequence of auctions. Through several experiments with sample multi-agent TSPs, we show that the task allocation improves monotonically at each step and that an optimal task allocation is finally found with this protocol.
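The core auction step can be sketched in a few lines. In a procurement-style Vickrey auction for a task, each agent bids its cost of taking the task on, the lowest bidder wins, and the winner is paid the second-lowest bid, which is what makes truthful bidding the dominant strategy. The agent names and costs below are hypothetical:

```python
# Toy sketch of a second-price (Vickrey) auction for task reallocation.

def vickrey_award(bids):
    """bids: {agent: bid cost}. Returns (winner, payment).
    The lowest bidder wins and is paid the second-lowest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    winner, _ = ranked[0]
    payment = ranked[1][1]  # second-lowest bid
    return winner, payment

bids = {"agent_a": 12.0, "agent_b": 9.5, "agent_c": 14.0}
winner, payment = vickrey_award(bids)
print(winner, payment)  # agent_b wins the task and is paid 12.0
```

In the paper's multi-agent TSP setting, each bid would be the marginal tour cost of inserting the auctioned city into the bidder's route; repeating such auctions over tasks yields the monotonic improvement reported in the experiments.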
