• Title/Abstract/Keywords: Multi-Agents

Search Results: 545

Effects of Multi-Complex Agent Addition on Characteristics of Electroless Ni-P Solution (복합 착화제 첨가가 무전해 Ni-P 도금액의 특성에 미치는 영향)

  • Lee, Hong-Kee;Lee, Ho-Nyun;Jeon, Jun-Mi;Hur, Jin-Young
    • Journal of the Korean institute of surface engineering
    • /
    • v.43 no.2
    • /
    • pp.111-120
    • /
    • 2010
  • In this study, the effects of multi-complex-agent addition on the characteristics of an electroless Ni-P plating solution are investigated. The species and concentration of the complexing agents are the major factors controlling the deposition rate, P content, and surface morphology of the plated films. Adipic acid increases the deposition rate regardless of whether it is added as a single or a multi-complex agent. Lactic acid, however, effectively increases the deposition rate only when added together with adipic acid or sodium succinate. In addition, sodium citrate and malic acid stabilize the plating solution and lower the deposition rate because of their high complexing ability. It is therefore suggested that Ni-P plating solutions suited to diverse applications can be developed systematically using the database from this study.

A Performance Improvement Technique for Nash Q-learning using Macro-Actions (매크로 행동을 이용한 내시 Q-학습의 성능 향상 기법)

  • Sung, Yun-Sik;Cho, Kyun-Geun;Um, Ky-Hyun
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.3
    • /
    • pp.353-363
    • /
    • 2008
  • A multi-agent system has a longer learning period and larger state spaces than a single-agent system. In this paper, we suggest a new method to reduce the learning time of Nash Q-learning in a multi-agent environment by applying macro-actions. In the Nash Q-learning scheme, when agents select actions, rewards are accumulated as in macro-actions. In the experiments, we compare Nash Q-learning using macro-actions with ordinary Nash Q-learning. First, we observed how many times the agents achieve their goals: agents using Nash Q-learning with 4 macro-actions performed 9.46% better than Nash Q-learning using only 4 primitive actions. Second, when agents use macro-actions, Q-values are accumulated 2.6 times more. Finally, agents using macro-actions select about 44% fewer actions. As a result, agents select fewer actions, macro-actions improve the Q-value updates, and the agents' learning speed improves.

  • PDF
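
The entry above describes accumulating the rewards collected during a macro-action before performing a single Q-update. A minimal single-agent sketch of that idea follows; it omits the Nash-equilibrium joint-action selection, and the environment interface, macro definitions, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import defaultdict

# Simplified sketch: rewards gathered while a macro-action executes are
# accumulated (discounted) and applied in one Q-update. The env.step()
# interface, macro set, and constants are assumptions.

GAMMA, ALPHA, EPSILON = 0.95, 0.1, 0.1
Q = defaultdict(float)                       # Q[(state, macro)] -> value

# a macro-action is a fixed sequence of primitive actions
MACROS = {
    "up4":    ["up"] * 4,
    "right4": ["right"] * 4,
    "down4":  ["down"] * 4,
    "left4":  ["left"] * 4,
}

def choose_macro(state):
    """Epsilon-greedy selection over macro-actions."""
    if random.random() < EPSILON:
        return random.choice(list(MACROS))
    return max(MACROS, key=lambda m: Q[(state, m)])

def run_macro(env, state, macro):
    """Execute the macro's primitives, accumulating the discounted reward."""
    total, discount, done = 0.0, 1.0, False
    for a in MACROS[macro]:
        state, r, done = env.step(state, a)  # assumed environment interface
        total += discount * r
        discount *= GAMMA
        if done:
            break
    return state, total, discount, done

def update(env, state, macro):
    """One accumulated-reward Q-update for the whole macro-action."""
    next_state, reward, discount, done = run_macro(env, state, macro)
    best_next = 0.0 if done else max(Q[(next_state, m)] for m in MACROS)
    target = reward + discount * best_next
    Q[(state, macro)] += ALPHA * (target - Q[(state, macro)])
    return next_state, done
```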

Smart Agents and Multimedia Systems

  • Kim, Steven H.
    • Proceedings of the Korea Database Society Conference
    • /
    • 1997.10a
    • /
    • pp.215-269
    • /
    • 1997
  • Outline: Introduction · Multimedia (Types of Data, Motivation, Key Issues, Hardware Products, Application Areas) · Agents (Rationale for Agents, Sedentary vs. Mobile, Functional Categories, Application Areas) · Data Mining (2-D Framework for Data Mining Tools, Classification of Tools, Application Areas, Learning Methodologies: Case-Based Reasoning, Neural Networks, Statistical Learning with Orthogonal Arrays, Multi-strategy Learning) · Case Study: Finbot · Conclusion

  • PDF

Q-learning for intersection traffic flow Control based on agents

  • Zhou, Xuan;Chong, Kil-To
    • Proceedings of the IEEK Conference
    • /
    • 2009.05a
    • /
    • pp.94-96
    • /
    • 2009
  • In this paper, we present a Q-learning method for adaptive traffic signal control on the basis of multi-agent technology. The structure is composed of six phase agents and one intersection agent, and a wireless communication network provides the possibility of cooperation among the agents. Q-learning, a kind of reinforcement learning, is adopted as the control mechanism because it can acquire optimal control strategies from delayed rewards; furthermore, we adopt a dynamic learning method instead of a static one, which is more practical. Simulation results indicate that the approach is more effective than a traditional signal system.

  • PDF
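
As a rough illustration of the control mechanism described in the entry above, the sketch below shows a tabular Q-learning update an intersection-level agent might perform when choosing signal phases. The state encoding (binned queue lengths), the reward (negative waiting time), and all constants are assumptions, not taken from the paper.

```python
import random
from collections import defaultdict

# Illustrative only: one agent choosing among signal phases with tabular
# Q-learning. State encoding, reward definition, and constants are assumed.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.05
PHASES = ["NS_green", "EW_green", "NS_left", "EW_left"]
Q = defaultdict(float)

def encode_state(queue_lengths, bin_size=5):
    """Discretize per-approach queue lengths into a hashable state."""
    return tuple(q // bin_size for q in queue_lengths)

def select_phase(state):
    """Epsilon-greedy phase selection."""
    if random.random() < EPSILON:
        return random.choice(PHASES)
    return max(PHASES, key=lambda p: Q[(state, p)])

def q_update(state, phase, waiting_time, next_state):
    """Delayed reward: less total waiting at the intersection is better."""
    reward = -waiting_time
    best_next = max(Q[(next_state, p)] for p in PHASES)
    Q[(state, phase)] += ALPHA * (reward + GAMMA * best_next - Q[(state, phase)])
```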

A Study on Negotiation-based Scheduling using Intelligent Agents (지능형 에이전트를 이용한 협상 기반의 일정계획에 관한 연구)

  • 김성희;강무진
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2000.11a
    • /
    • pp.348-352
    • /
    • 2000
  • Intelligent agents represent parts and manufacturing resources, which cooperate, negotiate, and compete with each other. The negotiation between agents is generally based on the Contract-Net-Protocol. This paper describes a new approach to negotiation-based job shop scheduling. The proposed method includes a multi-negotiation strategy as well as single negotiation. A case study comparing various negotiation strategies is also given.

  • PDF
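
The entry above is built on Contract-Net-style negotiation between part agents and resource agents. The sketch below shows the basic announce-bid-award cycle; the agent classes, the bid criterion (earliest finish time), and all names are illustrative assumptions, not the paper's design.

```python
from dataclasses import dataclass

# Minimal Contract-Net-style negotiation sketch for job-shop scheduling.
# The bid criterion and class structure are assumptions.

@dataclass
class Bid:
    machine: str
    start: float
    finish: float

class MachineAgent:
    def __init__(self, name):
        self.name = name
        self.busy_until = 0.0

    def bid(self, task_duration):
        start = self.busy_until
        return Bid(self.name, start, start + task_duration)

    def award(self, bid):
        self.busy_until = bid.finish

class PartAgent:
    """Announces a task, collects bids, and awards the best one."""
    def negotiate(self, machines, task_duration):
        bids = [m.bid(task_duration) for m in machines]        # announce + collect
        best = min(bids, key=lambda b: b.finish)                # evaluate bids
        next(m for m in machines if m.name == best.machine).award(best)
        return best

machines = [MachineAgent("M1"), MachineAgent("M2")]
print(PartAgent().negotiate(machines, task_duration=3.0))  # awarded to the earliest finisher
```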

Multi-agent Negotiation System for Class Scheduling

  • Gwon Cheol Hyeon;Park Seong Ju
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 2002.05a
    • /
    • pp.863-870
    • /
    • 2002
  • Current class scheduling has difficulties reflecting students' preferences for the classes they want to take and forecasting class demand. It is also usually repetitive and tedious work to allocate classes to limited time slots and resources. Although many research studies on task allocation and meeting scheduling address similar problems, they have limitations when applied directly to the class-scheduling problem. In this paper, a class scheduling system using multi-agent-based negotiation is suggested. The system consists of student agents, professor agents, and negotiation agents; each agent acts in accordance with its human user's preferences and performs the repetitive and tedious process on the user's behalf. The suggested system uses a negotiation-cost concept to derive coalitions in the agents' negotiation. The negotiation cost is derived from users' bidding prices on classes, where each bidding price represents a user's preference for a selected class. Experiments were performed to verify the negotiation model in the scheduling system. The results showed that it can produce a feasible schedule that minimizes the negotiation cost and reflects the users' preferences. The performance of the experiments was evaluated by a class success ratio.

  • PDF
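
To make the negotiation-cost idea in the entry above concrete, here is a small sketch in which students bid on class sections and the scheduler assigns seats so the total forgone preference is minimized. The data, capacities, cost definition, and greedy strategy are assumptions for illustration, not the paper's algorithm.

```python
# Each student bids on sections; a higher bid means stronger preference.
# "Negotiation cost" is sketched here as the preference lost by not
# getting one's best bid. All numbers and the greedy order are assumed.

bids = {                       # student -> {section: bidding price}
    "alice": {"DB-A": 80, "DB-B": 20},
    "bob":   {"DB-A": 60, "DB-B": 40},
    "carol": {"DB-A": 90, "DB-B": 10},
}
capacity = {"DB-A": 2, "DB-B": 2}

def negotiation_cost(student, section):
    """Cost = best possible bid minus the bid on the assigned section."""
    return max(bids[student].values()) - bids[student][section]

assignment, total_cost = {}, 0
# serve students with the largest bid spread first (they lose most otherwise)
for student in sorted(bids, key=lambda s: -(max(bids[s].values()) - min(bids[s].values()))):
    section = max((sec for sec in bids[student] if capacity[sec] > 0),
                  key=lambda sec: bids[student][sec])
    capacity[section] -= 1
    assignment[student] = section
    total_cost += negotiation_cost(student, section)

print(assignment, "total negotiation cost:", total_cost)
```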

Transformer-Based MUM-T Situation Awareness: Agent Status Prediction (트랜스포머 기반 MUM-T 상황인식 기술: 에이전트 상태 예측)

  • Jaeuk Baek;Sungwoo Jun;Kwang-Yong Kim;Chang-Eun Lee
    • The Journal of Korea Robotics Society
    • /
    • v.18 no.4
    • /
    • pp.436-443
    • /
    • 2023
  • With the advancement of robot intelligence, the concept of manned-unmanned teaming (MUM-T) has garnered considerable attention in military research. In this paper, we present a transformer-based architecture for predicting the health status of agents, using a multi-head attention mechanism to effectively capture the dynamic interaction between friendly and enemy forces. To this end, we first introduce a framework for generating a dataset of battlefield situations. These situations are simulated on a virtual simulator, allowing for a wide range of scenarios without any restrictions on the number of agents, their missions, or their actions. We then define the crucial elements for characterizing the battlefield, with a specific emphasis on agent status. The battlefield data are fed into the transformer architecture, with classification heads on top of the transformer encoder layers to categorize the health status of each agent. We conduct ablation tests to assess the significance of various factors in determining agents' health status in battlefield scenarios. With 3-fold cross-validation, the experimental results demonstrate that our model achieves a prediction accuracy of over 98%. In addition, the performance of our model is compared with that of other models such as a convolutional neural network (CNN) and a multilayer perceptron (MLP), and the results establish the superiority of our model.
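
A minimal PyTorch sketch of the kind of architecture the abstract describes is shown below: transformer encoder layers over per-agent feature tokens with a classification head per agent. The feature dimension, number of heads and layers, and the number of status classes are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Sketch: self-attention over agent tokens (friendly and enemy), then a
# per-agent classification head for health status. Sizes are assumptions.

class AgentStatusTransformer(nn.Module):
    def __init__(self, feat_dim=32, d_model=128, n_heads=4, n_layers=3, n_classes=3):
        super().__init__()
        self.embed = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)    # per-agent status logits

    def forward(self, x):
        # x: (batch, n_agents, feat_dim); attention captures agent interactions
        h = self.encoder(self.embed(x))
        return self.head(h)                           # (batch, n_agents, n_classes)

model = AgentStatusTransformer()
logits = model(torch.randn(8, 10, 32))                # 8 situations, 10 agents each
print(logits.shape)                                   # torch.Size([8, 10, 3])
```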

Location Management & Message Delivery Protocol for Multi-region Mobile Agents in Multi-region Environment (다중 지역 환경에서 이동 에이전트를 위한 위치 관리 및 메시지 전달 기법)

  • Choi, Sung-Jin;Baik, Maeng-Soon;Song, Ui-Sung;Hwang, Chong-Sun
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.11
    • /
    • pp.545-561
    • /
    • 2007
  • A location management and message delivery protocol is fundamental to the further development of mobile agent systems in a multi-region mobile agent computing environment, in order to control mobile agents and guarantee message delivery between them. However, previous works have several problems when applied to a multi-region environment. First, the cost of location management and message delivery is relatively high. Second, a tracking problem arises. Finally, cloned mobile agents and parent-child mobile agents are not handled with respect to location management and message delivery. In this paper, we present the HB (Home-Blackboard) protocol, a new location management and message delivery protocol for a multi-region mobile agent computing environment. The HB protocol places a region server in each region and manages the location of mobile agents through intra-region and inter-region migration. It also places a blackboard in each region server and delivers messages to mobile agents when the region server receives a location update from them. The HB protocol can decrease the cost of location updates and message passing and solve the tracking problem with low communication cost. It also handles the location management and message passing of cloned mobile agents and parent-child mobile agents, so that it can guarantee message delivery to these mobile agents without delivering duplicate messages.
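
A small sketch of the blackboard idea in the abstract above follows: a region server buffers messages for a mobile agent and hands them over once the agent reports a location update after migrating. The class layout, method names, and delivery semantics shown are assumptions, not the protocol's actual specification.

```python
from collections import defaultdict

# Illustrative Home-Blackboard-style region server: senders post to the
# blackboard instead of chasing a migrating agent; pending messages are
# flushed to the agent on its next location update. Names are assumed.

class RegionServer:
    def __init__(self, region_id):
        self.region_id = region_id
        self.locations = {}                      # agent_id -> current host in region
        self.blackboard = defaultdict(list)      # agent_id -> undelivered messages

    def post_message(self, agent_id, message):
        """Messages wait on the blackboard; no tracking of the moving agent."""
        self.blackboard[agent_id].append(message)

    def location_update(self, agent_id, host):
        """Called after intra- or inter-region migration; flush pending messages."""
        self.locations[agent_id] = host
        pending, self.blackboard[agent_id] = self.blackboard[agent_id], []
        return pending                            # delivered once, no duplicates

server = RegionServer("region-1")
server.post_message("agent-42", "rendezvous at node B")
print(server.location_update("agent-42", "host-7"))   # ['rendezvous at node B']
```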

Correlation Between Skin Irritation and Cytotoxicity of Anti-wrinkle Agents (화장품 원료의 피부자극성과 세포독성의 관련성)

  • 이은희;이종권;김용규;박기숙;안광수
    • YAKHAK HOEJI
    • /
    • v.45 no.3
    • /
    • pp.310-319
    • /
    • 2001
  • To compare the skin irritation and cytotoxicity of anti-wrinkle agents, we examined the skin irritation of six anti-wrinkle agents (ascorbic acid, glycolic acid, all-trans-retinoic acid, ginseng extract, retinol, EB) in New Zealand white rabbits. The cytotoxicity of these agents was determined by the MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] assay at multiple time points in cultured HaCaT cells, an immortalized human keratinocyte line. We then analyzed the correlation between skin irritation and cytotoxicity by Spearman's rank correlation analysis. All-trans-retinoic acid showed the highest primary irritation index (0.92) in the skin irritation test. Although none of the six agents was an irritant, retinol was the most cytotoxic. The correlation between skin irritation and cytotoxicity ($IC_{50}$) at different time points was 0.814, 0.757, 0.814, and 0.7 at 3, 24, 48, and 72 h, respectively. We also found that the $IC_{20}$ and $IC_{80}$ of these agents showed a similar correlation with skin irritation. These results demonstrate a close correlation between skin irritation and the $IC_{50}$ of anti-wrinkle agents determined by MTT in HaCaT cells at early time points, as well as the $IC_{20}$ value. The $IC_{50}$ at an early time point or the $IC_{20}$ value may therefore be a reliable alternative determinant of skin irritation.

  • PDF
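
The analysis in the entry above is a Spearman rank correlation between in-vivo irritation indices and in-vitro cytotoxicity values. The sketch below shows how such a correlation is computed; the numbers are placeholders, not the paper's data.

```python
from scipy.stats import spearmanr

# Placeholder data: one primary irritation index and one 3-h MTT IC50 per
# agent. These values are invented for illustration only.

irritation_index = [0.92, 0.50, 0.33, 0.25, 0.17, 0.08]   # one value per agent
ic50_3h          = [0.8, 2.5, 4.0, 5.5, 9.0, 12.0]        # lower IC50 = more cytotoxic

rho, p_value = spearmanr(irritation_index, ic50_3h)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
# With these placeholders rho is negative: more irritating agents have lower IC50.
```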

Research of Foresight Knowledge by CMAC based Q-learning in Inhomogeneous Multi-Agent System

  • Hoshino, Yukinobu;Sakakura, Akira;Kamei, Katsuari
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2003.09a
    • /
    • pp.280-283
    • /
    • 2003
  • The purpose of our research is the acquisition of cooperative behaviors in an inhomogeneous multi-agent system. In this research, we use the fire panic problem as the experimental environment. In the fire panic problem, a fire exists in the environment and spreads according to a fixed rule at each step of the agents' behavior. The purpose of each agent is to reach an established goal without touching the fire. The fire intensifies every few steps and is uncertain from the agent's point of view, so the agent has to avoid the spreading fire while acquiring the behavior needed to reach the goal. In this paper, we observe how agents escape from the fire by cooperating with other agents. For this problem, we propose a unique CMAC-based Q-learning system for inhomogeneous multi-agent systems.

  • PDF
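
To illustrate the CMAC component named in the entry above, the sketch below uses tile coding as a Q-value function approximator: several overlapping tilings map a continuous state to coarse cells, and the Q-value is the sum of the active tiles' weights. The tiling parameters, hashing scheme, and learning constants are assumptions, not the paper's setup.

```python
import numpy as np

# Minimal hashed tile-coding (CMAC) Q-function. Sizes and offsets are assumed.

class CMACQ:
    def __init__(self, n_tilings=4, tile_size=1.0, n_actions=4, table_size=4096):
        self.n_tilings, self.tile_size = n_tilings, tile_size
        self.table_size = table_size
        self.w = np.zeros((table_size, n_actions))   # one weight table per action

    def _tiles(self, state):
        """Index of the active tile in each of the offset tilings."""
        idxs = []
        for t in range(self.n_tilings):
            offset = t * self.tile_size / self.n_tilings
            coords = tuple(int((s + offset) // self.tile_size) for s in state)
            idxs.append(hash((t, coords)) % self.table_size)
        return idxs

    def q(self, state, action):
        """Q-value = sum of the weights of the active tiles."""
        return sum(self.w[i, action] for i in self._tiles(state))

    def update(self, state, action, target, alpha=0.1):
        """Move the active tiles' weights toward the TD target."""
        err = target - self.q(state, action)
        for i in self._tiles(state):
            self.w[i, action] += alpha / self.n_tilings * err

cmac = CMACQ()
s, a = (2.3, 4.7), 1
cmac.update(s, a, target=1.0)
print(round(cmac.q(s, a), 3))    # value moves toward the target after one update
```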