• Title/Summary/Keyword: intelligent action


Intelligent Robot Design: Intelligent Agent Based Approach (지능로봇: 지능 에이전트를 기초로 한 접근방법)

  • Kang, Jin-Shig
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.4
    • /
    • pp.457-467
    • /
    • 2004
  • In this paper, a robot is considered as an agent, and a robot structure is presented that consists of multiple sub-agents with the diverse capabilities required of a robot, such as perception, intelligence, and action. Each sub-agent is in turn composed of micro-agents ($\mu$-agents), each responsible for an elementary action. The robot control structure has two sub-agents: a behavior-based reactive controller and an action-selection sub-agent. The action-selection sub-agent selects an action based on high-level actions and high performance, and it has a learning mechanism based on reinforcement learning. With the presented robot structure, it is easy to give intelligence to each element of action, and it offers a new approach to multi-robot control. The presented robot is simulated for two goals, chaotic exploration and obstacle avoidance, and it is fabricated using an 8-bit microcontroller and tested experimentally.
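The two-layer control structure described in this abstract can be sketched as follows. This is a minimal illustration under assumed names (the paper's actual sub-agent interfaces are not given): a reactive controller handles reflexes, and an action-selection sub-agent arbitrates among behaviors using preferences that reinforcement learning would update.

```python
# Minimal sketch (assumed names) of the two-sub-agent control structure:
# a behavior-based reactive controller for low-level safety, plus an
# action-selection sub-agent choosing among micro-agent behaviors.

def reactive_controller(sensors):
    # Reflex layer: obstacle avoidance overrides everything else.
    if sensors["obstacle_dist"] < 0.2:
        return "turn_away"
    return None

def action_selector(sensors, preferences):
    # Deliberative layer: pick the behavior with the highest learned
    # preference; in the paper these would be updated by RL.
    return max(preferences, key=preferences.get)

def step(sensors, preferences):
    reflex = reactive_controller(sensors)
    return reflex if reflex is not None else action_selector(sensors, preferences)

print(step({"obstacle_dist": 0.1}, {"explore": 0.8, "rest": 0.1}))  # turn_away
print(step({"obstacle_dist": 1.0}, {"explore": 0.8, "rest": 0.1}))  # explore
```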

Intelligent Characters for Fighting Action Games applied Energy Points (대전형 액션 게임에서 에너지 점수를 도입한 지능 캐릭터)

  • Lee Myun-Sub;Cho Byeong-Heon;Jung Sung-Hoon;Seong Yeong-Rak;Oh Ha-Ryoung
    • The KIPS Transactions:PartB
    • /
    • v.13B no.4 s.107
    • /
    • pp.449-456
    • /
    • 2006
  • This paper proposes intelligent characters for fighting action games to which energy points are applied for a more realistic implementation than in previous research. The intelligent characters decide their actions in consideration of their energy level as well as the current action, the step of the action, the distance, and the past actions of opponent characters that were used in existing intelligent characters. We used two types of energy, HP (Health Point) and MP (Mana Point), which are frequently employed in recent online games. We experimented with the proposed intelligent characters to investigate whether they learn proper actions and cope with opponent characters in consideration of their energy levels. Experimental results showed that the intelligent characters reacted with the best actions to obtain a high score when their energy was sufficient; otherwise, they took actions that recharge their energy. From this observation, we conclude that the proposed intelligent characters work well and take effective actions in consideration of their energy.
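The energy-aware decision rule this abstract describes can be sketched roughly as follows. All names, costs, and thresholds here are illustrative assumptions, not the paper's code: the character takes its best-valued affordable action when MP is sufficient, and otherwise recharges.

```python
# Hypothetical sketch of energy-aware action selection: prefer the
# highest-valued action affordable with current MP; recharge otherwise.

def select_action(q_values, mp, mp_costs, recharge_action="recharge", mp_threshold=10):
    """Pick the highest-valued affordable action, or recharge when MP is low."""
    affordable = {a: q for a, q in q_values.items() if mp_costs.get(a, 0) <= mp}
    if mp < mp_threshold or not affordable:
        return recharge_action
    return max(affordable, key=affordable.get)

q = {"punch": 0.4, "fireball": 0.9, "block": 0.2}
costs = {"punch": 0, "fireball": 30, "block": 0}
print(select_action(q, mp=50, mp_costs=costs))   # fireball (affordable, best)
print(select_action(q, mp=5, mp_costs=costs))    # recharge (MP too low)
```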

The division of action situation of collision avoidance in intelligent collision avoidance system

  • Zheng, Zhongyi;Wu, Zhaolin
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2001.10a
    • /
    • pp.114-119
    • /
    • 2001
  • Based on an investigation of mariners' behavior in collision avoidance, the actual practice of collision avoidance at sea, and research on the uncertainty of the collision-avoidance actions adopted by two encountering vessels, and with the aim of reducing uncoordinated collision-avoidance actions between two encountering vessels, the concept of an action situation between two encountering vessels is proposed on the basis of the different encounter situations defined in the international convention for preventing collisions at sea. The course-alteration directions each encountering vessel should adopt to avoid collision are explained for the different action situations. A mechanism for avoiding and reducing uncoordinated action is thus established in the intelligent collision avoidance system, which is important for research on such systems.


Neural Networks Intelligent Characters for Learning and Reacting to Action Patterns of Opponent Characters In Fighting Action Games (대전 게임에서 상대방 캐릭터의 행동 패턴을 학습하여 대응하는 신경망 지능 캐릭터)

  • Cho, Byeong-Heon;Jung, Sung-Hoon;Seong, Yeong-Rak;Oh, Ha-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.6
    • /
    • pp.69-80
    • /
    • 2004
  • This paper proposes a method for intelligent characters to learn the action patterns of opponent characters. For learning action patterns, intelligent characters learn the past actions as well as the current actions of opponent characters. Therefore, the intelligent characters react more properly than ones without knowledge of action patterns. In addition, this paper proposes a method to learn moving actions, whose fitness is hard to evaluate. To evaluate the performance of the proposed algorithm, we experiment with four repeated action patterns in a game similar to real games. The results show that the intelligent characters learn the optimal actions for those action patterns and react properly against opponent characters taking random actions. The proposed method can be applied to various games in which characters confront each other, e.g. massively multiplayer online games.
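Feeding past opponent actions to a network, as this abstract describes, amounts to widening the input vector with a history window. The following is an illustrative sketch only (the action-set size, history length, and distance normalization are assumptions, not the paper's values):

```python
# Illustrative sketch: one-hot encode the opponent's last k actions and
# concatenate them with a normalized distance into one network input vector.
N_ACTIONS = 6   # assumed size of the opponent's action set

def encode_state(action_history, distance, k=3):
    """One-hot encode the last k opponent actions plus a normalized distance."""
    vec = [0.0] * (k * N_ACTIONS + 1)
    for i, a in enumerate(action_history[-k:]):
        vec[i * N_ACTIONS + a] = 1.0
    vec[-1] = distance / 100.0   # assumed maximum arena distance of 100 units
    return vec

x = encode_state([2, 5, 1], distance=40.0)
print(len(x))   # 19 input features for k=3 history steps
```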

Fighting Action Games applied Energy Concepts (에너지 개념을 도입한 대전형 액션 게임)

  • Lee Myun-Sub
    • Journal of the Korea Computer Industry Society
    • /
    • v.7 no.3
    • /
    • pp.163-170
    • /
    • 2006
  • This paper proposes intelligent characters for fighting action games to which energy concepts are applied for a more realistic implementation than in previous research. The intelligent characters decide their actions in consideration of their energy level as well as the current action, the step of the action, the distance, and the past actions of opponent characters that were used in existing intelligent characters. We used two types of energy, HP (Health Point) and MP (Mana Point), which are frequently employed in recent online games. We experimented with the proposed intelligent characters to investigate whether they learn proper actions and cope with opponent characters in consideration of their energy levels. Experimental results showed that the intelligent characters reacted with the best actions to obtain a high score when their energy was sufficient; otherwise, they took actions that recharge their energy. From this observation, we conclude that the proposed intelligent characters work well and take effective actions in consideration of their energy.


An Implementation of Intelligent Game Characters using Neural Networks (신경망을 이용한 지능형 게임 캐릭터의 구현)

  • Cho Byeong-heon;Jung Sung-hoon;Seong Yeong-rak;Oh Ha-ryoung
    • The KIPS Transactions:PartB
    • /
    • v.11B no.7 s.96
    • /
    • pp.831-840
    • /
    • 2004
  • In this paper, we propose a scheme to implement intelligent game characters based on neural networks. The neural networks that implement an intelligent game character receive the action of an opponent character and the distance between the two characters, decide the intelligent character's action, and output the decision. The neural networks are trained by reinforcement learning, using the scores acquired by the actions of the two characters as reinforcement values. To show the usefulness of the proposed scheme, a simple fighting action game is implemented and various experiments are performed. Experimental results show that the proposed intelligent characters can learn the rules of the game. The proposed scheme can be applied to massively multiplayer online games as well as fighting action games.
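Using score changes as reinforcement values, as this abstract describes, can be sketched with a tabular stand-in for the paper's neural network. The state encoding, action set, and learning rate below are assumptions for illustration only:

```python
# Minimal sketch: a tabular stand-in for the network, reinforced with the
# score delta produced by each exchange between the two characters.
import random

Q = {}          # (state, action) -> estimated value
ACTIONS = ["punch", "kick", "guard", "move"]
ALPHA = 0.1     # assumed learning rate

def choose(state, eps=0.1):
    """Epsilon-greedy choice over the learned values."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def reinforce(state, action, score_delta):
    """Move the value estimate toward the observed score change."""
    key = (state, action)
    Q[key] = Q.get(key, 0.0) + ALPHA * (score_delta - Q.get(key, 0.0))

state = ("kick", 1)                      # (opponent action, distance bucket)
reinforce(state, "guard", score_delta=5.0)
print(choose(state, eps=0.0))            # guard now has the highest value
```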

A situation-Flexible and Action-Oriented Cyber Response Mechanism against Intelligent Cyber Attack (지능형 사이버공격 대비 상황 탄력적 / 실행 중심의 사이버 대응 메커니즘)

  • Kim, Namuk;Eom, Jungho
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.16 no.3
    • /
    • pp.37-47
    • /
    • 2020
  • In the 4th industrial revolution, cyberspace will evolve into hyper-connectivity, super-convergence, and super-intelligence due to the development of advanced information and communication technologies, which will connect the nation's core infrastructure into a single network. As 4th-industrial-revolution technologies are applied to cyber attack techniques, attacks are evolving in intelligent and sophisticated ways. In responding to intelligent cyber attacks, it is difficult to guarantee self-defense in cyberspace with policy-oriented, preplanned, and hierarchical cyber response strategies. Therefore, this research proposes a situation-flexible and action-oriented cyber response mechanism that can respond flexibly by selecting the most suitable smart security solution as the steps of a cyber attack change. The proposed cyber response mechanism operates the smart security solutions according to action-oriented detailed strategies. In addition, an artificial-intelligence-based decision-making system is used to select the smart security technology with the best responsiveness.

Explicit Dynamic Coordination Reinforcement Learning Based on Utility

  • Si, Huaiwei;Tan, Guozhen;Yuan, Yifu;peng, Yanfei;Li, Jianping
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.3
    • /
    • pp.792-812
    • /
    • 2022
  • Multi-agent systems often need to coordinate in order to learn a task more effectively. Although the introduction of deep learning has addressed the state-space problem, multi-agent learning remains infeasible because of the joint action space. A large-scale joint action space can be sparsified according to an implicit or explicit coordination structure, which can ensure reasonable coordinated actions. In general, a multi-agent system is dynamic, which makes both the relations among agents and the coordination structure dynamic. An explicit coordination structure can therefore better represent the coordinative relationships among agents and achieve better coordination. Inspired by the maximization of social group utility, we dynamically construct a factor graph as an explicit coordination structure to express the coordinative relationships according to the utility among agents, and we estimate the joint action values based on the local utility transferred among factor graphs. We apply these techniques to multiple intelligent vehicle systems, where the state and action spaces are problematic and there are many interactions among agents. The results on multiple intelligent vehicle systems demonstrate the efficiency and effectiveness of the proposed methods.
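The sparsification idea in this abstract, building an explicit coordination structure only where pairwise utility warrants it, can be sketched as follows. The threshold and the utility values are illustrative assumptions, not the paper's construction:

```python
# Illustrative sketch: keep a factor-graph edge between two agents only when
# their pairwise utility is high enough, sparsifying the joint action space.

def build_factor_graph(pairwise_utility, threshold=0.5):
    """Return the agent pairs whose utility justifies explicit coordination."""
    edges = []
    for (i, j), u in pairwise_utility.items():
        if u >= threshold:
            edges.append((i, j))
    return edges

u = {(0, 1): 0.9, (0, 2): 0.2, (1, 2): 0.7}
print(build_factor_graph(u))   # [(0, 1), (1, 2)]
```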

An Implementation of Neural Networks Intelligent Characters for Fighting Action Games (대전 액션 게임을 위한 신경망 지능 캐릭터의 구현)

  • Cho, Byeong-Heon;Jung, Sung-Hoon;Seong, Yeong-Rak;Oh, Ha-Ryoung
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.4
    • /
    • pp.383-389
    • /
    • 2004
  • This paper proposes a method to provide intelligence for characters in fighting action games by using a neural network. Each action takes several time units in typical fighting action games, so the results of a character's action are not exposed immediately but some time units later. To design a suitable neural network for such characters, it is very important to decide when the neural network is taught and which values are used to teach it. The fitness of a character's action is determined according to the scores. For learning, the decision that caused the score is identified, and then the neural network is taught using the score change together with the input and output values that were applied when that decision was made. To evaluate the performance of the proposed algorithm, many experiments are executed in a simple action-game environment that is nevertheless very similar to actual fighting action games. The results show that the intelligent character trained by the proposed algorithm outperforms random characters by up to 3.6 times, so we can conclude that the intelligent character reacts properly to the actions of its opponent. The proposed method can be applied to various games in which characters confront each other, e.g. massively multiplayer online games.
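The delayed-credit idea in this abstract, buffering the inputs and output at decision time and training only when the resulting score change arrives, can be sketched as below. The queue discipline and callback names are assumptions for illustration:

```python
# Hedged sketch: actions span several time units, so each decision's
# inputs/outputs are buffered and training happens only once the
# corresponding score change is observed.
from collections import deque

pending = deque()   # (inputs, chosen_action) pairs awaiting their outcome

def on_decision(inputs, action):
    pending.append((inputs, action))

def on_score_change(score_delta, train_step):
    """Pop the decision that caused this score change and train on it."""
    if pending:
        inputs, action = pending.popleft()
        train_step(inputs, action, score_delta)

log = []
on_decision({"dist": 3}, "kick")
on_score_change(10, lambda i, a, r: log.append((a, r)))
print(log)   # [('kick', 10)]
```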

Actor-Critic Algorithm with Transition Cost Estimation

  • Sergey, Denisov;Lee, Jee-Hyong
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.16 no.4
    • /
    • pp.270-275
    • /
    • 2016
  • We present an approach for accelerating the actor-critic algorithm for reinforcement learning with a continuous action space. The actor-critic algorithm has already proved its robustness to infinitely large action spaces in various high-dimensional environments. Despite that success, the main problem of the actor-critic algorithm remains the same: the speed of convergence to the optimal policy. In a high-dimensional state and action space, searching for the correct action in each state takes an enormously long time. Therefore, in this paper we suggest a search-accelerating function that increases the speed of convergence and reaches the optimal policy faster. In our method, we assume that actions may have their own preference distribution that is independent of the state. Since the agent acts randomly in the environment at the beginning of learning, it is more efficient if actions are taken according to some heuristic function. We demonstrate that the heuristically accelerated actor-critic algorithm learns the optimal policy faster, using an Educational Process Mining dataset with records of students' course learning processes and their grades.
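A state-independent heuristic preference of the kind this abstract assumes can be sketched as a bias term added to the actor's logits, with a weight that decays over training so the learned policy eventually dominates. The blending form and decay schedule below are assumptions, not the paper's exact function:

```python
# Sketch (assumed names): blend the actor's logits with a decaying,
# state-independent heuristic preference over actions.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def action_distribution(actor_logits, heuristic_pref, step, decay=0.01):
    """Action probabilities with a heuristic bias that fades as step grows."""
    w = math.exp(-decay * step)   # heuristic weight shrinks over training
    return softmax([l + w * h for l, h in zip(actor_logits, heuristic_pref)])

logits = [0.1, 0.1, 0.1]
pref = [2.0, 0.0, 0.0]            # prior preference for action 0
early = action_distribution(logits, pref, step=0)
late = action_distribution(logits, pref, step=1000)
print(early[0] > late[0])         # True: the heuristic's influence fades
```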