• Title/Summary/Keyword: Game Agent

Deep Q-Network based Game Agents (심층 큐 신경망을 이용한 게임 에이전트 구현)

  • Han, Dongki;Kim, Myeongseop;Kim, Jaeyoun;Kim, Jung-Su
    • The Journal of Korea Robotics Society / v.14 no.3 / pp.157-162 / 2019
  • The video game Tetris is one of the most popular games, and it is well known that its rules can be modelled as an MDP (Markov Decision Process). This paper presents a DQN (Deep Q-Network) based game agent for Tetris. To this end, the state is defined as the captured image of the Tetris board, and the reward is designed as a function of the lines cleared by the agent. The actions are left, right, rotate, drop, and a finite number of their combinations. In addition, PER (Prioritized Experience Replay) is employed to enhance learning performance. More than 500,000 episodes are used to train the network, and the game agent uses the trained network to make decisions. The performance of the developed algorithm is validated not only in simulation but also on a physical Tetris-playing robot built from a camera, two Arduinos, four servo motors, and 3D-printed artificial fingers.
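The two ingredients named in the abstract, a reward based on cleared lines and PER, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the reward coefficients and buffer parameters are assumptions, and the proportional sampler uses a plain list instead of the usual sum-tree.

```python
import random

# Hypothetical reward shaping: reward as a function of cleared lines
# (the quadratic bonus is an assumption, not taken from the paper).
def tetris_reward(lines_cleared):
    return lines_cleared ** 2  # clearing several lines at once pays more

class PrioritizedReplay:
    """Minimal proportional PER buffer (no sum-tree, for clarity)."""
    def __init__(self, capacity=10000, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error):
        # Transitions with larger TD error get sampled more often.
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size):
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        return random.choices(self.buffer, weights=probs, k=batch_size)
```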

Bargaining Game using Artificial agent based on Evolution Computation (진화계산 기반 인공에이전트를 이용한 교섭게임)

  • Seong, Myoung-Ho;Lee, Sang-Yong
    • Journal of Digital Convergence / v.14 no.8 / pp.293-303 / 2016
  • Analyses of bargaining games using evolutionary computation have dealt with important issues in game theory in recent years. In this paper, we investigate the interaction and co-evolution process among heterogeneous artificial agents using evolutionary computation (EC) in the bargaining game. We present three kinds of evolving-strategy agents participating in the bargaining game, based on genetic algorithms (GA), particle swarm optimization (PSO), and differential evolution (DE). The co-evolutionary processes among the three kinds of artificial agents (GA-agent, PSO-agent, and DE-agent) are tested to observe which EC-agent performs best in the bargaining game. The simulation results show that, with respect to co-evolution in the bargaining game, the PSO-agent outperforms the GA-agent and the DE-agent, and the GA-agent outperforms the DE-agent. To understand why the PSO-agent is the best of the three, we observed the agents' strategies after the games were completed. The results indicate that the PSO-agent evolves toward a strategy of gaining as much as possible at the risk of gaining nothing if the transaction fails, while the GA-agent and the DE-agent evolve toward strategies of completing the transaction regardless of the amount gained.
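To make the PSO-agent concrete, a standard PSO update for a one-dimensional bargaining strategy (the fraction an agent demands) might look like the sketch below. The inertia and attraction coefficients are conventional defaults, not values from the paper, and the clamp to [0, 1] is an assumption about how demands are encoded.

```python
import random

# Illustrative PSO step for one particle encoding a demand fraction.
# w, c1, c2 are common textbook defaults, not the paper's settings.
def pso_step(position, velocity, personal_best, global_best,
             w=0.7, c1=1.5, c2=1.5):
    r1, r2 = random.random(), random.random()
    # Pull the particle toward its own best and the swarm's best.
    velocity = (w * velocity
                + c1 * r1 * (personal_best - position)
                + c2 * r2 * (global_best - position))
    position = min(1.0, max(0.0, position + velocity))  # demand in [0, 1]
    return position, velocity
```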

Comparison of Reinforcement Learning Activation Functions to Improve the Performance of the Racing Game Learning Agent

  • Lee, Dongcheul
    • Journal of Information Processing Systems / v.16 no.5 / pp.1074-1082 / 2020
  • Recently, research has been actively conducted on artificial intelligence agents that learn games through reinforcement learning. Several factors determine performance when an agent learns a game, and the choice of activation function is an important one. This paper compares and evaluates which activation function yields the best results when an agent learns a 2D racing game through reinforcement learning. We built the agent using a reinforcement learning algorithm and a neural network, and evaluated the activation functions by swapping them in the network one at a time. We measured the reward, the output of the advantage function, and the output of the loss function during training and testing. As a result of the performance evaluation, we identified the best activation function for the agent; the difference between the best and the worst was 35.4%.
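The experimental design, training one agent per activation function and ranking them by reward, can be sketched as below. The abstract does not say which activations were compared, so this selection of common candidates is an assumption for illustration.

```python
import math

# A set of common candidate activations (an assumption; the abstract
# does not list the functions the study actually compared).
def relu(x): return max(0.0, x)
def leaky_relu(x, slope=0.01): return x if x > 0 else slope * x
def elu(x, alpha=1.0): return x if x > 0 else alpha * (math.exp(x) - 1)

ACTIVATIONS = {"relu": relu, "leaky_relu": leaky_relu,
               "tanh": math.tanh, "elu": elu}

def best_activation(mean_rewards):
    """Given {activation name: mean reward}, pick the winner."""
    return max(mean_rewards, key=mean_rewards.get)
```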

A Study about the Usefulness of Reinforcement Learning in Business Simulation Games using PPO Algorithm (경영 시뮬레이션 게임에서 PPO 알고리즘을 적용한 강화학습의 유용성에 관한 연구)

  • Liang, Yi-Hong;Kang, Sin-Jin;Cho, Sung Hyun
    • Journal of Korea Game Society / v.19 no.6 / pp.61-70 / 2019
  • In this paper, we apply reinforcement learning to business simulation games to check whether game agents can autonomously achieve a given goal. In this system, we apply the PPO (Proximal Policy Optimization) algorithm in the Unity Machine Learning (ML) Agents environment, and the game agent is designed to find a way to play automatically. Five game-scenario simulation experiments were conducted to verify its usefulness. As a result, it was confirmed that the game agent achieves the goal through learning despite changes in the game's environment variables.
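The core of the PPO algorithm the paper applies is the clipped surrogate objective, shown here per-sample as a minimal scalar sketch (the clip range 0.2 is PPO's usual default, not necessarily the paper's setting).

```python
# PPO's clipped surrogate objective for one (state, action) sample.
# ratio = pi_new(a|s) / pi_old(a|s); eps=0.2 is the conventional default.
def ppo_clipped_objective(ratio, advantage, eps=0.2):
    clipped_ratio = max(min(ratio, 1 + eps), 1 - eps)
    # Taking the min prevents overly large, destabilizing policy updates.
    return min(ratio * advantage, clipped_ratio * advantage)
```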

A Design of a Coordination Agent Controlling Decision with Each Other Agents in RTS (RTS 게임에서 에이전트와 상호 의사를 조절하는 조정 에이전트의 설계)

  • Park, Jin-Young;Sung, Yun-Sick;Cho, Kyung-Eun;Um, Ky-Hyun
    • Journal of Korea Game Society / v.9 no.5 / pp.117-125 / 2009
  • In a real-time strategy (RTS) game, each team is composed of agents and executes strategies to defeat the other team. A strategy requires cooperation among the agents in a team, which calls for a multi-agent system (MAS). In centralized decision making, one form of decision making in MAS, a coordination agent selects actions for the team as a whole rather than for individual agents; decentralized decision making is costly because each agent must communicate with every other. In this paper, we propose a system in which a coordination agent controls agents by grouping them and allocates roles through negotiation. When an allocated action is not executed or fails, the coordination agent reallocates that role to another agent. We ran experiments in StarCraft, a famous RTS game. When the proposed method is applied, attack and defense performance increases: the improved agent team wins eight out of ten games.

Card Battle Game Agent Based on Reinforcement Learning with Play Level Control (플레이 수준 조절이 가능한 강화학습 기반 카드형 대전 게임 에이전트)

  • Yong Cheol Lee;Chill woo Lee
    • Smart Media Journal / v.13 no.2 / pp.32-43 / 2024
  • Game agents, the behavioral agents that play a game, are a crucial component of player satisfaction. However, it takes a lot of time and effort to create game agents for various game levels, environments, and players. In addition, when the game environment changes, such as when content is added or characters are updated, new game agents need to be developed, and the development difficulty gradually increases. It is also important to have a game agent that can be tuned to players of different levels: an agent that can play at various levels is more useful, and can satisfy more players, than a single high-level agent. In this paper, we propose a method for learning and controlling the play level of game agents that can be rapidly developed and fine-tuned for various game environments and changes. For reinforcement learning, we apply IMPALA, a policy-based distributed reinforcement learning method, for flexible handling of various behavioral structures and fast learning. Once reinforcement learning is complete, we choose actions by sampling with the Softmax-Temperature method. The results show that the agent's play level decreases as the temperature value increases, demonstrating that the play level can be controlled easily.
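The Softmax-Temperature sampling the abstract describes can be sketched as follows: dividing the policy logits (or Q-values) by a temperature before the softmax flattens the distribution as the temperature grows, so the agent acts more randomly and its play level drops. The use of raw Q-values here is an assumption for illustration.

```python
import math
import random

# Temperature-controlled action sampling: low T ~ near-greedy play,
# high T ~ near-uniform (weaker) play.
def sample_action(q_values, temperature=1.0):
    scaled = [q / temperature for q in q_values]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    action = random.choices(range(len(q_values)), weights=probs)[0]
    return action, probs
```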

Analyzing the Online Game User's Game Item Transacting Behaviors by Using Fuzzy Logic Agent-Based Modeling Simulation (온라인 게임 사용자의 게임 아이템 거래 행동 특성 분석을 위한 퍼지논리 에이전트 기반 모델링 시뮬레이션)

  • Min Kyeong Kim;Kun Chang Lee
    • Information Systems Review / v.23 no.1 / pp.1-22 / 2021
  • This study analyzes online game users' item-transacting behaviors in two game genres, MMORPG and sports games. To conduct the analysis, we adopted fuzzy logic agent-based modeling. In online games, item transactions are crucial to a game company's profitability, yet previous studies investigating users' item-transacting activities are scarce. Since many factors must be addressed in a complicated way, an ABM (agent-based modeling) simulation mechanism is adopted. A fuzzy logic component is also included because many uncertainties and ambiguities surround users' complex item-transacting behaviors. Simulation results from applying the fuzzy logic ABM method revealed that MMORPG users are motivated to pay high prices for high-performance game items, while sports game users tend to transact items within a reasonable price range. We conclude that the proposed fuzzy logic ABM simulation mechanism is useful for organizing an effective strategy for online game item management and customer retention.
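A fuzzy-logic ABM typically grades agent attributes (e.g. willingness to pay) with membership functions rather than crisp thresholds. The triangular membership function below is the usual building block; its shape and any cut-off values are assumptions, since the abstract does not specify the rule base.

```python
# Hypothetical triangular membership function: the degree to which x
# belongs to a fuzzy set peaking at b over the support [a, c].
def triangular(x, a, b, c):
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)   # rising edge
    return (c - x) / (c - b)       # falling edge
```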

Intelligent Vocabulary Recommendation Agent for Educational Mobile Augmented Reality Games (교육용 모바일 증강현실 게임을 위한 지능형 어휘 추천 에이전트)

  • Kim, Jin-Il
    • Journal of Convergence for Information Technology / v.9 no.2 / pp.108-114 / 2019
  • In this paper, we propose an intelligent vocabulary recommendation agent that automatically provides vocabulary matching game-based learners' needs and requirements in a mobile educational augmented reality game environment. The proposed agent reflects the characteristics of mobile and augmented reality technology as much as possible, and includes a vocabulary reasoning module, a single-game vocabulary recommendation module, a battle-game vocabulary recommendation module, a learning vocabulary list module, and a thesaurus module. Game-based learners were generally satisfied: the precision of context vocabulary reasoning and of the thesaurus was 4.01 and 4.11, respectively, showing that vocabulary related to the learner's situation is extracted. However, satisfaction with battle-game vocabulary (3.86) was relatively low compared to single-game vocabulary (3.94), because the battle mode recommends vocabulary that must be usable jointly across individual learners' recommendation lists.

An Optimization Strategy of Task Allocation using Coordination Agent (조정 에이전트를 이용한 작업 할당 최적화 기법)

  • Park, Jae-Hyun;Um, Ky-Hyun;Cho, Kyung-Eun
    • Journal of Korea Game Society / v.7 no.4 / pp.93-104 / 2007
  • In complex real-time multi-agent systems such as game environments, dynamic task allocation is performed repeatedly to achieve a goal efficiently. In this research, we present a task allocation scheme suitable for real-time multi-agent environments. The scheme optimizes task allocation by complementing an existing coordination agent with the $A^*$ algorithm. The coordination agent creates a status graph whose nodes represent combinations of tasks and agents, and refines the graph by removing nodes for tasks and agents that cannot be executed. For real-time re-allocation, the coordination agent selectively uses the $A^*$ method and the greedy method, and finds minimum-cost paths as optimized results using $A^*$. Our experiments show that the coordination agent with the $A^*$ algorithm improves task allocation efficiency by about 25% over a coordination agent using only the greedy algorithm.
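The gap between greedy and optimal allocation that motivates the paper's $A^*$ search can be seen on a tiny cost matrix. In this sketch, brute-force enumeration stands in for the optimal ($A^*$) result; the square agents-by-tasks cost matrix is an assumption for illustration, not the paper's graph representation.

```python
from itertools import permutations

# Greedy: each agent, in turn, grabs its cheapest remaining task.
def greedy_allocation(cost):
    n, taken, total = len(cost), set(), 0
    for agent in range(n):
        task = min((t for t in range(n) if t not in taken),
                   key=lambda t: cost[agent][t])
        taken.add(task)
        total += cost[agent][task]
    return total

# Optimal: exhaustive search over all one-to-one assignments
# (stands in here for an A*-style optimal search on small inputs).
def optimal_allocation(cost):
    n = len(cost)
    return min(sum(cost[a][perm[a]] for a in range(n))
               for perm in permutations(range(n)))
```

On `[[1, 2], [1, 10]]`, greedy lets agent 0 take the cheap task and forces agent 1 into the expensive one, while the optimal assignment avoids it entirely.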

Implementation of Crowd Behavior of Pedestrians Based on AB and CA Mathematical Models in an Intelligent Game Environment (게임환경에서 AB 와 CA 수학모델을 이용한 보행자들의 집단행동 구현)

  • Kim, Seongdong;Kim, Jonghyun
    • Journal of Korea Game Society / v.19 no.6 / pp.5-14 / 2019
  • In this paper, we propose modeling and simulation of the group movement behavior of pedestrians using agent-based (AB) and cellular automata (CA) models in an intelligent game environment. The social behaviors of crowds are complex and important; based on this, a prototype game model was implemented to show crowd interaction with AB and CA in the game environment. Our experiment shows the promise of this approach as a cost-efficient yet accurate platform for researching crowd behavior in risk situations with realistic models.
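The CA side of such a model can be illustrated with a minimal one-dimensional pedestrian rule: each occupied cell advances one step if the cell ahead is free. This toy rule is an assumption for illustration only, not the paper's actual update scheme.

```python
# One synchronous update of a 1D pedestrian CA: 1 = occupied, 0 = empty.
# An agent moves right one cell if that cell is empty; otherwise it waits.
def ca_step(cells):
    out = [0] * len(cells)
    for i, occupied in enumerate(cells):
        if occupied:
            ahead_free = (i + 1 < len(cells)
                          and cells[i + 1] == 0 and out[i + 1] == 0)
            if ahead_free:
                out[i + 1] = 1   # advance
            else:
                out[i] = 1       # blocked: stay put
    return out
```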