• Title/Summary/Keyword: A* Artificial Intelligence Game

A Neural Network-based Artificial Intelligence Algorithm with Movement for the Game NPC (게임 NPC를 위한 신경망 기반의 이동 인공지능 알고리즘)

  • Joe, In-Whee; Choi, Moon-Won
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.12A / pp.1181-1187 / 2010
  • This paper proposes a mobile AI (Artificial Intelligence) that performs decision-making in a game by training an intelligent character with a neural network. The neural network is trained on the input/output values of an algorithm that defines the game rules and the problem-solving method, so the trained character can perceive its circumstances and take appropriate action. The mobile AI is designed step by step, and a simple game is implemented as a functional experiment: a goal, the character, and obstacles are placed on a regular 2D space, and the character must reach the goal while evading obstacles. The mobile AI can achieve its goal in a changing environment by learning the solutions to several problems through the algorithm defined in each experiment; the defined algorithm and the neural network are designed to share the same input/output representation. The experimental results show that the proposed mobile AI perceives its circumstances, acts accordingly, and completes its mission. If the mobile AI learns the defined algorithm even in a game of complex structure, its neural network should produce appropriate results even in a changing environment.
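
A minimal sketch of the imitation idea described in this abstract, not the paper's actual network or game: a small MLP (here scikit-learn's MLPClassifier, an assumption) learns the input/output behaviour of a hand-written movement rule on a 2D grid and then picks moves on its own. The feature layout and the teacher rule are illustrative.

```python
# Minimal sketch (not the paper's network): an MLP imitates a rule-based
# movement policy on a 2D grid, then chooses moves on its own.
import numpy as np
from sklearn.neural_network import MLPClassifier

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # up, down, right, left

def teacher_move(dx, dy, blocked):
    """Rule-based 'algorithm' the network imitates: step toward the goal,
    preferring the direction most aligned with the goal offset, skipping blocked moves."""
    order = sorted(range(4), key=lambda i: -(MOVES[i][0] * dx + MOVES[i][1] * dy))
    for i in order:
        if not blocked[i]:
            return i
    return order[0]

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(5000):
    dx, dy = rng.integers(-5, 6, size=2)   # goal offset from the character
    blocked = rng.random(4) < 0.2          # which of the 4 neighbour cells are obstacles
    X.append([dx, dy, *blocked.astype(float)])
    y.append(teacher_move(dx, dy, blocked))

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(np.array(X), np.array(y))

# The trained character perceives its surroundings and picks a move itself.
print("chosen move:", MOVES[net.predict([[3, -1, 0.0, 1.0, 0.0, 0.0]])[0]])
```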

Comparison of Learning Performance by Reinforcement Learning Agent Visibility Information Difference (강화학습 에이전트 시야 정보 차이에 의한 학습 성능 비교)

  • Kim, Chan Sub; Jang, Si-Hwan; Yang, Seong-Il; Kang, Shin Jin
    • Journal of Korea Game Society / v.21 no.5 / pp.17-28 / 2021
  • Reinforcement learning, in which an artificial intelligence improves itself to find the best solution to a problem, is a technology of high value in many fields. The game field in particular has the advantage of providing a virtual problem-solving environment for reinforcement learning, and a reinforcement learning agent solves problems by observing information about its situation and environment. In this experiment, the instanced dungeon environment of an RPG was simplified and implemented, and various observation variables related to the agent's field of view were configured. The results show how much each variable affects the learning speed, and they can serve as a reference for studies on reinforcement learning in RPGs.
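
A rough illustration, not the paper's environment: how a field-of-view radius changes the observation an agent receives in a simplified grid dungeon. The cell encoding and the local_observation helper are hypothetical.

```python
# Illustrative sketch only: a larger view radius gives the agent more
# information but also a larger input vector for the policy network.
import numpy as np

WALL, FLOOR, ENEMY, AGENT = 0, 1, 2, 3

def local_observation(grid, agent_pos, view_radius):
    """Crop a (2r+1) x (2r+1) window around the agent; cells outside the map count as walls."""
    r = view_radius
    padded = np.pad(grid, r, constant_values=WALL)
    y, x = agent_pos[0] + r, agent_pos[1] + r
    window = padded[y - r:y + r + 1, x - r:x + r + 1]
    return window.flatten()  # flat vector fed to the agent

dungeon = np.array([
    [1, 1, 1, 1, 1],
    [1, 0, 1, 2, 1],
    [1, 1, 3, 1, 1],
    [1, 1, 1, 1, 1],
])
for radius in (1, 2):
    obs = local_observation(dungeon, agent_pos=(2, 2), view_radius=radius)
    print(f"radius {radius}: observation size {obs.size}")
```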

Comparison of Reinforcement Learning Activation Functions to Improve the Performance of the Racing Game Learning Agent

  • Lee, Dongcheul
    • Journal of Information Processing Systems / v.16 no.5 / pp.1074-1082 / 2020
  • Recently, research has been actively conducted on artificial intelligence agents that learn games through reinforcement learning. Several factors determine performance when an agent learns a game, and the choice of activation function is one of them. This paper compares and evaluates which activation function yields the best results when an agent learns a 2D racing game through reinforcement learning. We built the agent from a reinforcement learning algorithm and a neural network, and evaluated the activation functions by swapping them in the network one by one, measuring the reward, the output of the advantage function, and the output of the loss function during training and testing. From the performance evaluation, we identified the best activation function for the agent; the difference between the best and the worst was 35.4%.
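
A minimal sketch of the comparison setup, assuming PyTorch and an 8-feature observation with 3 actions (both assumptions): identical policy networks are built that differ only in their activation function, so each variant can be trained and its reward curve compared.

```python
# Sketch: same architecture every time; only the nonlinearity changes.
import torch
import torch.nn as nn

ACTIVATIONS = {
    "relu": nn.ReLU,
    "leaky_relu": nn.LeakyReLU,
    "tanh": nn.Tanh,
    "elu": nn.ELU,
}

def make_policy(obs_dim, n_actions, activation_cls):
    return nn.Sequential(
        nn.Linear(obs_dim, 64), activation_cls(),
        nn.Linear(64, 64), activation_cls(),
        nn.Linear(64, n_actions),
    )

obs = torch.randn(1, 8)  # dummy racing-game observation (8 features assumed)
for name, act in ACTIVATIONS.items():
    policy = make_policy(obs_dim=8, n_actions=3, activation_cls=act)
    logits = policy(obs)
    print(name, logits.shape)  # each variant is then trained and its reward compared
```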

Design of an Infant's App using AI for increasing Learning Effect (학습효과 증대를 위한 인공지능을 이용한 영유아 앱 설계)

  • Oh, Sun Jin
    • The Journal of the Convergence on Culture Technology / v.6 no.4 / pp.733-738 / 2020
  • Although many apps are developed and distributed nowadays, it is hard to find apps for infants, especially for children under five. Selecting a proper infant's app is difficult because it should be useful, safe, and helpful for the development of the child's intelligence. In this research, we design an infant's app that supports intellectual development by applying AI technology to increase the learning effect and satisfy infants' needs. The proposed app is a collection of games for infants, such as a picture puzzle, shape coloring, and sticker pasting, together with a mock mobile-phone feature that lets them play a phone game. Furthermore, the app collects and analyzes the log information generated while they play, and shares and compares it with other infants' logs to increase the learning effect. It then learns each child's game tendency, comprehension, and skill, and applies them to the next game to increase the child's interest and concentration.
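
A purely illustrative sketch of the log-based adaptation idea described in the abstract; the app's real logging schema, thresholds, and adaptation logic are not published, so every name and number here is hypothetical.

```python
# Hypothetical sketch: estimate a child's skill per game from simple play logs
# and pick the next difficulty level from it.
from dataclasses import dataclass
from statistics import mean

@dataclass
class PlayLog:
    game: str          # "puzzle", "coloring", "sticker", ...
    completed: bool
    seconds: float

def next_difficulty(logs, game, base_level=1):
    """Raise the level when the child finishes quickly and reliably; lower it otherwise."""
    relevant = [l for l in logs if l.game == game]
    if not relevant:
        return base_level
    success_rate = mean(l.completed for l in relevant)
    avg_time = mean(l.seconds for l in relevant)
    if success_rate > 0.8 and avg_time < 60:
        return base_level + 1
    if success_rate < 0.4:
        return max(1, base_level - 1)
    return base_level

logs = [PlayLog("puzzle", True, 42), PlayLog("puzzle", True, 55), PlayLog("puzzle", False, 90)]
print("next puzzle level:", next_difficulty(logs, "puzzle", base_level=2))
```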

Intuitive Game Design as digital therapeutic tool for silver-generation

  • Hyein Kwon; Chan Lim
    • International Journal of Advanced Culture Technology / v.12 no.1 / pp.305-310 / 2024
  • The purpose of this study is to implement game content within the generative artificial intelligence module Chat-GPTs, grounded in the humanistic discourse of self-reflection. The content aims to restore the dignity of the silver generation, which has been marginalized by digital technology, and to serve as a prototype of a digital psychotherapeutic tool. However, few such products are commercially available, and digital therapeutics developed in the form of content are virtually nonexistent. We therefore plan to commercialize a prototype of digital psychotherapy that can flexibly adapt to the living environments and time constraints of the elderly. This study extends the game content 'Daily Run' created by Hyein Kwon, an undergraduate student at Kyungil University.

Automatic Map Generation without an Isolated Cave Using Cell Automata Enhanced by Binary Space Partitioning (이진 공간 분할로 보강된 셀 오토마타를 이용한 고립 동굴 없는 맵 자동 생성)

  • Kim, Ji-Min; Oh, Pyeong; Kim, Sun-Jeong; Hong, Seokmin
    • Journal of Korea Game Society / v.16 no.6 / pp.59-68 / 2016
  • Content generation has recently drawn much attention in the area of game artificial intelligence, and efforts to generate content automatically, without the help of game level designers, have continued across various kinds of game content. This study proposes automatic map generation without isolated caves, using cellular automata enhanced by binary space partitioning (BSP). BSP makes it possible to specify the desired number of areas, and the cellular automata reduce the time needed to search for a path. Based on our preliminary simulation results, we show the usefulness of the proposed method by applying BSP-enhanced cellular automata content generation to games.
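
A rough sketch of the technique's two ingredients, with common defaults rather than the paper's exact parameters: a cellular-automata smoothing pass generates the cave, and a BSP split (stubbed out here) would give each leaf region its own connected cave and corridors, which is what rules out isolated caves.

```python
# Sketch: cellular-automata cave smoothing plus a BSP partition stub.
import numpy as np

rng = np.random.default_rng(42)

def random_fill(h, w, wall_prob=0.45):
    return (rng.random((h, w)) < wall_prob).astype(int)  # 1 = wall, 0 = floor

def smooth(grid, iterations=4):
    """Classic cave rule: a cell becomes a wall if 5+ of its 3x3 neighbourhood
    (including itself) are walls."""
    for _ in range(iterations):
        padded = np.pad(grid, 1, constant_values=1)
        neighbours = sum(
            padded[dy:dy + grid.shape[0], dx:dx + grid.shape[1]]
            for dy in range(3) for dx in range(3)
        )
        grid = (neighbours >= 5).astype(int)
    return grid

def bsp_partition(x, y, w, h, min_size=8):
    """BSP stub: recursively split the map into regions; each leaf would get its
    own cave and corridors to its siblings, preventing isolated caves."""
    if w <= min_size * 2 and h <= min_size * 2:
        return [(x, y, w, h)]
    if w >= h:
        cut = rng.integers(min_size, w - min_size)
        return bsp_partition(x, y, cut, h) + bsp_partition(x + cut, y, w - cut, h)
    cut = rng.integers(min_size, h - min_size)
    return bsp_partition(x, y, w, cut) + bsp_partition(x, y + cut, w, h - cut)

cave = smooth(random_fill(32, 48))
print("leaf regions:", len(bsp_partition(0, 0, 48, 32)))
print("wall ratio:", cave.mean().round(2))
```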

Flexible Development Architecture for Game NPC Intelligence to Support Load Sharing and Group Behavior (게임NPC지능 개발을 위한 부하분산과 그룹 행동을 지원하는 유연한 플랫폼 구조)

  • Im Cha-Seop; Kim Tae-Yong
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.2 s.308 / pp.40-51 / 2006
  • As computer games become more complex and consumers demand more sophisticated computer-controlled NPCs, developers must place greater emphasis on the artificial intelligence of their games. A platform for developing game NPC intelligence should support real-time operation, independence, flexibility, group behavior, and various AI techniques so that NPCs are reactive, realistic, and easy to develop. This paper presents an architecture that satisfies these criteria. The proposed platform shows higher performance than existing platforms through load sharing, and it also supports various AI techniques, efficient group behavior, and independent development of NPC intelligence.
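
The paper's platform is not public, so the following is only an illustrative sketch of the two ideas named above: NPC "think" updates are spread across frames so the per-frame AI cost stays bounded (load sharing), and NPCs in the same group reuse one shared group decision. All class and method names are hypothetical.

```python
# Hypothetical sketch of load sharing and group behavior for NPC AI updates.
from collections import deque

class NPC:
    def __init__(self, name, group=None):
        self.name, self.group, self.action = name, group, "idle"
    def think(self, group_order=None):
        # Individual AI, or follow the group's shared decision if one exists.
        self.action = group_order or "patrol"

class AIScheduler:
    def __init__(self, budget_per_frame=2):
        self.queue = deque()
        self.budget = budget_per_frame
        self.group_orders = {}          # group name -> shared decision
    def register(self, npc):
        self.queue.append(npc)
    def set_group_order(self, group, order):
        self.group_orders[group] = order
    def update_frame(self):
        # Load sharing: only `budget` NPCs run their AI this frame; the rest wait.
        for _ in range(min(self.budget, len(self.queue))):
            npc = self.queue.popleft()
            npc.think(self.group_orders.get(npc.group))
            self.queue.append(npc)

sched = AIScheduler(budget_per_frame=2)
npcs = [NPC(f"guard{i}", group="patrol_team") for i in range(5)]
for n in npcs:
    sched.register(n)
sched.set_group_order("patrol_team", "advance")
for frame in range(3):
    sched.update_frame()
print([n.action for n in npcs])
```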

Implementation of NPC Artificial Intelligence Using Agonistic Behavior of Animals (동물의 세력 투쟁 행동을 이용한 게임 인공 지능 구현)

  • Lee, MyounJae
    • Journal of Digital Convergence / v.12 no.1 / pp.555-561 / 2014
  • Artificial intelligence in games is mainly used to determine the behavior patterns of NPCs (Non-Player Characters) and enemies, and for path finding. This artificial intelligence is usually implemented with an FSM (Finite State Machine) or a flocking method. In the FSM approach, the number of NPC behaviors is limited by the number of FSM states: with too few states, players can easily learn the NPCs' behavior patterns, while too many states make the implementation complicated. In the flocking approach, the NPCs' behavior follows the leader's decision, so players can easily predict the NPCs' movement or attack patterns. To overcome these problems, this paper applies animals' agonistic behaviors (attack, threat, showing courtesy, avoidance, submission) to NPCs and implements them with the Unity3D engine. This work can help in developing more realistic NPC artificial intelligence.
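
A minimal sketch of the selection idea, not the paper's Unity3D implementation: an agonistic behaviour is chosen from the relative strength of the NPC and its opponent. The thresholds and the strength ratio are illustrative assumptions.

```python
# Sketch: pick an agonistic behaviour from relative strength (thresholds assumed).
def agonistic_behavior(own_strength, opponent_strength, own_territory):
    ratio = own_strength / max(opponent_strength, 1e-6)
    if ratio >= 1.5:
        return "attack"                      # clearly dominant: drive the rival off
    if ratio >= 1.0:
        return "threat" if own_territory else "attack"
    if ratio >= 0.6:
        return "avoidance"                   # weaker: keep distance, avoid a fight
    return "submission"                      # much weaker: submissive display

for own, opp in [(30, 10), (12, 10), (7, 10), (3, 10)]:
    print(own, opp, agonistic_behavior(own, opp, own_territory=True))
```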

A Cooperation Strategy of Multi-agents in Real-Time Dynamic Environments (실시간 동적인 환경에서 다중 에이전트의 협동 기법)

  • Yoo, Han-Ha; Cho, Kyung-Eun; Um, Ky-Hyun
    • Journal of Korea Game Society / v.6 no.3 / pp.13-22 / 2006
  • Games played by teams of characters, such as sports, RTS, and RPG games, require advanced artificial intelligence for team management. Existing game AI gives an intelligent agent the autonomy to solve problems by itself, but lacks interaction and cooperation between agents. This paper presents a "Level Unified Approach Method" that combines effective role allocation with agent autonomy in a multi-agent system. The method allots sub-goals to agents using role information in order to accomplish a global goal, while each agent makes its own decisions and takes its own actions in a dynamic environment; the team's global goal is coordinated with the allocated roles at the tactics level. Each agent cooperates interactively by sharing state information with the others through a Databoard, and since each agent has planning capability, it takes appropriate actions to play its allocated role. Because this cooperation can cause collision problems between agents, the problem is controlled at the tactics level. Our experimental results show that the "Level Unified Approach Method" performs better than existing centralized or decentralized approaches.
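
An illustrative sketch of the shared-state idea only; the paper's "Level Unified Approach Method" itself is not reproduced. Agents post their state on a shared board (here a toy Databoard class) and act on their assigned role plus what they read there; roles, fields, and the collision rule are hypothetical.

```python
# Hypothetical sketch: role allocation plus cooperation through shared state.
class Databoard:
    """Shared state store all teammates can read and write."""
    def __init__(self):
        self.state = {}
    def post(self, agent_id, info):
        self.state[agent_id] = info
    def read(self):
        return dict(self.state)

class Agent:
    def __init__(self, agent_id, role, board):
        self.id, self.role, self.board = agent_id, role, board
    def act(self, position):
        team_view = self.board.read()
        # Simple collision control: a defender stays back if an attacker is already advancing.
        if self.role == "defender" and any(v.get("advancing") for v in team_view.values()):
            decision = "hold position"
        elif self.role == "attacker":
            decision = "advance"
        else:
            decision = "support"
        self.board.post(self.id, {"pos": position, "advancing": decision == "advance"})
        return decision

board = Databoard()
team = [Agent("a1", "attacker", board), Agent("a2", "defender", board), Agent("a3", "support", board)]
for agent, pos in zip(team, [(1, 1), (0, 3), (2, 2)]):
    print(agent.id, agent.role, "->", agent.act(pos))
```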

An Artificial Intelligence Game Agent Using CNN Based Records Learning and Reinforcement Learning (CNN 기반 기보학습 및 강화학습을 이용한 인공지능 게임 에이전트)

  • Jeon, Youngjin; Cho, Youngwan
    • Journal of IKEEE / v.23 no.4 / pp.1187-1194 / 2019
  • This paper proposes a CNN architecture as the value function network of an artificial intelligence Othello game agent, together with a learning scheme based on a reinforcement learning algorithm. We construct the value function network by using a CNN to learn the records of professional players' real games, and then enhance the network parameters through self-play with a reinforcement learning algorithm. The performance of the CNN value function network was compared with an existing ANN by having two agents, each using one of the networks, play games against each other. As a result, the winning rate of the CNN agent was 69.7% as black and 72.1% as white. In addition, after applying reinforcement learning, the agent's performance improved further, showing 100% and 78% winning rates, respectively, against the network-based agent trained without reinforcement learning.
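
A minimal sketch assuming PyTorch; the architecture, input channels, and training details are assumptions, not the paper's exact network. A small CNN maps an 8x8 Othello position to a value in [-1, 1], which could first be fit to game records and later refined by self-play.

```python
# Sketch: a CNN value network for 8x8 Othello positions plus one supervised step.
import torch
import torch.nn as nn

class OthelloValueNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),   # channels: own stones, opponent stones
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Tanh(),                             # predicted outcome in [-1, 1]
        )
    def forward(self, board):
        return self.head(self.features(board))

net = OthelloValueNet()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

# One supervised step on dummy "record" data: positions labelled with the final result.
positions = torch.randint(0, 2, (16, 2, 8, 8)).float()
outcomes = torch.randint(0, 2, (16, 1)).float() * 2 - 1              # -1 = loss, +1 = win
loss = nn.functional.mse_loss(net(positions), outcomes)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("loss:", float(loss))
```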