• Title/Summary/Keyword: Game Performance Evaluation

Performance Evaluation of IOCP Game Server and Game Variable Obfuscation Program (IOCP 게임 서버 및 게임 변수 난독화 프로그램 성능 평가)

  • Cha, Eun-Sang;Kim, Youngsik
    • Journal of Korea Game Society / v.19 no.6 / pp.71-82 / 2019
  • This paper analyzes the performance difference between Unreal Engine's built-in network solution and an IOCP server. To do this, we developed an IOCP server and a 3D game with Unreal Engine 4. We also considered a game-variable obfuscation program to prevent memory modification by code-tampering game hacking programs. To study the prevention of memory modification and to analyze the performance trade-offs, this paper used the SCUE4 Anti-Cheat Solution, an anti-cheat solution for Unreal Engine.

An Implementation of the Game Mechanics Simulator (게임메카닉스 시뮬레이터 구현)

  • Chang, Hee-Dong
    • The KIPS Transactions: Part B / v.12B no.5 s.101 / pp.595-606 / 2005
  • The scale of game development is rapidly increasing as blockbuster games costing 7-20 billion won frequently appear on the market. Game mechanics, which concentrate on the technological elements of a game, necessarily require quality management. In this paper, we propose a computer simulator for the quality evaluation of game mechanics that can analyze quality accurately and economically in the design phase. The proposed simulator provides Petri net[7,8] and Smalltalk[9] for convenient modeling. It gives a realistic evaluation, like a play test, because it uses realistic gameplay-environment data such as player action patterns, the game world map, and the item DB, whereas previous evaluation methods cannot consider the realistic gameplay environment and can only cover a limited scope of evaluation. To demonstrate the performance of the proposed simulator, we ran 80 simulations for the quality evaluation of the game mechanics of Dungeon & Dragon[13,14] in a given world map. The simulation results show that the proposed simulator can evaluate the faultlessness, optimization, and play balance of the game mechanics, and that it performs better than other evaluation methods.
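
The Petri-net modeling the simulator relies on can be illustrated with a minimal firing rule; the crafting example and every name below are hypothetical, not taken from the paper:

```python
# Minimal Petri-net firing sketch (illustrative; not the paper's simulator).
# Places hold token counts; a transition fires when every input place
# has at least as many tokens as its arc weight.

def enabled(marking, transition):
    """A transition is enabled if all of its input arcs are satisfied."""
    return all(marking.get(p, 0) >= w for p, w in transition["in"].items())

def fire(marking, transition):
    """Consume input tokens and produce output tokens."""
    m = dict(marking)
    for p, w in transition["in"].items():
        m[p] -= w
    for p, w in transition["out"].items():
        m[p] = m.get(p, 0) + w
    return m

# Toy game-mechanics model: a player spends gold to craft an item.
t_craft = {"in": {"gold": 3}, "out": {"item": 1}}
m0 = {"gold": 7, "item": 0}
m1 = fire(m0, t_craft) if enabled(m0, t_craft) else m0
```

Chaining `enabled`/`fire` over many transitions and feeding in recorded player action patterns is, at a high level, how such a simulator steps through a gameplay model.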

Performance Evaluation of Synchronization Algorithms for Multi-play Real-Time Strategy Simulation Games (멀티플레이 실시간 전략 시뮬레이션 게임을 위한 동기화 알고리즘들의 성능 평가)

  • Min Seok Kang;Kyung Sik Kim;Sam Kweon Oh
    • Proceedings of the Korea Information Processing Society Conference / 2008.11a / pp.1280-1283 / 2008
  • The network performance of MOGs (Multiplayer Online Games) can be measured by the amount of network load and the response time to user inputs. This paper introduces a frame-locking algorithm and a game-turn algorithm that have been used for game synchronization in RTS (Real-Time Strategy Simulation) games, a kind of MOG, and reports the results of a performance evaluation of the two algorithms. In addition, it introduces a server architecture for MOGs in which synchronization algorithms can be replaced easily, enabling efficient performance evaluation.
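
The game-turn idea can be sketched as a lockstep session that commits a turn only once every player's input for that turn has arrived; this is an illustrative sketch, not the paper's implementation:

```python
# Lockstep "game turn" sketch (illustrative): the simulation advances
# past turn t only after inputs from every player for turn t arrive.

class LockstepSession:
    def __init__(self, players):
        self.players = set(players)
        self.turn = 0
        self.pending = {}   # turn -> {player: command}
        self.history = []   # committed (turn, inputs) pairs

    def submit(self, player, turn, command):
        self.pending.setdefault(turn, {})[player] = command
        self._advance()

    def _advance(self):
        # Commit turns in order while all inputs are present.
        while set(self.pending.get(self.turn, {})) == self.players:
            self.history.append((self.turn, self.pending.pop(self.turn)))
            self.turn += 1

s = LockstepSession(["A", "B"])
s.submit("A", 0, "move")
s.submit("B", 0, "attack")   # turn 0 commits once both inputs arrive
```

The trade-off the abstract alludes to falls out of this structure: network load stays low (only inputs are exchanged), but response time is bounded by the slowest player's input for the current turn.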

The Development of Two-Person Janggi Board Game Using Backpropagation Neural Network and Reinforcement Learning (역전파 신경회로망과 강화학습을 이용한 2인용 장기보드게임 개발)

  • Park, In-Kue;Jung, Kwang-Ho
    • Journal of Korea Game Society / v.1 no.1 / pp.61-67 / 2001
  • This paper describes a program that learns good strategies for two-person, deterministic, zero-sum board games of perfect information. The program learns by simply playing the game against either a human or a computer opponent. The results of the program's learning over many games are reported. The program consists of a search kernel and a move-generator module. Only the move generator is modified to reflect the rules of the game to be played. The kernel uses a temporal-difference procedure combined with a backpropagation neural network to learn good evaluation functions for the game being played. Central to the performance of the program is the search procedure: the capture tree search used in most successful janggi-playing programs. It is based on the idea of using search to correct errors in the evaluations of positions. This procedure is described, analyzed, tested, and implemented in the game-learning program. Both the test results and the performance of the program confirm the results of the analysis, which indicate that search improves game-playing performance for sufficiently accurate evaluation functions.
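
The temporal-difference procedure can be sketched with a linear evaluator standing in for the paper's backpropagation network; all feature values and the learning rate below are illustrative:

```python
# TD(0) sketch for learning a position evaluation function
# (a linear model stands in for the paper's backprop network).

def evaluate(weights, features):
    """Linear position evaluation: V(s) = w . f(s)."""
    return sum(w * f for w, f in zip(weights, features))

def td_update(weights, features, target, alpha=0.1):
    """Move V(s) toward the target (V of the next state, or the result)."""
    error = target - evaluate(weights, features)
    return [w + alpha * error * f for w, f in zip(weights, features)]

# One update: the position s led to a win, so its target value is 1.0.
s = [1.0, 0.5]                   # toy feature vector for a board position
w = td_update([0.0, 0.0], s, target=1.0)
```

In self-play, the target for each non-terminal position is the evaluation of the successor position, so evaluation errors propagate backward through the game.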

A case study of model for playability and effectiveness analysis of serious games (기능성게임의 게임성과 효과성 분석 모델 사례 연구)

  • Yoon, Taebok;Kim, Min Chul
    • Journal of Korea Game Society / v.18 no.6 / pp.111-120 / 2018
  • Serious games are developed not only for fun but also for special purposes in various fields such as education, medicine, public relations, and management. Such a serious game should achieve its specific purpose while offering the playability of a general game. However, it is difficult to achieve both playability and effectiveness. This study examines the playability and effectiveness of serious games and suggests a method for identifying models of superior serious games. In the experiment, we examined several commercial serious games and general games using the proposed method and confirmed meaningful results.

Comparison of Reinforcement Learning Activation Functions to Improve the Performance of the Racing Game Learning Agent

  • Lee, Dongcheul
    • Journal of Information Processing Systems / v.16 no.5 / pp.1074-1082 / 2020
  • Recently, research has been actively conducted to create artificial intelligence agents that learn games through reinforcement learning. Several factors determine performance when an agent learns a game, and the choice of activation function is one of them. This paper compares and evaluates which activation function yields the best results when the agent learns through reinforcement learning in a 2D racing game environment. We built the agent using a reinforcement learning algorithm and a neural network, and evaluated the activation functions by swapping them into the network one at a time. We measured the reward, the output of the advantage function, and the output of the loss function during training and testing. As a result of the performance evaluation, we identified the best activation function for the agent; the difference between the best and the worst was 35.4%.
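
Swapping activation functions into an otherwise fixed network can be sketched as below; this is a toy forward pass with made-up weights, not the paper's reinforcement learning setup:

```python
# Sketch: compare candidate activation functions on the same tiny network.
import math

ACTIVATIONS = {
    "relu": lambda x: max(0.0, x),
    "tanh": math.tanh,
    "sigmoid": lambda x: 1.0 / (1.0 + math.exp(-x)),
    "leaky_relu": lambda x: x if x > 0 else 0.01 * x,
}

def forward(x, w_hidden, w_out, act):
    """One hidden layer; only the activation function varies."""
    hidden = [act(x * w) for w in w_hidden]
    return sum(h * w for h, w in zip(hidden, w_out))

# Same input and weights for every candidate, so differences in the
# output are attributable to the activation function alone.
outputs = {name: forward(0.5, [1.0, -2.0], [0.3, 0.7], f)
           for name, f in ACTIVATIONS.items()}
```

In the paper's setting the comparison is run through full training episodes rather than a single pass, but the controlled-variable principle is the same.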

Evaluation of weights to get the best move in the Gonu game (고누게임에서 최선의 수를 구하기 위한 가중치의 평가)

  • Shin, Yong-Woo
    • Journal of Korea Game Society / v.18 no.5 / pp.59-66 / 2018
  • In this paper, the traditional game Gonu is implemented and evaluated experimentally. The Minimax algorithm was applied as the technique for implementing the Gonu game, and we proposed an evaluation function for use within it. After implementing the game, we analyzed the efficiency of alpha-beta pruning for improving performance. Weights affecting the win or loss of the game were analyzed to find optimal values. For the weight analysis, matches between a human and the computer were performed, as well as computer-versus-computer experiments. As a result, we propose weight values for optimal attack and defense.
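
Minimax with alpha-beta pruning follows a standard scheme; this generic sketch runs over a toy game tree, not the paper's Gonu implementation or its weighted evaluation function:

```python
# Generic minimax with alpha-beta pruning over an explicit game tree.

def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, evaluate))
            alpha = max(alpha, best)
            if beta <= alpha:       # prune: opponent will avoid this branch
                break
        return best
    best = float("inf")
    for child in kids:
        best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                   True, children, evaluate))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best

# Toy tree: leaves carry their evaluation scores directly.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
best = alphabeta("root", 2, float("-inf"), float("inf"), True,
                 lambda n: tree.get(n, []), lambda n: scores.get(n, 0))
```

In the paper's setup, `evaluate` would be the proposed weighted evaluation function; the weight analysis then amounts to tuning its coefficients by match results.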

A DEA-Based Portfolio Model for Performance Management of Online Games (DEA 기반 온라인 게임 성과 관리 포트폴리오 모형)

  • Chun, Hoon;Lee, Hakyeon
    • Journal of Korean Institute of Industrial Engineers / v.39 no.4 / pp.260-270 / 2013
  • This paper proposes a strategic portfolio model for managing the performance of online games. The portfolio matrix is composed of two dimensions: financial performance and non-financial performance. Financial performance is measured by the conventional measure, average revenue per user (ARPU). For non-financial performance, five key performance indicators (KPIs) that have been widely used in the online game industry are utilized: RU (Registered Users), VU (Visiting Users), TS (Time Spent), ACU (Average Concurrent Users), and MCU (Maximum Concurrent Users). Data envelopment analysis (DEA) is then employed to produce a single performance measure aggregating the five KPIs. DEA is a linear programming model for measuring the relative efficiency of decision-making units (DMUs) with multiple inputs and outputs. This study employs DEA as a tool for multiple-criteria decision making (MCDM), in particular the pure-output model without inputs. Combining the two types of performance produces the online game portfolio matrix with four quadrants: Dark Horse, Stop Loss, Jack Pot, and Luxury Goods. A case study of 39 online games provided by company 'N' is presented. The proposed portfolio model is expected to be useful for the strategic decision making of online game companies.
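
The final portfolio step can be sketched as a quadrant classification. The DEA efficiency scores are assumed to be precomputed, the cutoffs are illustrative, and the mapping of the four quadrant names to high/low combinations is an assumption, not taken from the paper:

```python
# Sketch of the portfolio-matrix step (illustrative). Each game gets a
# financial score (ARPU) and a non-financial score (assumed here to be
# a precomputed DEA efficiency in [0, 1]); cutoffs split four quadrants.
# NOTE: the name-to-quadrant mapping below is a guess, not the paper's.

QUADRANTS = {
    (True, True): "Jack Pot",       # high ARPU, high efficiency
    (True, False): "Luxury Goods",  # high ARPU, low efficiency
    (False, True): "Dark Horse",    # low ARPU, high efficiency
    (False, False): "Stop Loss",    # low ARPU, low efficiency
}

def classify(arpu, dea_score, arpu_cut, dea_cut=0.5):
    return QUADRANTS[(arpu >= arpu_cut, dea_score >= dea_cut)]

# Hypothetical games: (ARPU, DEA efficiency).
games = {"G1": (12.0, 0.9), "G2": (3.0, 0.8), "G3": (11.0, 0.2)}
cut = sum(a for a, _ in games.values()) / len(games)   # mean ARPU as cutoff
labels = {g: classify(a, d, cut) for g, (a, d) in games.items()}
```

The DEA stage itself (maximizing a weighted sum of the five KPIs per game, subject to normalization constraints over all games) would require a linear programming solver and is omitted here.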

Rendering Performance Evaluation of 3D Games with Interior Mapping (Interior Mapping이 적용된 3D 게임의 렌더링 성능 평가)

  • Lee, Jae-Won;Kim, Youngsik
    • Journal of Korea Game Society / v.19 no.6 / pp.49-60 / 2019
  • Interior Mapping has been used to reduce graphics resources. In this paper, the rendering speed (FPS), the number of polygons, the shader complexity, and the resource sizes of Interior Mapping were compared to those of actual modeling using Unreal Engine 4, in order to examine the performance of 3D games when the technique is adopted. In addition, for efficient application, the difference in performance according to the resolution and detail of the cube map texture was verified.

An Implementation of Othello Game Player Using ANN based Records Learning and Minimax Search Algorithm (ANN 기반 기보학습 및 Minimax 탐색 알고리즘을 이용한 오델로 게임 플레이어의 구현)

  • Jeon, Youngjin;Cho, Youngwan
    • The Transactions of The Korean Institute of Electrical Engineers / v.67 no.12 / pp.1657-1664 / 2018
  • This paper proposes a decision-making scheme for choosing the best move at each state of a game, in order to implement an artificial intelligence Othello player. The proposed scheme predicts the various possible states reachable from the current state, evaluates the likelihood of winning or losing at those states, and searches for the best move based on the evaluation. We generate learning data by decomposing the records of professional players' real games into states and by matching and accumulating winning points to those states; using an Artificial Neural Network trained on this data, we evaluate the value of each predicted state and apply Minimax search to determine the best move. We implemented an artificial intelligence player for the Othello game by applying the proposed scheme and evaluated its performance through games against three different artificial intelligence players.
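
The record-decomposition step can be sketched as replaying each recorded game and crediting every intermediate state with the game's final result; the toy "game" and the averaging rule below are illustrative assumptions, not the paper's exact scheme:

```python
# Sketch: turn game records into (state -> value) training targets by
# accumulating each game's final result onto every state it visited.

from collections import defaultdict

def accumulate(records, apply_move, initial_state):
    """records: list of (moves, result), result +1 for a win, -1 for a loss."""
    points = defaultdict(float)
    counts = defaultdict(int)
    for moves, result in records:
        state = initial_state
        for move in moves:
            state = apply_move(state, move)
            points[state] += result
            counts[state] += 1
    # Average accumulated result per state: the ANN's training target.
    return {s: points[s] / counts[s] for s in points}

# Toy "game": a state is simply the string of moves played so far.
apply_move = lambda s, m: s + m
targets = accumulate([("ab", 1), ("ac", -1), ("ab", 1)], apply_move, "")
```

A network trained on such targets then serves as the evaluation function inside the Minimax search described in the abstract.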