http://dx.doi.org/10.7583/JKGS.2016.16.2.27

Stealthy Behavior Simulations Based on Cognitive Data  

Choi, Taeyeong (School of Computing, Informatics, Decision Systems Engineering, Arizona State University)
Na, Hyeon-Suk (School of Computer Science and Engineering, Soongsil University)
Abstract
Predicting stealthy behaviors plays an important role in designing stealth games. It is, however, difficult to automate this task because human players interact with dynamic environments in real time. In this paper, we present a reinforcement learning (RL) method for simulating stealthy movements in dynamic environments, in which an integrated model of Q-learning with an Artificial Neural Network (ANN) is used as an action classifier. Experimental results show that our simulation agent responds sensitively to dynamic situations and is thus useful for game level designers in determining various game parameters.
Keywords
Reinforcement learning; Artificial neural network; Game level design; Game simulation
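
The abstract names Q-learning combined with an artificial neural network as the agent's action classifier. The Python sketch below is a minimal, hypothetical illustration of that general technique, not the authors' implementation: the state features, action set, network size, and learning constants are all assumptions chosen for readability.

# Minimal sketch (not the authors' code): one-step Q-learning with a small
# neural network as the action-value approximator. State features, actions,
# and hyperparameters are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 8     # e.g. distances to guards, cover, goal (assumed)
N_ACTIONS = 4      # e.g. move N/E/S/W (assumed)
HIDDEN = 16        # hidden-layer width (assumed)
ALPHA = 0.01       # learning rate
GAMMA = 0.95       # discount factor
EPSILON = 0.1      # exploration rate

# One-hidden-layer network: state features -> one Q-value per action.
W1 = rng.normal(0, 0.1, (N_FEATURES, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, N_ACTIONS))
b2 = np.zeros(N_ACTIONS)

def q_values(s):
    # Forward pass: returns hidden activations and Q estimates.
    h = np.tanh(s @ W1 + b1)
    return h, h @ W2 + b2

def choose_action(s):
    # Epsilon-greedy selection over the network's Q estimates.
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    _, q = q_values(s)
    return int(np.argmax(q))

def update(s, a, r, s_next, done):
    # One-step Q-learning target, backpropagated through the network
    # for the chosen action only.
    h, q = q_values(s)
    _, q_next = q_values(s_next)
    target = r if done else r + GAMMA * np.max(q_next)
    td_error = target - q[a]

    # Gradient of 0.5 * td_error^2 w.r.t. the weights (chain rule).
    dq = np.zeros(N_ACTIONS)
    dq[a] = -td_error
    dW2 = np.outer(h, dq)
    db2 = dq
    dh = (W2 @ dq) * (1 - h ** 2)   # backprop through tanh
    dW1 = np.outer(s, dh)
    db1 = dh

    W2 -= ALPHA * dW2; b2 -= ALPHA * db2
    W1 -= ALPHA * dW1; b1 -= ALPHA * db1

In use, a simulated player would call choose_action(state) on every game tick and call update(...) once the resulting reward and next state are observed; the paper's agent additionally reacts to guards and other dynamic elements of the level, which this sketch abstracts into the assumed feature vector.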