
Making Levels More Challenging with a Cooperative Strategy of Ghosts in Pac-Man

  • Choi, Taeyeong (School of Computer Science and Engineering, Soongsil University) ;
  • Na, Hyeon-Suk (School of Computer Science and Engineering, Soongsil University)
  • Received : 2015.07.28
  • Reviewed : 2015.09.10
  • Published : 2015.10.20

Abstract


The artificial intelligence (AI) of non-player characters (NPCs), especially of opponents, is a key element for adjusting difficulty in game design. Smart opponents not only make games more challenging but also offer players diverse experiences, even in the same game environment. Since game users interact with more than one opponent in most of today's games, controlling the collaboration of opponent characters is more important than ever before. In this paper, we introduce a cooperative strategy based on the A* algorithm for the enemies' AI in the Pac-Man game. A survey of 17 human testers shows that the levels with our collaborative opponents are more difficult, yet more interesting, than those with either the original Pac-Man ghost personalities or non-cooperative greedy opponents.
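The paper's own implementation is not reproduced here; as a minimal sketch of the idea, the code below runs grid-based A* search (Manhattan-distance heuristic) and pairs it with a hypothetical `cooperative_targets` helper that assigns each ghost a distinct cell around Pac-Man, so the ghosts surround the player instead of all following the same shortest path. All names and the maze representation are illustrative assumptions, not the authors' code.

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid of strings; '#' cells are walls.
    Returns the shortest path as a list of (row, col), or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), start)]          # entries are (f = g + h, cell)
    parent = {start: None}
    g_best = {start: 0}                      # best known cost-so-far per cell
    closed = set()
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur in closed:
            continue                         # stale heap entry
        closed.add(cur)
        if cur == goal:                      # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                ng = g_best[cur] + 1
                if ng < g_best.get(nxt, float('inf')):
                    g_best[nxt] = ng
                    parent[nxt] = cur
                    heapq.heappush(open_heap, (ng + h(nxt), nxt))
    return None

def cooperative_targets(pacman, n_ghosts):
    """Give each ghost a distinct target cell on or next to Pac-Man,
    so the ghosts close in from different directions. A real game
    would also clamp these targets to open cells inside the maze."""
    r, c = pacman
    offsets = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]
    return [(r + dr, c + dc) for dr, dc in offsets[:n_ghosts]]
```

Each ghost would then run `a_star` from its own position toward its assigned target every turn; the non-cooperative baseline corresponds to every ghost targeting Pac-Man's cell directly.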

Keywords
