DeepPurple : Chess Engine using Deep Learning

  • Kim Sung-Hwan (Department of Computer Engineering, Hansung University)
  • Kim Young-Woong (Division of Computer Engineering, Hansung University)
  • Received : 2017.08.16
  • Accepted : 2017.10.13
  • Published : 2017.10.31

Abstract

Since 1997, when IBM's Deep Blue defeated the world chess champion Garry Kasparov, and more recently, when Google's AlphaGo won all three games against Ke Jie, then ranked first among human Baduk (Go) players worldwide, interest in deep learning has increased rapidly. DeepPurple, proposed in this paper, is an AI chess engine based on deep learning. The DeepPurple chess engine consists largely of Monte Carlo Tree Search and a policy network and value network, both implemented as convolutional neural networks. The policy network predicts the next move, the value network evaluates the given position, and Monte Carlo Tree Search is used to select the most beneficial next move. The results show that the policy network reached an accuracy of 43% with a loss-function cost of 1.9, while the value network reached an accuracy of 50% with a loss-function cost oscillating around 1.
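The interplay the abstract describes (a policy network supplying move priors, a value network scoring positions, and Monte Carlo Tree Search choosing among moves) can be illustrated with a minimal sketch of the standard PUCT selection rule used in MCTS. The `toy_policy` and `toy_value` functions below are illustrative placeholders, not the paper's trained networks, and the `c_puct` constant is an assumed exploration weight:

```python
import math

def toy_policy(position, moves):
    # Placeholder for the policy network: assigns a prior probability to
    # each candidate move (uniform here; a CNN would produce these).
    return {m: 1.0 / len(moves) for m in moves}

def toy_value(position):
    # Placeholder for the value network: scores a position in [-1, 1].
    return 0.0

def puct_select(position, moves, visit_counts, value_sums, c_puct=1.4):
    """Pick the move maximizing the PUCT score used in MCTS selection:
    Q(m) + c_puct * P(m) * sqrt(N_total) / (1 + N(m)),
    where Q is the mean value of simulations through move m,
    P is the policy prior, and N counts visits."""
    priors = toy_policy(position, moves)
    total = sum(visit_counts.values()) or 1
    def score(m):
        n = visit_counts.get(m, 0)
        q = value_sums.get(m, 0.0) / n if n else 0.0
        return q + c_puct * priors[m] * math.sqrt(total) / (1 + n)
    return max(moves, key=score)
```

With equal mean values, the rule prefers the less-visited move (the exploration term shrinks as a move's visit count grows), which is how the policy prior steers the search without fully dictating it.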
